WO2016202890A1 - Media streaming - Google Patents
Media streaming
Info
- Publication number
- WO2016202890A1 (PCT/EP2016/063806)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- captured data
- metadata
- server
- streams
- stream
- Prior art date
Links
- 238000000034 method Methods 0.000 claims description 185
- 238000012913 prioritisation Methods 0.000 claims description 43
- 230000005540 biological transmission Effects 0.000 claims description 9
- 230000001360 synchronised effect Effects 0.000 claims description 7
- 230000001419 dependent effect Effects 0.000 claims description 5
- 230000008569 process Effects 0.000 description 53
- 238000004458 analytical method Methods 0.000 description 29
- 238000012545 processing Methods 0.000 description 22
- 230000003190 augmentative effect Effects 0.000 description 18
- 238000013459 approach Methods 0.000 description 14
- 230000008859 change Effects 0.000 description 11
- 230000006854 communication Effects 0.000 description 11
- 238000004891 communication Methods 0.000 description 11
- 239000000463 material Substances 0.000 description 11
- 238000013519 translation Methods 0.000 description 11
- 230000014616 translation Effects 0.000 description 11
- 230000006870 function Effects 0.000 description 9
- 238000012384 transportation and delivery Methods 0.000 description 9
- 239000003550 marker Substances 0.000 description 8
- 238000012552 review Methods 0.000 description 8
- 230000009471 action Effects 0.000 description 6
- 230000003416 augmentation Effects 0.000 description 6
- 238000006243 chemical reaction Methods 0.000 description 6
- 230000003993 interaction Effects 0.000 description 6
- 230000002452 interceptive effect Effects 0.000 description 6
- 239000000243 solution Substances 0.000 description 6
- 238000013518 transcription Methods 0.000 description 6
- 230000035897 transcription Effects 0.000 description 6
- 230000006399 behavior Effects 0.000 description 5
- 230000007175 bidirectional communication Effects 0.000 description 5
- 239000000872 buffer Substances 0.000 description 5
- 235000019640 taste Nutrition 0.000 description 5
- 230000009286 beneficial effect Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 230000006978 adaptation Effects 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 3
- 230000000052 comparative effect Effects 0.000 description 3
- 238000001914 filtration Methods 0.000 description 3
- 230000002401 inhibitory effect Effects 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 230000004044 response Effects 0.000 description 3
- 230000000153 supplemental effect Effects 0.000 description 3
- 230000002123 temporal effect Effects 0.000 description 3
- 235000013405 beer Nutrition 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000011156 evaluation Methods 0.000 description 2
- 230000036651 mood Effects 0.000 description 2
- 230000035755 proliferation Effects 0.000 description 2
- 230000000717 retained effect Effects 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 230000000699 topical effect Effects 0.000 description 2
- 241000282326 Felis catus Species 0.000 description 1
- 241001501942 Suricata suricatta Species 0.000 description 1
- 238000013475 authorization Methods 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000001427 coherent effect Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 238000013481 data capture Methods 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 230000008451 emotion Effects 0.000 description 1
- 230000008921 facial expression Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000000670 limiting effect Effects 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 235000013372 meat Nutrition 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 238000010422 painting Methods 0.000 description 1
- 230000000135 prohibitive effect Effects 0.000 description 1
- 238000010223 real-time analysis Methods 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 238000012163 sequencing technique Methods 0.000 description 1
- 230000005236 sound signal Effects 0.000 description 1
- 238000012358 sourcing Methods 0.000 description 1
- 238000010183 spectrum analysis Methods 0.000 description 1
- 230000002269 spontaneous effect Effects 0.000 description 1
- 239000012086 standard solution Substances 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
- 230000002747 voluntary effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43072—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
- H04N21/26208—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
- H04N21/26233—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints involving content or additional data duration or size, e.g. length of a movie, size of an executable file
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/35—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
- H04H60/46—Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for recognising users' preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/70—Media network packetisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/21805—Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23106—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/232—Content retrieval operation locally within server, e.g. reading video streams from disk arrays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234336—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/23439—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2353—Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2355—Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
- H04N21/2358—Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages for generating different versions, e.g. for different recipient devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/24—Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
- H04N21/2402—Monitoring of the downstream path of the transmission network, e.g. bandwidth available
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/24—Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
- H04N21/2407—Monitoring of transmitted content, e.g. distribution time, number of downloads
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/242—Synchronization processes, e.g. processing of PCR [Program Clock References]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/254—Management at additional data server, e.g. shopping server, rights management server
- H04N21/2541—Rights Management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
- H04N21/26208—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists the scheduling operation being performed under constraints
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/26613—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel for generating or managing keys in general
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2665—Gathering content from different sources, e.g. Internet and satellite
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2668—Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/414—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
- H04N21/41407—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4305—Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4331—Caching operations, e.g. of an advertisement for later insertion during playback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
- H04N21/4334—Recording operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440236—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440263—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/44029—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display for generating different versions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44204—Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/443—OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
- H04N21/4431—OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB characterized by the use of Application Program Interface [API] libraries
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4828—End-user interface for program selection for searching program descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/633—Control signals issued by server directed to the network components or client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/633—Control signals issued by server directed to the network components or client
- H04N21/6332—Control signals issued by server directed to the network components or client directed to client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8166—Monomedia components thereof involving executable data, e.g. software
- H04N21/8186—Monomedia components thereof involving executable data, e.g. software specially adapted to be executed by a peripheral of the client device, e.g. by a reprogrammable remote control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/835—Generation of protective data, e.g. certificates
- H04N21/8358—Generation of protective data, e.g. certificates involving watermark
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Definitions
- the present invention relates to the provision and/or the use of metadata in conjunction with associated content.
- the metadata is data describing or defining attributes of the data.
- a central server provides services for data received from multiple sources
- a system in which a central server receives data streamed from many devices and makes multiple streams available for consumption by other devices is an example of a scenario in which identifying and viewing the content relevant to a particular consuming device can be difficult.
- a central server provides captured content to be viewed
- where the central server receives multiple data streams, it can be difficult to apply any editing or moderation to the content of the received streams before those streams are then viewed live by other devices.
- An example implementation of a system with a central server concerns the capture of images and the generation of associated video streams in a video streaming environment comprising a plurality of capture devices (such as mobile phones) connected to a streaming server via one or more networks such as the Internet.
- the streaming server may generate one or more further video streams for viewing by one or more viewing devices.
- a system for providing streaming services comprising: a plurality of capture devices, each for capturing data and providing a captured data stream; and a server, for receiving the plurality of captured data streams; wherein each capture device is configured to generate metadata for the captured data, and transmit said metadata to the server.
- the metadata may be transmitted to the server in the captured data stream.
- the metadata may be transmitted to the server on one or more metadata streams additional to the captured data stream.
- the metadata may be synchronised with the associated captured data for transmission from the capture device.
- the metadata and the captured data may be associated with a common time line, wherein the server determines the time line and synchronises the captured data and the metadata based on the time line.
- the server may determine the timeline from the captured data stream.
- the server may determine the time line from the one or more metadata streams.
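A minimal sketch of how a server might align captured data and metadata against a common time line, as described above. This is illustrative only; the class and field names (StreamTimeline, TimedItem) and the one-second matching window are assumptions, not taken from the application.

```python
from dataclasses import dataclass, field
from bisect import insort
from typing import Any

@dataclass(order=True)
class TimedItem:
    timestamp: float                                   # position on the common time line (seconds)
    payload: Any = field(compare=False)
    kind: str = field(compare=False, default="data")   # "data" or "metadata"

class StreamTimeline:
    """Common time line for one captured data stream and its metadata."""
    def __init__(self, origin: float):
        self.origin = origin              # time-line zero, e.g. timestamp of the first captured frame
        self.items: list[TimedItem] = []

    def add(self, device_timestamp: float, payload: Any, kind: str = "data") -> None:
        # Normalise the device timestamp onto the common time line and keep items ordered.
        insort(self.items, TimedItem(device_timestamp - self.origin, payload, kind))

    def metadata_at(self, t: float, window: float = 1.0) -> list[Any]:
        # Metadata items whose time-line position falls within +/- window seconds of t.
        return [i.payload for i in self.items
                if i.kind == "metadata" and abs(i.timestamp - t) <= window]

# Usage: the server derives the origin from the captured data stream (first frame),
# then synchronises metadata arriving on a separate stream against the same time line.
timeline = StreamTimeline(origin=1_434_000_000.0)
timeline.add(1_434_000_000.0, b"<frame 0>", kind="data")
timeline.add(1_434_000_000.5, {"geo": (51.5, -0.1)}, kind="metadata")
print(timeline.metadata_at(0.4))   # [{'geo': (51.5, -0.1)}]
```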
- the capture devices may communicate with a further device or server in order to obtain additional data for the captured data stream.
- the capture device may communicate with a further device or server to transcribe the captured data stream.
- the further device or server may provide a geo-location for the capture device.
- the system may further comprise a viewing device, the server outputting a viewing stream to the viewing device.
- One or more captured data streams may be routed to the viewing device in dependence on the metadata associated with the captured data streams.
- the viewing device may be associated with a moderator of content or an editor of content. In this aspect there may be provided corresponding methods and processes.
- Video streams may be grouped which have a common location or a common time or matching or overlapping time lines.
- the location and time may be provided as meta-data at the capture device.
- Other tags can be added as metadata, e.g. tags identifying somebody who has requested to be filmed, so that any stream identifying them can be grouped.
- the existing video stream may be augmented or one or more additional augmented video streams may be created.
- the augmented video stream, or the video stream and the one or more augmented video streams may be transmitted to the server. Where a video stream is augmented, and an additional augmented video stream is created, the metadata associated with the video stream is also modified, to reflect the augmentation.
- the augmentation of the video stream may be achieved by transmitting the video stream to a processing server which processes the video stream.
- the processing may comprise speech-to-text conversion, and may comprise providing the video stream to a speech-to-text recognition module.
- the processing server transmits the results of its processing back to the capture device.
- An augmented video stream is then available from the capture device, and the metadata for the video stream is suitably modified to reflect the speech-to-text conversion.
- the metadata derived from the stream is sent back to the capture device.
- the processing may comprise providing video or audio of the video stream to a cloud-based speech-to-text recognition service.
- An augmented video stream may be a video stream with subtitles.
- the metadata that is generated for the augmented video stream may be transmitted from the capture device to the streaming server together with the original video stream.
- the metadata may be provided in a metadata stream separate to the video stream.
- the method may comprise providing the data stream and/or the augmented video stream based on the metadata to one or more viewing devices.
- the video stream and the augmented video stream may be provided to different viewing devices.
- the augmented video stream may comprise the video stream with additional data.
- the additional data may be meta-data.
- the meta-data may be used for grouping the video stream.
- the meta-data may be used for recommendations, invitations, searching, and searching within content.
- the creation of metadata associated with the additional or augmented content helps improve the discoverability of the content at a server.
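The speech-to-text augmentation described above might, for example, be sketched as follows. The transcribe_audio function is a hypothetical placeholder for whichever speech-to-text service is used, and the metadata field names are likewise assumptions; the point is that the augmentation also modifies the stream's metadata so that the augmented content remains discoverable at the server.

```python
def transcribe_audio(audio_chunk: bytes) -> list[dict]:
    # Placeholder for a cloud speech-to-text call; returns timed text segments.
    return [{"start": 0.0, "end": 2.1, "text": "example subtitle"}]

def augment_with_subtitles(stream: dict, audio_chunk: bytes) -> dict:
    """Return a copy of the stream description with subtitle metadata attached."""
    segments = transcribe_audio(audio_chunk)
    augmented = dict(stream)
    augmented["subtitles"] = segments
    # The stream's metadata is modified to reflect the augmentation, so the
    # server can discover that a subtitled version of the stream exists.
    metadata = dict(stream.get("metadata", {}))
    metadata["augmentations"] = metadata.get("augmentations", []) + ["speech_to_text"]
    augmented["metadata"] = metadata
    return augmented

original = {"id": "stream-42", "metadata": {"location": (51.5, -0.1)}}
print(augment_with_subtitles(original, b"...audio...")["metadata"])
```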
- a system for providing streaming services comprising: a plurality of capture devices, each for capturing data and providing a captured data stream; and a server, for receiving the plurality of captured data streams and outputting at least one output stream, wherein the server is configured to dynamically group the captured data streams in dependence on metadata associated with the captured data streams.
- the metadata for a captured data stream may be received from the associated capture device.
- the metadata for a captured data stream may be determined at the server.
- the server may be configured to dynamically group the streams in dependence on a current definition of a group.
- the server may be configured to dynamically group the streams in dependence on the current metadata of the data streams.
- the captured data streams allocated to a group may be output to an editing device.
- the editing device may control the output data stream from the server for the group.
- the captured data streams may be prioritised in dependence on the metadata associated with the captured data streams of the group.
- the captured data streams within each group may be prioritised.
- Control data to be applied to a captured data stream may be provided from an external source.
- the control data may comprise a set of rules.
- the set of rules may define a group.
- the set of rules may be used to allocate data to a group.
- the identification of the groups may be dynamic.
- Firstly, data streams may be allocated to groups on a dynamic basis in dependence on a current definition of a group. If the definition of a group changes, then the allocation of current data streams to that group may change; the definition of a group may change because the rules defining the group may change. Secondly, the allocation of a data stream to a group may dynamically change. This may be because the metadata associated with the data stream indicates that the data stream should no longer be allocated to a particular group, for example.
- the dynamic behaviour may or may not be applicable to all the aspects and embodiments herein.
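One possible sketch of dynamic grouping driven by rules over stream metadata: re-running the allocation after a rule or a stream's metadata changes re-allocates the streams, which is the dynamic behaviour described above. The group names, rule predicates and metadata keys below are illustrative assumptions.

```python
from typing import Callable

# A group is defined by a name and a rule: a predicate over a stream's metadata.
GroupRule = Callable[[dict], bool]

group_rules: dict[str, GroupRule] = {
    "london_football": lambda md: md.get("event") == "football" and md.get("city") == "London",
    "recent_uploads":  lambda md: md.get("age_seconds", 1e9) < 300,
}

def allocate_to_groups(streams: dict[str, dict]) -> dict[str, list[str]]:
    """Re-evaluate every stream against the *current* rules and metadata.
    Calling this again after rules or metadata change re-allocates dynamically."""
    groups: dict[str, list[str]] = {name: [] for name in group_rules}
    for stream_id, metadata in streams.items():
        for name, rule in group_rules.items():
            if rule(metadata):
                groups[name].append(stream_id)
    return groups

streams = {
    "s1": {"event": "football", "city": "London", "age_seconds": 120},
    "s2": {"event": "concert",  "city": "Paris",  "age_seconds": 60},
}
print(allocate_to_groups(streams))
# Changing a rule (or a stream's metadata) and calling the function again
# changes the allocation, which is the dynamic behaviour described above.
```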
- a method comprising: receiving a plurality of video streams from a plurality of capture devices; and grouping two or more video streams.
- the grouping of video streams may be dependent upon meta-data associated with a video stream.
- the video streams may be received from a plurality of independent capture devices.
- the method may comprise identifying a location associated with the capture device providing the received video stream based on the location provided in the metadata, and grouping video streams according to location.
- the method may comprise identifying a time associated with the video stream, and grouping video streams associated with a predetermined time.
- the method may comprise identifying a direction-of-view of a capture device providing a video stream, and grouping video streams in dependence on the direction of view.
- the method may comprise grouping video streams in dependence on common characteristics of video streams, which may be the metadata of the streams.
- the method may comprise receiving one or more characteristics, and grouping video streams associated with those characteristics.
- the characteristic may be received from a user.
- the characteristic may be received from a user of a capture device.
- the characteristic may be received from a user associated with a device which is being used to view a video stream.
- the characteristics may be received from the respective devices, i.e. an automated process without any active input from the users.
- Grouped video streams may be provided to viewing devices associated with the group.
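Grouping by common location and overlapping time lines could, for instance, be approximated as below; the one-kilometre threshold and the metadata keys are assumptions made purely for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def overlaps(t1: tuple[float, float], t2: tuple[float, float]) -> bool:
    """True if two (start, end) time lines overlap."""
    return t1[0] <= t2[1] and t2[0] <= t1[1]

def same_group(md_a: dict, md_b: dict, max_km: float = 1.0) -> bool:
    """Two streams belong together if captured nearby and at overlapping times."""
    return (distance_km(md_a["location"], md_b["location"]) <= max_km
            and overlaps(md_a["timeline"], md_b["timeline"]))

a = {"location": (51.556, -0.279), "timeline": (0.0, 600.0)}
b = {"location": (51.557, -0.280), "timeline": (300.0, 900.0)}
print(same_group(a, b))   # True: nearby and overlapping in time
```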
- the direction of view may be a compass bearing captured from a compass application or function of the capture device.
- a direction of view throughout the stream may be recorded as time-lined metadata which is associated with the stream. In this way, the direction of view at any moment can be identified.
- the method may involve identifying a user who has requested that he be filmed so that any stream identifying him is grouped.
- Video streams may be grouped which have a common location or a common time or matching time.
- grouping identifies a cluster of streams so that they can be readily noticed.
- the streams can be grouped in dependence on the content and/or the meta-data associated with them.
- the method may comprise grouping two or more video streams in dependence on determined metadata.
- the video streams grouped in dependence on metadata may be addressed to viewing devices which have identified the associated metadata in a configuration step.
- the metadata may define a tag, description, location, or event.
- Watermarking may be utilised with grouping.
- a record may be kept of watermarks belonging to streams which have been allocated to particular groups. This allows a future stream (or clip of a stream) to be checked to see if it was previously allocated to a group.
- the watermark is not a determining factor in how a stream is allocated to a group. It is a method to check if a stream was allocated to a group and if so, which group.
- This aspect may be referred to as video water-marking.
- the filtering may accept or reject a video stream in dependence on the determined video watermark.
- a video stream may be routed for further processing.
- the further processing may be associated with a content rights holder.
- a determined video watermark may identify an association with a rights holder.
- a video stream may be identified as being associated with a captured device in dependence on the presence of the video watermark.
- the method may be implemented in a server which may be adapted to receive captured data streams from registered capture devices, such that a video stream provided by the server for viewing is an authorised video stream.
- the streaming server preferably selects a number of streams which it provides to the viewing device. Any video stream that is viewed and includes the video watermark may be recognised as having been generated by a particular server.
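A sketch of the watermark record described above: the registry only answers whether a clip was previously allocated to a group (and, if so, which one); it does not decide grouping. The extract_watermark function is a hypothetical stand-in for a real video-watermark detector, and the identifiers are invented for illustration.

```python
# Record mapping watermark identifiers (extracted from a stream or clip) to the
# group the stream was originally allocated to.
watermark_registry: dict[str, str] = {}

def register(watermark_id: str, group: str) -> None:
    watermark_registry[watermark_id] = group

def extract_watermark(clip: bytes) -> str | None:
    # Placeholder for a real video-watermark detector.
    return "wm-001" if b"wm-001" in clip else None

def previous_group(clip: bytes) -> str | None:
    """Check whether a clip was previously allocated to a group, and which one."""
    wm = extract_watermark(clip)
    return watermark_registry.get(wm) if wm else None

register("wm-001", "london_football")
print(previous_group(b"...frame data wm-001..."))   # 'london_football'
```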
- a system for providing streaming services comprising: a plurality of capture devices, each for capturing data and providing a captured data stream; and a server, for receiving the plurality of captured data streams and outputting at least one output stream, wherein the server is configured to dynamically prioritise the captured data streams in dependence on metadata associated with the captured data streams.
- the server may be configured to dynamically prioritise the streams in dependence on a current definition for prioritisation.
- the server may be configured to dynamically prioritise a data stream in dependence on the current metadata of that data stream.
- the captured data streams may be grouped, and then within each group the data streams are prioritised in dependence on the metadata associated with the captured data streams.
- the priority of a captured data stream may be used to determine the output of that captured data stream from the server.
- the metadata for a captured data stream may additionally include a prioritisation score.
- the prioritisation score for a captured data stream may be dynamic.
- the prioritisation score may be based on a reputation of a user associated with the capture device.
- the metadata for a captured data stream may additionally include feedback data from a device receiving the output stream from the server. Where the device has made a request for content from the server, the prioritisation score may be indicative of the relevance of the captured data stream to that request.
- the feedback may be used to adjust a prioritisation score of the captured data stream.
- the prioritisation score may be a viewer rating.
- the metadata for a captured data stream may additionally include feedforward data based on the capture device from which the captured data stream is provided.
- the feedforward data may be used to adjust a prioritisation score of the captured data stream.
- the prioritisation score may be a capture device rating.
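The prioritisation score might, for example, combine a user's reputation with feedback (viewer rating) and feedforward (capture-device rating) data, as sketched below; the weights and metadata keys are illustrative assumptions, not taken from the application.

```python
def prioritisation_score(metadata: dict) -> float:
    """Combine reputation, viewer feedback and capture-device rating into one
    dynamic score; the weights are illustrative only."""
    reputation = metadata.get("user_reputation", 0.5)    # 0..1, reputation of the capturing user
    viewer_rating = metadata.get("viewer_rating", 0.5)   # feedback from devices receiving the output
    device_rating = metadata.get("device_rating", 0.5)   # feedforward based on the capture device
    return 0.5 * reputation + 0.3 * viewer_rating + 0.2 * device_rating

def reprioritise(streams: dict[str, dict]) -> list[str]:
    """Return stream ids ordered from highest to lowest current priority."""
    return sorted(streams, key=lambda sid: prioritisation_score(streams[sid]), reverse=True)

streams = {
    "s1": {"user_reputation": 0.9, "viewer_rating": 0.4, "device_rating": 0.8},
    "s2": {"user_reputation": 0.3, "viewer_rating": 0.9, "device_rating": 0.5},
}
print(reprioritise(streams))   # ['s1', 's2'] with these illustrative weights
```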
- the captured data streams may be edited in dependence on their priority.
- the server may edit a captured data stream based on a set of rules.
- the set of rules may apply to a group.
- the set of rules may be used to allocate a data stream to a group in dependence on the metadata of the data stream.
- the server may additionally edit captured data streams in dependence on a received control signal.
- the metadata for a captured data stream may be received from the associated capture device.
- the metadata for a captured data stream may be determined at the server .
- the server may edit the captured data stream by applying an overlay to the captured data stream.
- the server may group the captured data streams in dependence on the metadata associated with the captured data streams.
- the server may edit the captured data stream by applying an overlay to all captured data streams allocated to a given group.
- the applied overlay may indicate a branding.
- the applied overlay may indicate a rights assignment of the content of the captured data stream.
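Group-based editing could be sketched as attaching the group's overlay and rights information to each stream allocated to the group; the actual video compositing is assumed to happen downstream, and the group name, file name and field names here are hypothetical.

```python
group_edit_rules = {
    # Every stream allocated to this group gets the same branding overlay
    # and rights-assignment marking.
    "london_football": {"overlay": "broadcaster-logo.png",
                        "rights": "Rights assigned to Example Broadcaster"},
}

def edit_for_group(stream_metadata: dict, group: str) -> dict:
    """Apply the group's edit rules to a stream's metadata."""
    edited = dict(stream_metadata)
    edited.update(group_edit_rules.get(group, {}))
    return edited

print(edit_for_group({"id": "s1"}, "london_football"))
```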
- Any method or process may be implemented in software. Any method or process may be provided as a computer software code which, when executed on a computer, performs the associated method or process.
- This aspect provides, in an example, a method comprising: receiving a plurality of captured video streams from a plurality of capture devices; extracting metadata associated with the received video streams; analysing the extracted metadata; and applying a priority to each captured data stream in dependence on the analysis of the metadata.
- the identification of the priorities may be dynamic. This dynamic nature of the prioritisation may be manifested in two ways. Firstly, data streams may be allocated to priorities on a dynamic basis in dependence on a current definition of a priority. If a definition of a priority changes, then the allocation of current data streams to that priority may change. The definition of a priority may change as rules defining a priority may change. Secondly, the allocation of a data stream to a priority may dynamically change. This may be because the metadata associated with the data stream indicates that the data stream should no longer be allocated to a particular priority, for example.
- the allocation of priority to streams may be used to provide an ordered list of streams, with those of the highest priority at the top of that ordered list and those of the lowest priority at the bottom of that ordered list.
- the ordered list may then be used for further processing of the streams, with the highest priority streams being processed first. If only a certain number of streams can be processed, then the highest priority streams are processed.
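- By way of illustration only, a minimal sketch (in Python) of such priority-ordered processing is given below; the field names "id" and "priority" and the handle function are hypothetical, and the scoring itself is implementation specific.

    # Purely illustrative: order captured streams by a numeric priority score and
    # process only as many as current capacity allows.
    def process_by_priority(streams, capacity):
        # streams: list of dicts such as {"id": "s1", "priority": 0.8}
        ordered = sorted(streams, key=lambda s: s["priority"], reverse=True)
        for stream in ordered[:capacity]:
            handle(stream)            # e.g. moderate, recommend or transcode
        return ordered[capacity:]     # lower-priority streams wait for capacity

    def handle(stream):
        print("processing", stream["id"])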
- the method may further comprise a step of moderating the video streams in dependence on the prioritising step.
- the method may further comprise a step of recommending the video streams in dependence on the prioritising step. Rules for moderating or recommending may be set by an event manager.
- the highest priority streams may be moderated or recommended first, and streams generally processed in accordance with the ordered list.
- the moderating may comprise an automated step. All streams may be applied to an auto moderator. Some streams may be allowed through because they comply with rules which have been set for the event. Other streams which do not comply with the rules may not be allowed through, or may be forwarded to a manual moderator.
- the prioritising is used to prioritise the streams (which may be a group of streams) for delivery to the moderator.
- the method may include identifying the source of the video stream.
- the method may include the reputation of the source as a criterion.
- the method may include the reputation of the source in a particular category - e.g. person A has a good reputation for filming outdoor content, but a poor reputation for filming indoor content. If the content person A is currently providing is indoor, then their reputation as measured by this tag is used.
- the method includes identifying the quality of the video stream, and including the quality of the source as a criterion.
- the method includes identifying the compatibility of the video stream with a search criterion, and including the compatibility as a criterion.
- a search criterion may be provided by a viewing device.
- the compatibility with the search criterion may be used to assign a priority to a stream.
- the priority may be used in one or all steps of: moderating the content; reviewing the content; recommending the content; or selecting the content.
- the second and third aspects relating to grouping and prioritisation may also provide a method comprising: receiving at least one captured video stream from a capture device; editing the video stream; and providing the edited video stream. This editing may also be considered as filtering streams.
- Editing the video stream may comprise overlaying content to the video stream.
- the overlaid content may comprise adding a link or content to the video stream.
- the editing may comprise recognising an image in the captured video stream, and editing the captured video stream to remove that image. This step may be performed at a video streaming server.
- the editing may comprise combining the captured video stream and pre-recorded content. This step may be performed at a video streaming server.
- the editing may comprise manipulation of the captured data stream.
- the editing may comprise combining the captured video stream and the interactive content, without altering the video data.
- the interactive content may be a donation link.
- the editing may comprise manipulation of the captured data stream to include interactive content.
- the interactive content may be a donation link.
- editing refers to manipulation of the video stream.
- the term does not refer to altering the actual content, i.e. the video data, within the video stream, but to in some way adjusting the video stream presented.
- where overlays are applied, they are not applied like special effects. Rather, the viewing device receives the unaltered video together with a set of time-coded instructions that describe the overlay content. These instructions may be added as metadata to the stream, but may also be provided as a separate resource.
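- By way of illustration only, a possible form of such time-coded overlay instructions, and of the selection performed by a viewing device, is sketched below in Python; the instruction fields shown are hypothetical and not prescribed by the described examples.

    # Hypothetical time-coded overlay instructions; the video data itself is unaltered.
    overlay_instructions = [
        {"start": 12.0, "end": 20.0, "type": "link", "label": "Donate",
         "url": "https://example.org/donate"},
        {"start": 45.0, "end": 60.0, "type": "branding", "image": "sponsor_logo.png"},
    ]

    def active_overlays(playback_time, instructions):
        # Overlays the viewing device should draw at the given playback time.
        return [i for i in instructions if i["start"] <= playback_time < i["end"]]

    print(active_overlays(15.0, overlay_instructions))   # -> the donation link overlay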
- 'edit' is used in a general sense.
- the method may further comprise identifying a plurality of video streams as being associated with each other; retrieving a token from one of the video streams, and editing the video stream by providing only the video stream having the token.
- the method may comprise receiving a request for the token from a capture device, and transmitting the token to the capture device.
- the method may comprise storing the identity of the capture device to which the token has been transmitted. A plurality of video streams are thus filtered in dependence on a token.
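- A minimal sketch (in Python) of token issue and token-based filtering is given below, purely for illustration; the use of secrets.token_hex and the field names device_id and token are assumptions rather than part of the described method.

    import secrets

    issued_tokens = {}   # capture device id -> token most recently issued to it

    def issue_token(device_id):
        # Respond to a capture device's request for a token and record the recipient.
        token = secrets.token_hex(16)
        issued_tokens[device_id] = token
        return token

    def filter_by_token(streams):
        # Of several associated streams, provide only those carrying a valid token.
        return [s for s in streams if issued_tokens.get(s["device_id"]) == s.get("token")]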
- the method may comprise establishing a session for the provision of the stream, wherein the session is associated with a predetermined time, wherein the step of editing comprises terminating the session when the predetermined time elapses.
- the method may further comprise inhibiting the establishment of a further session with that capture device or a user associated with that capture device for a further predetermined time period once that predetermined time period expires.
- the step of inhibiting may be dependent on the capture location being located in a geolocation associated with the first session.
- the method may comprise identifying the presence of a rectangle in a video stream, and processing the video stream to determine if the video stream comprises protected content, wherein the step of editing comprises inhibiting streams comprising protected content.
- the method may comprise identifying a deformed rectangle and processing the image to provide a rectified rectangle.
- Figure 1 illustrates an exemplary architecture in which described examples may be implemented
- Figure 2 illustrates an exemplary process for generating metadata
- Figure 3 illustrates an exemplary system in which content is augmented
- Figure 4 illustrates an exemplary watermarking process
- Figure 5 illustrates an exemplary system for watermarking
- Figures 6 to 10 each illustrate an exemplary aspect of a watermarking example
- Figure 11 illustrates an exemplary watermarking process
- Figure 12 illustrates an example of a watermarking system
- Figure 13 illustrates an exemplary generation of metadata
- Figures 14 to 16 illustrate an example relating to grouping
- Figure 17 illustrates an example grouping process
- Figure 18 illustrates an example of metadata
- Figure 19 illustrates an example relating to watermarking
- Figure 20 illustrates an example related to prioritisation
- Figure 23 illustrates an example relating to grouping
- Figure 24 illustrates an example relating to priority
- Figure 25 illustrates an exemplary process for handling requests
- Figures 26 and 27 illustrate the exemplary augmentation of data
- Figures 28 to 31 illustrate the exemplary grouping of streams; and Figure 32 illustrates the exemplary prioritisation of streams.
- FIG. 1 there is illustrated: a plurality of devices, labelled capture devices, denoted by reference numerals 12a, 12b, 12c; a plurality of devices, labelled viewing devices, denoted by reference numerals 16a, 16b; a device, labelled editing device, denoted by reference numeral 20a; a network denoted by reference numeral 4; and a server denoted by reference numeral 2.
- Each of the devices 12a, 12b, 12c is referred to as a capture device as in the described embodiments of the invention the devices capture content.
- the devices are not limited to capturing content, and may have other functionality and purposes.
- each capture device 12a, 12b, 12c may be a mobile device such as a mobile phone.
- Each of the capture devices 12a, 12b, 12c may capture an image utilising a preferably integrated image capture device (such as a video camera), and may thus generate a video stream on a respective communication line 14a, 14b, 14c.
- the respective communication lines 14a, 14b, 14c provide inputs to the network 4, which is preferably a public network such as the Internet.
- the communication lines 14a, 14b, 14c are illustrated as bi-directional, to show that the capture devices 12a, 12b, 12c may receive signals as well as generate signals.
- the server 2 is configured to receive inputs from the capture devices 12a, 12b, 12c as denoted by the bi-directional communication lines 6, connected between the server 2 and the network 4. In embodiments, the server 2 receives a plurality of video streams from the capture devices, as the signals on lines 14a, 14b, 14c are video streams.
- the server 2 may process the video streams received from the capture devices as will be discussed further hereinbelow.
- the server 2 may generate further video streams on bi-directional communication line 6 to the network 4, to the bi-directional communication lines 18a, 18b, associated with the devices 16a, 16b respectively.
- Each of the devices 16a, 16b is referred to as a viewing device as in the described embodiments of the invention the devices allow content to be viewed. However the devices are not limited to providing viewing of content, and may have other functionality and purposes. In examples each viewing device 16a, 16b may be a mobile device such as a mobile phone.
- the viewing devices 16a and 16b may be associated with a display (preferably an integrated display) for viewing the video streams provided on the respective communication lines 18a, 18b.
- a single device may be both a capture device and a viewing device.
- a mobile phone device may be enabled in order to operate as both a capture device and a viewing device.
- a device operating as a capture device may generate multiple video streams, such that a capture device such as capture device 12a may be connected to the network 4 via multiple video streams, with multiple video streams being provided on communication line 14a.
- a viewing device may be arranged in order to receive multiple video streams.
- a viewing device such as viewing device 16a may be arranged to receive multiple video streams on communication line 18a.
- a single device may be a capture device providing multiple video streams and may be a viewing device receiving multiple video streams.
- Each capture device and viewing device is connected to the network 4 with a bi-directional communication link, and thus one or all of the viewing devices 16a, 16b may provide a signal to the network 4 in order to provide a feedback or control signal to the server 2.
- the server 2 may provide control signals to the network 4 in order to provide control signals to one or more of the capture devices 12a, 12b, 12c.
- the capture devices 12a, 12b, 12c are preferably independent of each other, and are independent of the server 2.
- the viewing devices 16a, 16b are preferably independent of each other, and are independent of the server 2.
- the capture devices 12a, 12b, 12c are shown in Figure 1 as communicating with the server 2 via a single network 4.
- the capture devices 12a, 12b, 12c may be connected to the server 2 via multiple networks, and there may not be a common network path for the multiple capture devices to the server 2.
- the viewing devices 16a, 16b may be connected to the server 2 via multiple networks, and there may not be a single common network path from the server 2 to the viewing devices 16a, 16b.
- the editing device 20 is connected to the network 4 by bi-directional communication link 22.
- the editing device, which may additionally function as a capture device and/or a viewing device, may be used to provide additional control information to the server 2.
- Figure 1 further illustrates two metadata service blocks 11a and 11b, each respectively connected to the capture devices 12a and 12b via bi-directional communication links. The operation of these blocks will be described further hereinbelow.
- Capture devices such as devices 12a, 12b, 12c typically are used by a user (who may also be referred to as a contributor) to capture an event.
- a capture device may be equipped with a video camera, and the contributor may use the capture device to video content.
- the thus captured video content is transmitted from the capture device to the network as a capture stream.
- a mobile phone is an example of a capture device .
- the capture stream comprises the captured data.
- metadata is generated when a content stream is received.
- a service operating on the server 2 may generate metadata for a data content stream which is received from one of the capture devices.
- In a described example, the capture devices 12a, 12b, 12c are configured in order to generate the metadata for captured data, and then transmit this metadata to the content service running on the server 2 in addition to the data content itself.
- a capture device (such as capture device 12a) captures images.
- This for example may be a video device associated with the capture device 12a capturing video images of an event taking place in the location of the contributor associated with the capture device 12a.
- a data stream is generated for transmission on communication line 14a from capture device 12a, so that the data stream is delivered to the server 2 via the network 4.
- the data stream may be utilised to provide a live data stream of the event being videoed, and therefore the captured data stream is transmitted as a live representation of the event .
- the capture device 12a generates metadata for the captured data stream.
- the metadata generated will be implementation dependent.
- the metadata generated for the data content may include a tag identifying the location of the capture device, a tag identifying the identity of the capture device, a tag identifying the identity of the contributor etc.
- the metadata may include time information, such as a start time associated with the data capture, and/or a time of day.
- the metadata may also include the termination time of the captured data, or the duration time of the captured data.
- the capture device 12a may also be configured to allow the contributor to provide additional information to be tagged as metadata using a user interface of the capture device 12a. However metadata may be created and generated automatically.
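- By way of illustration only, the automatically generated metadata described above might be assembled as sketched below (Python); the field names are hypothetical and the exact tags included are implementation dependent.

    import time

    def build_capture_metadata(device_id, contributor_id, lat, lon, user_tags=None):
        # Metadata generated automatically at the capture device for a captured stream.
        return {
            "device_id": device_id,            # identity of the capture device
            "contributor_id": contributor_id,  # identity of the contributor
            "location": {"lat": lat, "lon": lon},
            "start_time": time.time(),         # start time of the data capture
            "end_time": None,                  # filled in when capture terminates
            "tags": list(user_tags or []),     # optional contributor-supplied tags
        }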
- the capture device such as capture device 12a may also transmit a request for data to a third party service, hosted on a separate device.
- capture device 12a may communicate with a data service 11a
- capture device 12b may communicate with a data service 11b.
- the capture devices can communicate with such third party services and obtain additional data, and then associated metadata for the content is created by the capture device.
- a third party service provided by data service such as 11a or 11b may provide a transcription service.
- Voice recognition functionality may be provided by the service, in order to determine either what the voice content of the capture data is, or to determine the voice content of the contributor.
- a voice signal of the contributor may be provided separately to the data service, and may be transcribed in order to provide additional data content for captured data.
- the addition (or augmentation) to metadata due to such additional content provided by such a service helps improve the discoverability of the data content when it is provided as a large pool of content at the server 2.
- Figure 3 illustrates the augmentation of data.
- the capture device in this example the capture device 12a, is connected to the server 2 as in Figure 1.
- the capture device 12a is illustrated as connected to both data services 11a and 11b of Figure 1.
- the data service 11a is a 3rd party speech-to-text conversion entity
- the data service 11b is a 3rd party video analysis entity.
- the capture device 12a provides an audio stream to the 3rd party speech-to-text entity 11a, and receives timed text back.
- the capture device 12a provides an audio-video (AV) stream to the 3rd party video analysis entity 11b, and receives timed metadata back.
- the capture device 12a may provide the original data stream (before augmentation) as a low quality (LQ) stream to the server 2, a high quality stream with extra metadata to the server, and a separate metadata only stream for late-arriving analysis to the server 2.
- the metadata derived from the stream is sent back to the capture device as shown in Figure 3.
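- A minimal sketch (in Python) of merging the timed text and timed metadata returned by such third party services into the stream metadata is given below; speech_to_text and video_analysis are placeholders for whatever interfaces the services actually expose.

    def augment_metadata(metadata, speech_to_text, video_analysis, audio_stream, av_stream):
        # speech_to_text and video_analysis stand in for the third party service clients.
        metadata["transcript"] = speech_to_text(audio_stream)  # e.g. [{"t": 3.2, "text": "..."}]
        metadata["analysis"] = video_analysis(av_stream)       # e.g. [{"t": 3.2, "tags": ["crowd"]}]
        return metadata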
- the system may use speech-to-text conversion to determine what in the captured video stream is being said.
- the system may send the captured video stream to a processing server to receive a transcribed version of the audio signal of the captured stream.
- This transcribed version of the captured video stream may be used to generate further metadata which reflects what has been said in the captured video stream.
- the generation of the further metadata can be accomplished at the capture device.
- a user of the capture device, as well as someone using a so-called director application on a second device to curate the captured video stream which is to be published as a viewing stream, may accept, edit or reject any transcribed text and/or the further generated metadata before it is accessible by the viewing stream.
- the reach (i.e. discoverability) of the captured video streams can be broadened without any significant resource overhead.
- This method may improve the accessibility of the captured video stream which, for some provider or publisher of the captured video stream, may be highly desirable or, in some cases, a legal requirement.
- Storing transcribed text and the further metadata generated based on the transcribed text also makes the video stream more searchable, either in real time for viewers to find streams live right now or, where streams are saved for on-demand playback, when searching the archive.
- the method may also be used by the user of the capture device, which might be a contributor, to broaden the reach of the captured video streams.
- the method may help to improve discoverability when a user of a capture device is contributing captured video streams to a large pool of material.
- the method may also be used to enable the consumption of the viewing stream without necessarily having the audio on.
- Certain kinds of viewers of video streams are watching while listening to other things (e.g. music) .
- By automatically adding the audio transcription to live content, and/or generating and adding further metadata reflecting that transcription, a higher level of convenience may be brought to the consumption of this type of viewing stream. It may be easier for viewers to find live or on-demand content if it can be searched by what is being said.
- speech recognition may be used to determine what any given contributor is saying.
- Each such contributor, or someone using a so-called 'director app' to curate a published stream, may accept, edit or reject any transcribed text before it is accessible to viewers.
- a large pool of material e.g. a citizen reporter use case
- Certain kinds of content are watched while listening to other things (e.g. music) .
- Adding audio transcription automatically to live content brings this level of convenience to the consumption of this content type. It is easier to find live or on-demand content if it can be searched by what is being said.
- Cloud sourced translation of content may be used as a method of creating metadata.
- the method may be used to generate a large quantity of live content (e.g. news, current affairs, business) and to offer the captured live video streams to viewers in a number of languages. It may be used to offer them in combination with further metadata that provides the information about the captured video stream in alternative languages.
- the method may also be applied to augment the captured video stream and the associated metadata with subtitles and/or audio description and to provide them to viewers who are hard of hearing or who have sight problems.
- a broadcaster generates a large quantity of live content (e.g. news, current affairs, business). They wish to offer this content to viewers in a number of languages but lack the resources to hire dedicated staff to provide real-time translations. While the live content itself may or may not be distributed to end-users via our system, the content could be provided to a distributed network of foreign language speakers who are able to give live translations. These contributions can then be used to augment the broadcaster's original language offering either with spoken word translations or subtitles. This adds the ability to cloud-source different languages.
- a metadata service which may be provided by blocks 11a or lib is a cloud sourced translation of content. If, for example, a broadcaster generates a large quantity of live content (such as news, current affairs, business) , they may wish to offer this content to viewers in a number of languages. However they may lack the resources to hire dedicated staff to provide real time translations.
- live content itself may be provided into the content service of the server 2
- content may also be distributed to a network of foreign language speakers who are able to give live translations. These contributions can then be used to augment the broadcasters' original language offering either with spoken word translations or subtitles.
- the capture device receives a response to its request to third party services, and includes any response it receives in the metadata for the content being streamed.
- the capture device transmits a data stream, and in step 94 the capture device also transmits a metadata stream.
- the metadata stream may be transmitted in conjunction with the data stream, for example using multiplexing.
- the metadata may be transmitted on a stream separate to the data stream.
- the metadata may also be transmitted on multiple separate streams in addition to the data stream.
- the metadata and the captured data may be synchronised, particularly where the metadata is automatically generated.
- the server 2 is able to recover any synchronisation information, and synchronise the metadata with the associated data content.
- the metadata and captured data may be associated with a common timeline, and the server 2 is able to determine this timeline and synchronise the captured data and the metadata.
- the timeline may be provided by the captured data stream or from one or more of the metadata streams .
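- By way of illustration only, one way the server might map metadata timestamps onto the stream's timeline is sketched below (Python); the assumption that each metadata item carries an absolute timestamp is made purely for the example.

    def align_to_timeline(metadata_items, stream_start_time):
        # Convert absolute metadata timestamps into offsets on the stream's timeline
        # so that the server can synchronise the metadata with the captured data.
        for item in metadata_items:
            item["offset"] = item["timestamp"] - stream_start_time
        return sorted(metadata_items, key=lambda i: i["offset"])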
- the metadata associated with data content is utilised by the content service associated with the server 2, in order to generate viewing streams for viewing devices 16a, 16b.
- the captured data streams from the capture devices are thus routed to the viewing devices in dependence on the metadata associated with the capture data streams.
- the viewing device may be a moderator of content or an editor of content, which uses the metadata in order to moderate or edit the content provided on a viewing stream.
- a capture device may also provide additional control level information for the content data stream by, for instance, using "watermarking".
- the watermarking is a type of metadata, and may be used, for example, to provide control level information.
- a step 100 the video is captured.
- a step 102 a video watermark stored in the capture device is applied to the video.
- the video is transmitted in a data stream with the watermark applied.
- the watermark provides additional metadata for the content.
- a capture device such as capture device 12a may additionally be provided with a video camera 70, a content capture monitor module 92, a video capture module 76, a mixing module 80, a memory 82 including a watermark store 84, and a wireless transmission module 90.
- the video camera 70 is the video camera of the capture device which films an image, and it generates a signal on line 74 to a video capture module 76.
- the video capture module 76 generates the video images on line 78 to the mixing module 80.
- the mixing module 80 receives watermark information from the watermark store section 84 of the memory 82.
- the mixing module 80 is therefore able to generate a copy of the video images on signal lines 88 which include the watermark, which may be referred to as watermarked video.
- the watermarked video is provided to a wireless transmission module 90 for transmission as the data stream.
- the watermark encodes information that identifies the device on which the content was captured, the owner of the device, and a moment-by-moment reflection of the time at which the content was captured.
- All streams are watermarked. All streams have metadata that describes the content. Rights are assigned to streams that have particular metadata characteristics. These rights are checked by detecting the watermark and determining which rights were assigned.
- Rights to content originating from certain public events such as sports matches, tournaments, music concerts and festivals, amongst others, are typically held by broadcasters, publishers and other similar organisations .
- Watermarking is a process by which content is altered in such a way that the alteration, while imperceptible to the intended audience, is detectable by specialised software.
- Such watermarks can be based on a manipulation of the audio, video or both. In our case, adding a watermark to the video seems the most sensible since this is the primary sense of the content and, therefore, the part most likely to need protecting.
- our intention here is, instead, to create a system by which the recording device (the smartphone or tablet) adds the watermark itself.
- the watermarking process must be lightweight enough not to create a significant processing overhead on the consumer device while being computationally complex enough to be difficult to fake.
- consumer devices all provide fast GPU-based routines for manipulating video, making the technique practical.
- Having watermarked each stream, the server maintains a list of those streams which have been categorised as belonging to a rights holder's event and, further, which have been approved or not.
- Streams not belonging to any event are allowed through en masse, as are those which are assigned and approved. Those which are not approved are not published.
- when a piece of content is discovered on the Internet, in whatever form, it can be analysed to determine if it may belong to a rights holder's event. The techniques for doing so are covered elsewhere.
- Identified streams are then checked for a watermark.
- Streams without watermarks are either submitted for manual checks to ensure they are of the event before a take-down notice is served, or used to issue such notices immediately if there is a high level of confidence in the event inclusion assessment.
- Streams with watermarks can be checked to see if that watermark was an approved one for that event. Unapproved ones follow the process described immediately above; approved ones are fine to be left alone.
- the system offers an automated process for watermarking, grouping streams into events and validating content as either having been expressly permitted or not.
- Watermarks are preferably added to audio-visual (AV) content at the point of capture by the capture device itself, e.g. by an iPhone. See the architecture sketch at the end.
- Watermarks are small amounts of data embedded into the audio, video or both aspects of captured content. They are added to the content in such a way that they are invisible/inaudible to the audience, difficult to remove but readily detectable technically. Technologies for applying watermarks already exist and typically operate by embedding at a rate of a number of bits per second. So a watermark that is 512 bits long will need twice as much content for one embedding as a 256 bit watermark.
- the watermark data in our case comprises two elements: a watermark identifier (WID) and a timecode (TC). Each is included with every embedding, with only the TC portion changing.
- Each stream has a unique watermark identifier (WID) which is calculated as follows:
- a hash of the values is used, rather than the values themselves, for two reasons:
- the watermarking capability has a number of built in safeguards.
- the WID does not contain any information that would allow the specific user to be identified, nor can the information encoded into it be reverse engineered to reveal such data.
- the system always makes the initial assessment of whether a stream may fall under a Rights Holder's control. If the stream's metadata (e.g. time, location, content of the video) suggests that the stream may be one to protect, only at this point may a human moderator acting on behalf of the Rights Holder get involved.
- moderation may, in some cases, be accomplished by entirely automated processes.
- the watermarking technology embeds the WID, which does not change throughout the entire span of a continuous stream of content, together with a timecode.
- the TC value begins at zero at the start of the stream and measures the offset into the stream at which each successive watermark is embedded. This is illustrated in Figure 8.
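- A minimal sketch (in Python) of generating a WID and its periodic embeddings is given below, purely for illustration; the particular values hashed, the use of SHA-256 and the embedding interval are assumptions and are not prescribed above.

    import hashlib

    def make_wid(device_id, owner_id, capture_start, salt):
        # A one-way hash, so the identifier cannot be reversed to reveal user details.
        material = f"{device_id}|{owner_id}|{capture_start}|{salt}".encode()
        return hashlib.sha256(material).hexdigest()

    def watermark_payloads(wid, duration_seconds, embed_interval=10):
        # The WID stays constant for the whole stream; only the timecode (TC) changes,
        # measuring the offset into the stream at which each embedding occurs.
        return [(wid, tc) for tc in range(0, duration_seconds, embed_interval)]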
- if a Rights Holder determines that a stream comprises or contains content that they own, then a record is kept to indicate this.
- This record includes at least the event to which the Rights Holder claims the content belongs, the stream in question, its watermark identifier and, optionally, any start and end times within the stream demarcating the section of content to which the Rights Holder claims ownership. This is illustrated in Figure 10.
- the rights holder can demonstrate that the clip was claimed by them as it passed through the described content delivery system, may reasonably exert ownership rights and begin the takedown process. It may also be used to identify the user and source device if this is useful in determining the origin of the unauthorised distribution - it may not always be. This is illustrated in Figure 11.
- Figure 12 shows the capture device architecture needed to apply watermarks to content.
- watermarking is a type of metadata, which may be used for example to provide control level information.
- With reference to Figure 13 there is illustrated an exemplary implementation of part of the arrangement of Figure 1.
- the server 2 of Figure 1 and one of the capture devices 12a of Figure 1 are shown in Figure 4. Additionally shown is a device 30.
- the capture device 12a is configured to generate the metadata for a data stream which it captures.
- the capture device 12a is able to communicate with the device 30, if necessary, to obtain additional metadata associated with the captured data stream.
- the capture device 12a is then able to generate the data associated with, for example, a captured data stream on signal line 34a, and its associated metadata on signal line 34b.
- the data and the associated metadata on lines 34a and 34b are received at the server 2 as denoted in Figure 2.
- the server 2 does not have to generate the metadata for the data, and this task is carried out by the capture device which then simply transmits the metadata to the server.
- the metadata and the data are transmitted in two separate streams.
- the metadata can be combined with the data for transmission in a single stream, or there could be multiple data streams and/or multiple metadata streams.
- Synchronisation can be achieved in some cases by looking at two or more streams, identifying a common audio or visual feature and aligning on those.
- the placement of these AV features could be part of the metadata (e.g. Player 10 joins the game at 01:13:42 on Stream A; 00:15:22 on Stream B; 00:40:11 on Stream C).
- two or more media assets with different timelines may have those timelines aligned by comparing their metadata timelines.
- assets A and B have a common metadata timeline item
- B and C have a different common metadata timeline item
- all three may be aligned even if A and C have no such item in common.
- a remote viewer receiving the streams from the two phones could very easily align them so that they were synchronised, simply by observing the times shown on the clock and by watching for the moment the seconds value ticked over.
- the goal of synchronisation is not to align a set of capture devices with a remote clock - that's one method by which things can be synchronised, but it isn't the objective.
- the goal is to play out each captured stream so that as each frame plays out, it is shown alongside frames from other streams captured at the same moment.
- the digital clock was the marker, but in a real-world situation, a marker might consist of a TV image caught in the background, the position of a moving object or a camera flash; or a sound such as the start of a song, a round of applause, the text a speaker is reading, a starting pistol; or an electronic beacon like a Bluetooth ping.
- Each capture device, being situated in the same time domain as the marker, can safely assume that it heard/saw the event at the same time as the others. Having identified and described the marker, the devices can compare the state of their clocks at this moment and calculate the matrix of their relative offsets.
- This matrix need only be evaluated once for any unique selection of capture devices. It would only need to be done again if a new device joined the event.
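- By way of illustration only, the calculation of that matrix of relative offsets from a shared marker might be sketched as follows (Python); the example clock readings are hypothetical.

    def relative_offsets(marker_times):
        # marker_times: {device id: local clock reading at the shared marker}.
        # Returns the matrix of relative clock offsets between every pair of devices.
        return {a: {b: marker_times[b] - marker_times[a] for b in marker_times}
                for a in marker_times}

    # Example: three devices all saw the same camera flash.
    offsets = relative_offsets({"A": 100.00, "B": 97.25, "C": 101.10})
    # offsets["A"]["B"] == -2.75, i.e. device B's clock reads 2.75 s behind device A's.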
- Sound markers may be picked out using frequency spectrum analysis with the profile of interesting events used to compare markers between devices.
- Bluetooth pings offer a unique possibility in that they can be produced by a device at will rather than having to wait for a marker to occur naturally. Moreover, particularly with Bluetooth Low Energy, timings for data in transit are well known.
- Sound is the least reliable marker source when used in large open spaces as the speed of sound becomes a significant overhead.
- Text recognition within video data could be used to look for digital clocks appearing in frame.
- a set of devices can be set to form a synchronisation group if they share access to a common marker, but some members of the set may also share markers with other groups and in this way synchronisation between groups/cells may be achieved.
- Figure 1 allows consumers to broadcast live video captured on their mobile handsets (phones, tablets and so on). Often these broadcast sessions are short, informal but topical, personally meaningful or entertaining and, as such, they are shared with friends and followers - usually automatically - on social media. However, while this makes it easy for a consumer's connections to watch along in real time, it does little to allow these moments to propagate any further.
- a typical set of use cases for live streaming includes public events where a large number of people are gathered together - sports, music concerts, school/university events, parades etc...
- the first issue to address in seeking to provide such a feature set is being able to identify clusters of user generated content as comprising an event, and thus place data streams into groups.
- the example creates a system that can pick out events from a collection of data streams using only the data available from these streams and the devices capturing them.
- An event can be said to have both geographic and temporal boundaries.
- while the system may identify a dozen streams coming from a given location, if they are not tightly bound to a specific time period then this collection of content cannot indicate the occurrence of an event.
- a dozen streams, for example, from a certain locale within a short time period might imply the existence of an event.
- the important point here is that the identification of a meaningful cluster of contributions is based not just on geographical bunching but also by time.
- three-dimensional DBSCAN may be used (where the three dimensions used are latitude, longitude and time). This allows for non-uniform, asymmetric clusters to be identified and is therefore better suited to identifying likely groups of people. Using time as the third dimension is acceptable as it is a continuous, rather than a discrete, quantity.
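- A minimal clustering sketch (in Python, using scikit-learn's DBSCAN) is given below purely for illustration; the conversion of degrees to approximate metres and the weighting of the time axis are assumptions that would be tuned per deployment.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def find_event_clusters(streams, eps=300.0, min_samples=5):
        # streams: list of (latitude, longitude, start_time_in_seconds).
        # Degrees are converted to rough metres and seconds are down-weighted so that
        # one unit of "distance" means much the same on every axis.
        points = np.array([[lat * 111_000.0, lon * 111_000.0, t / 2.0]
                           for lat, lon, t in streams])
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        return labels   # -1 marks noise; any other label identifies a candidate event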
- a cluster here might resemble a horseshoe.
- a festival where attendees are concentrated in pockets around different stages or performance areas presents a different pattern of user densities.
- a linear race that does not lap would comprise a locus of connected areas of similar density. Such a scan allows these individual events to be picked out from the surrounding noise.
- the system can identify the likely contributors, and then create a macro-event to collate the individual streams into a group so that a searching viewer can pick up on the full set of clips.
- Viewers or rights holders searching for content can subscribe to an identified event and watch live streams as they become identified as belonging to it, by accessing the appropriate group. In this way, the event is not constrained by the end of any one particular contributor' s stream, and the viewer can choose to watch any such stream or not. Streams may also overlap.
- Streams belonging to the event being identified sooner means that the customer gets to start selling their event to viewers more quickly. Early streams that might have been missed or excluded from the event are now more likely to get correctly identified and tagged, meaning that a customer's 'event pages' carry a more complete picture of the event.
- Content that should be featured in an event owner's content pages will not be overlooked. More viewers for content are obtained when it becomes included within an event owner's pages. More chances of kudos at being featured by the event owner. Fewer missed moments from an event.
- a further technique to determine whether a particular contributor' s stream belongs to an event or not may involve the use of a GPS device, gyroscope device, and an accelerometer device of contributing users' devices.
- While capturing video and location data, the consumer device should ideally also record the direction and scope of the capture device's field of vision, preferably no less frequently than once per second. This additional information can be added to the metadata of the data stream.
- the Festival environment 112 comprises an upper stage Stage A denoted 110a, a left stage Stage B denoted 110b, a lower stage Stage C denoted 110c, and a right stage Stage D denoted 110d.
- Each capture device 114a to 114i has a corresponding point-of- view or field of vision denoted by reference numeral 116a to 116i respectively.
- Being able to further describe each contributor in this way allows the contributed video, or a portion thereof, to be attributed to the correct sub-event within the event.
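- By way of illustration only, attributing a contributor to the stage within their field of vision might be sketched as follows (Python); the flat-earth bearing approximation is an assumption made for brevity.

    import math

    def stages_in_view(device_lat, device_lon, bearing_deg, fov_deg, stages):
        # stages: {name: (lat, lon)}. Returns the stages falling within the capture
        # device's field of vision, using a simple flat-earth approximation.
        visible = []
        for name, (lat, lon) in stages.items():
            north, east = lat - device_lat, lon - device_lon
            target_bearing = math.degrees(math.atan2(east, north)) % 360
            difference = abs((target_bearing - bearing_deg + 180) % 360 - 180)
            if difference <= fov_deg / 2:
                visible.append(name)
        return visible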
- This audio-visual analysis may be valuable in two ways. Firstly, to provide a running set of keywords or tags that describe the content and which in turn aids both discoverability and the system's ability to include or exclude a stream, or portion thereof, in an event. Secondly, if a rights holder has a declared interest in content arising from an event, knowing what is actually being captured allows for a more accurate assessment of whether a particular stream infringes.
- some users may turn away from the action during recording, or may be tracking a particular car. Knowing what happens in the video, the system can pick out just those contributors who are looking at the race or even at particular participants in the race.
- a rights holder is not encumbered with the task of reviewing every live stream that pops up in the location at a given time and checking it for infringement. While the techniques described so far would slim this list down, there is still the potential for a large number of streams to check.
- a way for them to be able to describe the parameters of this event beforehand is provided. This includes the ability to define a region within which the event will take place as well as time limits for its start and end. However it also includes the ability to provide a list of tags that can be used to match against content. By defining collections of tags that content must, may, ought not and must not contain in order to be included in the rights holder's event, it is possible for much of the review process to be automated.
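- A minimal sketch (in Python) of evaluating such tag collections is given below, purely for illustration; 'may' tags are treated as neutral, and the decision labels 'include', 'exclude' and 'manual' are hypothetical.

    def review_decision(stream_tags, rules):
        # rules, e.g. {"must": {"race"}, "must_not": {"backstage"}, "ought_not": {"crowd"}};
        # tags listed under "may" need no checking and are therefore omitted here.
        tags = set(stream_tags)
        if not rules.get("must", set()) <= tags:
            return "exclude"   # a required tag is missing
        if tags & rules.get("must_not", set()):
            return "exclude"   # a forbidden tag is present
        if tags & rules.get("ought_not", set()):
            return "manual"    # borderline content is passed to a human reviewer
        return "include"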
- a data stream is received at the server 2.
- the metadata for the data stream is also received at the server 2.
- the metadata may be received with the data stream from the capture device, or may be generated in the server.
- a step 124 geographical data of the capture device is retrieved from the metadata, and in a step 126 temporal data of the data stream is retrieved from the metadata.
- An event may be identified when a plurality of data streams are determined to be located within a particular proximity of each other, and to have been generated within the same timeframe.
- the data streams are grouped in a step 140, in accordance with their metadata.
- step 130 the event is created and the appropriate data streams are allocated to that event, effectively being grouped according to the event.
- a step 132 any content which has previously been stored and which is also associated with the event is retrieved.
- step 134 point-of-view information or field view information is retrieved for the data streams from the associated metadata.
- step 136 metadata related to audio-video analysis of the data streams is retrieved.
- a step 138 the streams within the event are then grouped according to this additional metadata, to present groups within the streams.
- this exemplary metadata 140 includes: capture device location (GPS data) 140a; captured stream start time 140b; captured stream end time 140c; capture device gyroscope data 140d; capture device accelerometer data 140e; captured stream audio analysis 140f; captured stream video analysis 140g; and captured stream watermark 140h. Additional information may also be included within the metadata according to the implementation.
- the metadata associated with the content data may include a video watermark.
- the interface 100 receives the video stream which has been transmitted by a capture device.
- the interface 100 provides the video stream (including the watermark) to a watermark identification module 102, and to a buffer/filter 106.
- the watermark identification module 102 retrieves the watermark from the video stream including the watermark, and provides this retrieved watermark as an input to the comparator 104.
- a memory 108, which includes a store of the watermark equivalent to the watermark store 84, also provides the stored watermark.
- the comparator 104 thus compares the received watermark from the video stream with an expected watermark from the memory 108. If a match is indicated, then the comparator 104 provides an appropriate signal to the buffer/filter 106, with the buffer/filter 106 outputting the received video stream responsive to confirmation from the comparator that a match has been received, and thus the received signal is carrying the appropriate watermark.
- the buffer/filter 106 may provide three outputs, in an example, which indicate that a "rights holder" associated with a video stream accepts the content as being part of their event and approves it, that they accept the content as part of their event but reject it, or that they consider that it does not belong to their event and forward it for further processing.
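- By way of illustration only, the three-way routing performed by the buffer/filter might be sketched as follows (Python); the labels returned are hypothetical.

    def route_stream(detected_wid, event_wids, approved_wids):
        # Mirrors the three outputs of the buffer/filter described above.
        if detected_wid not in event_wids:
            return "forward"   # not recognised as part of the event; needs further processing
        if detected_wid in approved_wids:
            return "accept"    # part of the rights holder's event and approved
        return "reject"        # part of the event but not approved for publication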
- incoming video streams at the server 2 may be grouped at a video streaming server.
- the grouping may be provided based on watermark recognition.
- the grouping may also generally be provided based on metadata. The described examples are not contingent on metadata having been generated by a capture device and received at the server 2 from the capture device.
- the examples utilise metadata associated with a content data stream, but the metadata may be generated within the server 2 itself, or may be provided by a capture device which also provides the captured content stream.
- watermarks can be added to content at the time of capture, and are a form of metadata of the embedded variety. If each piece of content contains a unique collection of metadata embedded in this way, it becomes possible for a further system to use this metadata to uniquely identify each piece of content.
- a server may be configured to store collections of these segment descriptors.
- the server operator may create collections of segment descriptors for any purpose, but one such example use is for each collection to refer to segments of content that belong to a particular user.
- a further use might be to collect together segments that contain related content.
- a further use might be to collect together segments to which should be applied particular consumption limitations (rights management).
- a further system may receive content from the internet, or any other source, and check if it contains embedded metadata that uniquely identifies it. If so, this system can determine if the source content ought to have had consumption limitations applied to it and, further, may assess whether those limitations have been breached.
- this further system may be used to police breaches in consumption limitations.
- this is a technique that allows embedded metadata (a watermark) to be used to determine if a piece of content was categorised as belonging to one or more groups .
- Grouping can be used to enhance the delivery of content, with discovery being content driven. This allows content to go in search of viewers rather than assuming viewers will find it. This is especially important for live content that may not last long, or may be from a new contributor. In these cases, it is essential that content creators begin to see that their work is getting the best possible exposure so that they are encouraged to keep creating.
- the system may maintain a list of characteristics that it thinks each viewer likes. It might do this by using the tags, descriptions, locations, or events a stream belongs to or specific metadata from the moments at which the viewer taps to say they like a stream. By whatever means, the system is able to track each user's tastes and preferences over time.
- Topics/tags/themes/creators that are particularly interesting to a viewer can be followed. This gives the system an explicit signal to curate future content for the viewer that matches the topic/tag/theme or which comes from a followed creator.
- this approach to discovery allows the customer to reach viewers and bring them to the content in a more meaningful, more personally relevant way.
- a viewer is not simply brought to the event's landing page within the content service but is, instead, brought to a stream that suits their tastes exactly.
- a group of users can create an event into which they all contribute one or more live streams of content. Each member of the group may watch one or more of these streams. Guests, who can watch but not contribute, may also be invited to the event. It can be also initiated from one party (an event holder) requesting users to participate .
- Use cases for this grouping include: corporate communication; panel events; private meetings where participants are not in one location; request for contributors to an event.
- Grouping provides: secure communications; guaranteed quality of service, perhaps an optional SLA that makes the service reliable enough for mission-critical communication (e.g. the Delta use-case).
- For the end user (contributor), grouping provides reliability; it is easier to use than person-to-person calling, and content is added to the event.
- For the end user (viewer), grouping provides a feel of being present at the event, and access to each contributor's stream.
- Grouping may be adaptive. Adaptive grouping determines who should be invited to share that event as a contributor. In addition to the standard ways, geolocation of devices that indicates that a user is present can initiate a request to share the event. Different ways to define the grouping may be provided, e.g. 1) the normal clear definition of members; or 2) an open list whereby users matching certain criteria (presence, profile, etc.) define an adaptive list of potential contributors.
- broadcast sessions are short, informal but topical, personally meaningful or entertaining and, as such, they are shared with friends and followers - usually automatically - on social media.
- a typical set of use cases for live streaming includes public events where a large number of people are gathered together - sports, music concerts, school/university events, parades etc...
- the problems are: how can a collection of user generated live streams be analysed such that it is possible to identify from amongst them one or more events; and, how can a rights holder describe an event they control to the system such that it is subsequently able to assess identified events as being potentially infringing so that this content can be managed by the rights holder?
- the first issue to address in seeking to provide such a feature set is the challenge of being able to identify clusters of user generated content as comprising an event .
- An event can be said to have both geographic and temporal boundaries. In this way, while the system may identify a dozen streams coming from a given location, if they are not tightly bound to a specific time period then this collection of content cannot indicate the occurrence of an event.
- a cluster here might resemble a horseshoe; alternatively, a festival where attendees are concentrated in pockets around different stages or performance areas presents a different pattern of user densities; similarly, a linear race that does not lap would comprise a locus of connected areas of similar density. Such a scan allows these individual events to be picked out from the surrounding noise.
- the system creates a macro-event to collate the individual streams so that a searching viewer can pick up on the full set of clips.
- this invention is not limited to streams that are live and live only. If the system supports the recording of live streams, then any stream identified as being part of an event can be watched whether currently live or not. Moreover, it would then be possible to recap an entire event using a montage of identified streams. This in itself is a unique possibility.
- Viewers or rights holders searching for content can subscribe to an identified event and watch live streams as they become identified as belonging to it. In this way, the event is not constrained by the end of any one particular contributor's stream, and the viewer can choose to watch any such stream or not. Streams may also overlap of course.
- a further technique to determine whether a particular contributor' s stream belongs to an event or not involves the use of the GPS, gyroscope and accelerometer of contributing users' devices.
- While capturing video and location data, the consumer device should also record the direction and scope of the capture device's field of vision, preferably no less frequently than once per second.
- the central mass of festival goers would, using only the DBSCAN technique noted above, be considered a single cluster as shown in Figure 15 which picks out a small number of nearby contributors from the audience. Collating video clips and streams from the entire group though would create a confusing jumble of video taken of all four stages.
- This audio-visual analysis may be valuable in two ways. Firstly, to provide a running set of keywords or tags that describe the content and which in turn aids both discoverability and the system's ability to include or exclude a stream, or portion thereof, in an event - more on this later. Secondly, if a rights holder has a declared interest in content arising from an event, knowing what is actually being captured allows for a more accurate assessment of whether a particular stream infringes.
- some users may turn away from the action during recording, or may be tracking a particular car. Knowing what appears in the video, the system can pick out just those contributors who are looking at the race or even at particular participants in the race.
- streams with a lower quality should be shown below those of a higher grade.
- Data streams may be prioritised in general whether or not they are also grouped. Where grouping is used, then prioritising methods may be provided within a group.
- step 142 data streams are received by the server.
- step 144 the reputation scores associated with each capture device providing the data streams are retrieved, and then in a step 146 the data streams are ordered accordingly, based on the reputation scores given.
- a step 148 the matching of the data streams to the groups is determined based on an assessment of metadata, and the data streams then appropriately reordered in step 150.
- a step 152 the quality of the data streams is assessed, and then in step 154 the data streams accordingly reordered.
- Prioritisation may be based on the metadata associated with the content.
- This metadata may include information about the originator, about the intended or actual audience of the content or the subject matter of the content itself.
- the priority assigned to a given piece of content may not be a single value but may be a timeline of values that indicate the priority of the content at any moment throughout its duration.
- the prioritisation technique depends on the prioritisation purpose. For example: a prioritisation scheme configured to prioritise content for encoding might order the content items by popularity in order to encode the most popular items soonest. In another example, a prioritisation scheme may be configured to order content by likelihood that it contains banned material so that questionable items may be moderated more quickly.
- Prioritisation operates on a queue of content items being processed/consumed/presented/etc... If that priority changes then the position in the queue changes. So content that had metadata that meant it was important to process soon (i.e. high priority) might change (e.g. the content creator may move the camera to something less interesting), causing its priority to fall.
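- A minimal sketch (in Python) of a queue whose items are repositioned when their priority changes is given below, purely for illustration; stale heap entries are simply skipped on removal.

    import heapq

    class ContentQueue:
        # Content items are repositioned whenever their priority changes;
        # superseded heap entries are skipped when items are removed.
        def __init__(self):
            self._heap, self._current = [], {}

        def set_priority(self, item_id, priority):
            self._current[item_id] = priority
            heapq.heappush(self._heap, (-priority, item_id))   # highest priority pops first

        def pop(self):
            while self._heap:
                neg_priority, item_id = heapq.heappop(self._heap)
                if self._current.get(item_id) == -neg_priority:
                    del self._current[item_id]
                    return item_id
            return None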
- streams may be ordered according to the standing or reputation of the creator. Streams contributed by users with a good reputation may be shown towards the end of the list as they likely require least attention from the moderators. Content from new or less reliable contributors deserves closer attention and should be ranked higher.
- streams with a lower quality should be shown below those of a higher grade.
- the metadata is input to a control block 40, and the data is input to a group filter block 44.
- the group filter block 44 receives a control signal on line 42 from the control block 40.
- the control block 40 generates the control signal on line 42 to route the data on line 34 to one of a plurality of groups, denoted in this example by a first group G1, a second group G2, and an nth group Gn.
- the control block 40 analyses the metadata for the data, and groups it according to rules.
- the data received on line 34b is output on line 461, 462 or 46n.
- the data may be output on one or more of the lines, and allocated to one or more group, in dependence on the control.
- Each of the data streams is allocated to one of the three groups based on the control signal on line 42.
- a further control signal 48, having component parts 481, 482, 483, each associated with one of the groups, is generated.
- Each of the control signals 481, 482, 483 is used in a priority block for each group, denoted by 521, 522, 523 respectively, to apply a priority to each signal within the group.
- signals can be output for each group in dependence on the priority, which is also derived from the metadata for the data.
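- Purely as an indicative sketch of the behaviour of the control block 40, the group filter 44 and the per-group priority blocks, with an assumed rule set and assumed metadata fields:

    # Hypothetical grouping rules: each rule maps a metadata predicate to a group.
    GROUP_RULES = {
        "G1": lambda md: md.get("event") == "city-marathon",
        "G2": lambda md: md.get("location") == "stadium-north",
        "Gn": lambda md: "music" in md.get("tags", []),
    }

    def route_to_groups(metadata):
        # Control-block style routing: a stream may be allocated to one or more groups.
        return [group for group, rule in GROUP_RULES.items() if rule(metadata)]

    def prioritise_within_group(members):
        # Per-group priority block: order group members by a metadata-derived score.
        return sorted(members, key=lambda m: m["metadata"].get("viewer_rating", 0.0), reverse=True)

    md = {"event": "city-marathon", "location": "stadium-north", "tags": ["running"], "viewer_rating": 0.7}
    print(route_to_groups(md))   # ['G1', 'G2']
    group = [{"id": "s1", "metadata": {"viewer_rating": 0.4}}, {"id": "s2", "metadata": {"viewer_rating": 0.9}}]
    print([m["id"] for m in prioritise_within_group(group)])   # ['s2', 's1']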
- a new content is created and received at the server 2.
- the received content comprises data and associated metadata.
- the metadata is transmitted to the server with the data itself, although in alternatives the metadata for the data may be generated within the server 2.
- step 62 the metadata for the content is submitted to a grouping service.
- step 64 the data itself is submitted to a content analysis service.
- step 66 the content analysis service analyses the content and derives further metadata about it. This step may be repeated several times as required.
- step 68 this additional metadata for the content is stored.
- step 70 the metadata for the content is aggregated. That is, the received metadata is aggregated together with the metadata generated by the content analysis service.
- step 72 whenever aggregate data changes, these changes are submitted to a content manager.
- the metadata is used in order to determine the matching up of content to a request, and therefore storing of any changes to aggregate data is important.
- a content matcher compares the new or changed metadata against the list of content requests which are pending.
- step 76 it is determined if any matches are identified. If no matches are identified, then in step 78 new content is awaited.
- step 80 the matches are added to a list of potential matches and recorded against each identified content request. The process then moves to step 78 to await further new content.
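- The content matcher of steps 74 to 80 might, under assumed data shapes (tag sets for both the aggregated metadata and the pending requests), be sketched as:

    def match_content_to_requests(aggregated_tags, pending_requests):
        # Compare new or changed metadata against pending content requests (step 74).
        # pending_requests maps request_id -> set of tags the requester asked for.
        # Returns the request ids for which this content is a potential match (step 80).
        return [request_id for request_id, wanted in pending_requests.items() if wanted & aggregated_tags]

    pending = {
        "req-1": {"sunset", "harbour"},
        "req-2": {"football", "stadium"},
    }
    print(match_content_to_requests({"harbour", "boats"}, pending))   # ['req-1']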
- This relates to content finding users, rather than users going looking for content. This provides a solution to a problem of live content failing to find an audience while the content is still live.
- Metadata is used about streams a viewer has watched to build up a profile of what that user may wish to watch in future. Further, metadata generated about a new stream, as it happens, may be used to match it to viewers who may be interested in watching it.
- This feature allows content to go in search of viewers rather than assuming viewers will find it. This may be especially important for live content that may not last long, or may be from a new contributor. In these cases, it may be essential that content creators begin to see that their work is getting the best possible exposure so that they are encouraged to keep creating.
- the system may maintain a list of characteristics that it thinks each viewer likes. It might do this by using the tags, descriptions, locations, or events a stream belongs to, or specific metadata from moments at which a viewer taps their screen, for example, to say they like a stream. By whatever means, the system is able to track each user's tastes and preferences over time.
- Topics/tags/themes/creators that are particularly interesting to a viewer can be followed. This gives the system an explicit signal to curate future content for the viewer that matches the topic/tag/theme, or which comes from a followed creator.
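- One possible sketch of tracking a viewer's tastes from the tags of watched and liked streams; the weighting used here is an arbitrary assumption for illustration.

    from collections import Counter

    class ViewerProfile:
        # Tags from every watched stream accumulate; an explicit like (e.g. a screen tap)
        # weights the tags of that moment more heavily.
        def __init__(self):
            self.tag_weights = Counter()

        def record_watch(self, tags, liked=False):
            weight = 3 if liked else 1
            for tag in tags:
                self.tag_weights[tag] += weight

        def top_interests(self, n=3):
            return [tag for tag, _ in self.tag_weights.most_common(n)]

    profile = ViewerProfile()
    profile.record_watch({"surfing", "beach"})
    profile.record_watch({"surfing", "competition"}, liked=True)
    print(profile.top_interests())   # 'surfing' ranks highest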
- Users who have created videos containing particular types of content can be enumerated so that other users can invite the best creators to help them shoot new content.
- users who have contributed to a particular type of event in the past, or who have built up a reputation for quality contributions on a given topic may receive notifications to begin contributing new content to a new event if they are nearby or close to one of an event's points of interest.
- Metadata may be used to group streams in a geographic area, and create a geofence around it. This is an example use-case of grouping but contains potentially further interesting details about how to detect outliers that improve the geographic grouping approach.
- a rights holder may create a geofence for a match and collect all streams originating within it.
- a new stream starts up and part of its metadata indicates that it is of a football match.
- the system may speculatively create a new event with a geofence matching the previous one in order to catch more relevant streams more quickly and therefore provide better coverage of the whole event.
- Past events, having been identified, can be used to improve the speed with which future ones are detected. This may not allow the system to work out the type of event (e.g. the difference between a football match on one day and a music concert on another day at the same stadium), but the ability to mark streams as likely belonging to an event sooner rather than later may ensure that more streams get correctly identified and included sooner.
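- As an illustrative sketch only, a circular geofence test for deciding whether a new stream's reported location falls within a previously identified event area might look like the following; a real deployment might use polygonal fences and outlier rejection, and the radius and coordinates here are assumptions.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two latitude/longitude points.
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def inside_geofence(stream_lat, stream_lon, fence):
        # fence: dict with centre 'lat'/'lon' and 'radius_m' (e.g. a stadium and its surrounds).
        return haversine_m(stream_lat, stream_lon, fence["lat"], fence["lon"]) <= fence["radius_m"]

    stadium = {"lat": 51.5560, "lon": -0.2796, "radius_m": 400}
    print(inside_geofence(51.5565, -0.2790, stadium))   # True: stream can join the event's group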
- For a customer, streams belonging to the event being identified sooner means that the customer can start selling their event to viewers more quickly. Early streams that might otherwise have been missed or excluded from the event will now be more likely to be correctly identified and tagged, meaning that the customer's event pages - for example their event pages in a content delivery application - carry a more complete picture of the event.
- Such interactivity is possible on a traditional television broadcast but comes with an overhead to manage, time, collate, report, and filter interactive elements.
- an example is to provide voice recognition that allows the system to process the spoken word and pick out key phrases or structure that can be automatically converted into instant interactions. For example, hearing "What do you think of X?" might be converted into a pop-up on a viewer's screen with the text and a text entry box for submissions.
- an example is to pre-configure swipe/gestures to insert particular interactions at certain moments.
- an example is to provide image/shape recognition so that whenever the camera sees, e.g., a can of a particular drink in the image, it shows viewers an overlay that they can tap to get a scannable QR code to redeem a can at a discount.
- Use cases for this modification include: TV; level of engagement; donations (see later); or PPV.
- a value is added with an option to measure the level of engagement of the viewing audience, which can be used to price advertisement spots, shape future creative decisions etc. Deeper analytical information about the audience is made available. Audiences are kept engaged for longer - rich, rewarding interactions will stop the attention of a viewer straying elsewhere.
- a cameraman can spend more time shooting the content and less time curating interactions, and yet still be able to rely on the system to produce meaningful interactions that will generate revenue/intelligence/engagement.
- interaction - particularly with a live event - creates a more intimate experience of the content. Done well, it makes viewers feel more connected to the event and more likely to want to repeat it.
- the metadata created for a stream can be used in an auto-zeitgeist/content summarisation process.
- the method may also be used in combination with creating an event montage.
- An event montage may include stitching video streams together to make a panoramic series of shots, or sequencing parts of contributing streams in order to, for example, follow a particular car/horse in a race; focus on one or more individuals in attendance; pick out clips that are about a particular aspect of the event and so on.
- an event montage is effectively a condensed or summarised version of the whole or multiple captured video streams. It may also offer the additional flexibility of having access to overlapping pieces of captured video streams from which to choose.
- the creation of an event montage may become accessible by generating metadata at the capture device and by providing metadata on a scene level. Further metadata may be generated or added to provide the system with information on multiple captured video streams of the same event and to create different event montages of the same event for the ability of the viewer to choose between alternative camera angles of any given moment.
- the method may add a new dimension for viewing streams that enables the viewers to engage and that gets the viewers interacting with the system more often. It may offer a greater playback flexibility for the viewer who can watch during or after an event and see the whole thing. Viewers can capture an event themselves and, in combination with different captured video streams from the same event, the viewer can collect footage he never shot himself but which shows him in attendance at the event.
- the viewer may be able to dig into a moment and see it from every angle no matter who contributed the captured video streams.
- viewers may ask the system to create a montage of streams from the event that tell a coherent story of it - a so-called event montage.
- Rights to content originating from certain public events such as sports matches, tournaments, music concerts and festivals, amongst others, are typically held by broadcasters, publishers and other similar organisations .
- a first class of problem associated with rights holder content can be summarised as: how can the system, when being used by a consumer to capture content, identify this content as potentially infringing on someone's rights? Having identified these streams, they need to be routed to the rights holder for them to decide whether a given stream should be blocked or should form part of their approved and branded user generated content (UGC).
- UGC approved and branded user generated content
- a second class of problem is therefore: how can each new stream be moderated automatically? What parameters might a rights holder wish to specify in order to define a stream as acceptable or not?
- users are not required to use an application associated with the server 2 to capture live content, and they may use a different platform that does not check for potential rights infringements.
- Unapproved streams can be flagged as infringing and a take-down request issued.
- Watermarking a stream would provide a useful means of checking whether it is approved or not. However, such a method would need to be computationally lightweight so that it could be inserted by the handset (e.g. phone) itself, so that there is no requirement for any significant back-end processing to add it.
- a third class of problem is then: how can we identify content originating from outside the system as being potentially infringing? Further, can we create a watermarking system that is lightweight enough to be performed by, e.g., a smart phone, but computationally complex enough that it is difficult to fake so that, by checking for a watermark, potentially infringing external streams can be validated?
- a user might create a stream using the Meerkat or Periscope applications at an event; a user might stream a TV image of an event; a user might clip a video from YouTube or Snapchat; a user might record a moment on Instagram and post to Facebook or add a video on Twitter.
- Each of these outlets - and there are many more - may provide the rights holder with many video clips which they suspect may infringe their rights.
- the problem therefore is: how can a rights holder automatically examine a collection of videos, from whatever sources, and determine which, if any, were taken from or represent approved streams and which do not so that appropriate action can be taken against them?
- Each stream is provided with a hidden watermark embedded within it.
- This watermark encodes information that identifies the device on which it was captured; the owner of the device; and a moment-by-moment reflection of the time at which the content was captured.
- the rights holder may accept that the content is of their event and approve it.
- the rights holder may accept that it is part of their event and reject it.
- the rights holder may indicate that it does not belong to their event at all.
- the watermark details of approved and rejected streams are added to approved/rejected lists for that event. These lists therefore describe which devices, which users and between which times content was approved or rejected for any given event.
- a system may be constructed which receives suspect videos, of any duration and from whatever source - for example, clips shared on Facebook, streams republished on other platforms, downloadable videos on file sharing sites etc. - such that they may be checked for the presence of a watermark.
- the device, user and timestamp are decoded for each moment and checked against the complete list of event approval and rejection lists for all rights holders.
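- A highly simplified sketch of checking decoded watermark details against per-event approved/rejected lists; the watermark extraction itself is not shown, and the record layout is assumed.

    def classify_suspect_clip(decoded_moments, approval_lists):
        # decoded_moments: per-moment (device_id, user_id, timestamp) tuples decoded from the watermark.
        # approval_lists: per-event dict with 'approved' and 'rejected' records of the form
        # (device_id, user_id, start_ts, end_ts). Returns (event_id, verdict) for the first match.
        for event_id, lists in approval_lists.items():
            for verdict in ("approved", "rejected"):
                for device_id, user_id, start_ts, end_ts in lists[verdict]:
                    for d, u, ts in decoded_moments:
                        if d == device_id and u == user_id and start_ts <= ts <= end_ts:
                            return event_id, verdict
        return None, "unknown"

    lists = {"cup-final": {
        "approved": [("dev-7", "user-3", 1000, 2000)],
        "rejected": [("dev-9", "user-5", 1000, 2000)],
    }}
    print(classify_suspect_clip([("dev-9", "user-5", 1500)], lists))   # ('cup-final', 'rejected')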
- Social media posts often carry geolocation data about where an update originated. This can be used by the system to see if the video may have been created (e.g. through Facebook, Twitter, Instagram etc.) in a location that fell within the spatial and time boundaries of a rights holder's event. If it did, then the suspect video is passed through any automated moderation process defined for that event in order to improve the level of confidence in the assessment.
- a take-down notice may be issued to the publisher since any approved content must have a watermark which appears in the approved list.
- This system can therefore consume any seemingly live stream, delayed stream, video-on-demand (VOD) clip or VOD asset, analyse it and determine whether it is itself, or is part of, an approved stream.
- VOD video-on-demand
- a separate problem occurs with users who create streams that cover an entire event. For example: someone who broadcasts his point of view for the whole football match; someone who streams the full set of songs from a band's performance at a concert.
- Contributors falling into this category can start another stream, but they should be locked out of doing so for a period if their location suggests they are still at the event.
- the rights holder may exert some control over how content is captured at their events. This particular example looks at limiting the duration of such a stream, but the rights holder may also request the contributor to switch to a higher bit-rate/quality or similar, for example.
- the live streaming applications running on the capture devices may include software that is able to detect the presence of a TV screen in the image.
- This detection can be limited to rectangles with a particular aspect ratio (such as 4:3 and 16:9) so that rectangles similarly shaped to TV screens may be picked out.
- Algorithms can be chosen which are able to detect rectangles irrespective of their orientation with respect to the viewer, so a rectangle seen side on would be detected as readily as one seen face on.
- the capture device transforms the contents so that they are mapped to a proper rectangle thereby eliminating any skewing or other deformation arising from the orientation of the TV to the viewer.
- the result is a rectangle of the given aspect ratio filled with the transformed content from the field of view, as shown in Figure 21.
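- One plausible sketch of the screen-rectangle search and rectification just described, using OpenCV (4.x API assumed); the thresholds, the minimum area and the aspect-ratio test on the apparent shape are illustrative assumptions only.

    import cv2
    import numpy as np

    def order_corners(pts):
        # Order 4 points as top-left, top-right, bottom-right, bottom-left.
        s = pts.sum(axis=1)
        d = np.diff(pts, axis=1).ravel()
        return np.float32([pts[np.argmin(s)], pts[np.argmin(d)], pts[np.argmax(s)], pts[np.argmax(d)]])

    def find_candidate_screens(frame, aspect=16 / 9, tolerance=0.25):
        # Return perspective-corrected crops of quadrilaterals that could be TV screens.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        crops = []
        for contour in contours:
            approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
            if len(approx) != 4 or cv2.contourArea(approx) < 5000:
                continue                                   # not a quadrilateral, or too small
            corners = order_corners(approx.reshape(4, 2).astype(np.float32))
            w = int(np.linalg.norm(corners[1] - corners[0]))
            h = int(np.linalg.norm(corners[3] - corners[0]))
            if h == 0 or abs((w / h) - aspect) > aspect * tolerance:
                continue                                   # apparent shape too far from a 16:9 screen
            dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
            matrix = cv2.getPerspectiveTransform(corners, dst)
            crops.append(cv2.warpPerspective(frame, matrix, (w, h)))
        return crops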
- These one or more transformed parts of the captured video may then be further analysed to determine if they do indeed contain TV-style video.
- Tracking the contents from one captured frame to the next allows a "video stream" for each suspected TV to be constructed, from which a video fingerprint can be calculated and submitted to a server.
- the server may use this fingerprint to determine if the rectangle does indeed contain a TV signal and, if so, whether it is acceptable to retransmit this as part of another stream. If not, it may instruct the capture device to blank it out, as shown in Figure 22.
- Viewers seeing this may be further offered the option to pay for access to the protected part of the video, whereupon they would be taken to an approved source of the content, or offered direct access to free-to-air but approved versions instead.
- This approach depends on having detection software in the capture device itself - such a solution may be beneficial to existing live streaming platforms wishing to proactively prevent their users abusing rights holders' content.
- a second approach uses a similar detection process but analyses streams that are already being published.
- Video challenge: such an approach might be used by a rights holder to monitor content found on the internet and pick out those streams, on-demand assets or clips which feature TV footage of their content.
- Video challenge depends upon grouping streams by similarity.
- a collection of media assets (whether live streams, pre-recorded or otherwise, or any part of either, or any combination thereof), grouped in dependence on one or more items of common metadata, may be provided to a function by which these assets are rated in comparison to each other.
- a media asset is an item of content.
- Common metadata here is taken to mean there is a non-trivial intersection between the sets of metadata associated with each stream.
- the rating function may be a computer program that evaluates the assets, or the assets in combination with their associated metadata, in order to determine a relative ranking from which a comparative rating may be derived.
- the rating function may be a system that presents the assets to one or more users for them to provide comparative ratings.
- the system may present all assets at once so the user can provide comparative ratings to each asset while having visibility of all the others.
- the system may present two assets from the collection and prompt the user to choose the asset they would rate lowest. The lowest asset is then replaced, in the user's display, with another asset from the collection. The user is then prompted to repeat the rating process.
- a viewer may be presented with two streams playing side by side. At any moment, the viewer may choose to dismiss one as less interesting than the other. Its place is then taken by the next live stream in the list.
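- A rough sketch of turning such pairwise keep/dismiss choices into comparative ratings; the Elo-style update is assumed purely for illustration and is not mandated by the rating function described above.

    def elo_update(rating_kept, rating_dismissed, k=32):
        # Update two streams' ratings after a viewer keeps one and dismisses the other.
        expected_kept = 1.0 / (1.0 + 10 ** ((rating_dismissed - rating_kept) / 400.0))
        rating_kept += k * (1.0 - expected_kept)
        rating_dismissed -= k * (1.0 - expected_kept)
        return rating_kept, rating_dismissed

    ratings = {"stream-a": 1500.0, "stream-b": 1500.0}
    # The viewer, watching both side by side, dismisses stream-b as less interesting.
    ratings["stream-a"], ratings["stream-b"] = elo_update(ratings["stream-a"], ratings["stream-b"])
    print(ratings)   # stream-a rises, stream-b falls; the next live stream takes stream-b's place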
- this video challenge feature adds a degree of crowd-sourced quality checking to what may be a vast number of streams. This can be used, along with prioritisation techniques described elsewhere herein, to sort the list of possible streams and allow the customer to more easily pick out the streams they wish to feature.
- a greater understanding of the audience is provided, and the ability to improve the quality of future recommendations is provided.
- content gets exposure to a wider audience with greater opportunities to receive and act on feedback from those viewers. Better exposure increases the incentive to create more content. More timely feedback allows the contributor to truly respond to the desires of their audience and create a genuinely bi-directional experience. This helps build an audience beyond a contributor's followers and friends.
- Requests for content might be requests to film specific people or events, but they are not necessarily limited to such requests. They could be requests to:
- the "put" might be fulfilled automatically by the system when it sees a stream that happens to match the requirements, or it may be fulfilled explicitly by someone picking up the order and posting a scheduled event that will feature the requested content.
- the problem therefore is: how can we use viewers' search and fulfilment requests to drive content generation throughout the platform as a whole?
- the viewer may describe the content they are looking for using free-text which in turn is broken down semantically, by the system, into a series of key words and phrases.
- Live content is similarly tagged, in real time, to describe what's going on in the video using an automated process that analyses both the video and audio aspects of the material - this process is covered elsewhere.
- Such a request contains at least a list of tags that describe the required content but may also include a message with more detailed requirements; a deadline for content and notes on intended usage.
- the requester may also offer a fee, or request particular sets of reuse rights. Likewise, they may describe, using tags, things which should not appear in the content.
- Requests, or "calls" for content (to borrow a financial trading term), are entered into an online catalogue that other users may browse. This catalogue is indexed and categorised based on the tags, fees, rights requested and so on.
- Requests may be fulfilled in one of two ways.
- a contributing user may claim a request by offering to provide the live stream as described.
- the system performs its usual audio-visual (AV) analysis of the content and checks off those tags which should or should not appear in the stream.
- AV audio-visual
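- The tag check-off performed by the AV analysis against a request's required and forbidden tags might, with assumed field names, be sketched as:

    def check_fulfilment(detected_tags, required_tags, forbidden_tags):
        # Compare tags produced by the AV analysis with a request's requirements and return
        # a small report the requester could review before accepting or rejecting the submission.
        return {
            "missing": sorted(required_tags - detected_tags),
            "forbidden_present": sorted(forbidden_tags & detected_tags),
            "fulfilled": required_tags <= detected_tags and not (forbidden_tags & detected_tags),
        }

    report = check_fulfilment(
        detected_tags={"sydney-opera-house", "daybreak", "harbour"},
        required_tags={"sydney-opera-house", "daybreak"},
        forbidden_tags={"crowds"},
    )
    print(report)   # {'missing': [], 'forbidden_present': [], 'fulfilled': True}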
- This contributor may also, by arrangement with the requester, submit pre-recorded content in cases where a request cannot be fulfilled at a time that is convenient for the requester to watch live (e.g. 'I'm in London but want a video of the Sydney Opera House at daybreak').
- Requesters may accept or reject a particular submission. Rejections mean that the request stays open for another user to attempt to fulfil it, but also that any rights remain with the contributor.
- the second fulfilment method is an automatic one. As other users engage with the platform, the AV analysis may, from time-to-time, match a stream with a requester's needs. These unintentional matches are made available to the requester to review, either as they happen or later.
- Where the requester believes a stream meets their requirements, they can mark the request as fulfilled or, where a payment or rights are required, the same terms are offered to the contributor who may accept or reject them, closing or leaving the request open respectively.
- Requesters may similarly create an open request for content that alerts them any time interesting content becomes available. This feature, while using much of the same logic, delivers more of an active search function that allows content to find viewers in real time.
- requests, or active searches may be matched explicitly by contributors seeking to fulfil the request, or casually by contributed streams that happen to match.
- the viewer may up or down vote the stream based on how well they perceive the content matches the search. These votes affect that contributor's standing or reputation score as a provider of material categorised in each of the tags present.
- the content delivery technique described herein is able to fulfil those needs.
- requesters may configure active searches/requests for content and wait for user-generated content to be matched.
- a rights holder may offer a bounty to members of the public in exchange for capturing content of a poorly covered part of their event using live statistics.
- a paywall provides for automated pricing based on metadata.
- An extended ability to generate and process metadata at a capture device may also be used to offer more opportunities to monetise captured video streams.
- the described method and system may address that gap through the ability to generate and provide metadata that also takes into account the monetisation process that the system provider intends to accomplish.
- System providers or publishers of the content wishing to monetise the viewing streams may configure open-access periods during their broadcasts (e.g. the first 2 minutes are free to all; certain subsequent portions are open; etc.), while setting premiums on the remainder. Viewers may gain access to the content on a pay-per-view basis or via a subscription. Metadata that allows for the distinction can be used to control the access.
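- As an indicative sketch only (the window format and the subscriber/pay-per-view checks are assumptions), access control driven by such metadata might look like:

    def access_allowed(position_s, open_windows, viewer):
        # open_windows: list of (start_s, end_s) ranges that are free to all viewers, taken from
        # metadata configured by the publisher; everything else needs a subscription or a purchase.
        in_open_window = any(start <= position_s < end for start, end in open_windows)
        return bool(in_open_window or viewer.get("subscribed") or viewer.get("purchased_ppv"))

    windows = [(0, 120)]                                            # e.g. the first 2 minutes are free to all
    print(access_allowed(45, windows, {"subscribed": False}))       # True: inside the free window
    print(access_allowed(300, windows, {"subscribed": False}))      # False: premium portion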
- a monetisation process may also allow a subscription to a particular content creator's output, or to a cross-contributor selection organised by theme. For example: a subscription to 'cat videos' would provide access to any premium cat video output from any participating contributors, amongst whom proceeds would be distributed based on viewing figures. Metadata that is generated at the capture device may be used to support the organisation into such themes as well as to consider contributors when allowing access to viewing streams.
- a subscription method that involves a matching of criteria to which viewers have access with metadata of captured video streams may support a dedicated offering mechanism, and with such a method contributors may control to whom their premium content is offered. Beyond traditional restriction options such as location, device-type and age, contributors may consequently also limit access of the captured video streams by user group.
- a university offers a range of online courses where participants may watch lectures live using the described system.
- the university does not wish to encourage full-time students to skip attendance and so limits the availability accordingly.
- Contributors may control to whom their premium content is offered. Beyond traditional restriction options such as location, device-type and age, contributors may also limit access by user-group. For example, a university offers a range of online courses where participants may watch lectures live using the described content delivery system. The university does not wish to encourage full-time students to skip attendance and so limits the availability accordingly.
- contributors may offer their content for sale directly thereby operating a market for requests for content and submissions against the same.
- Contributors may set thresholds for the sale of their content to broadcasters, who may create automatic bidding rules for particular categories of content (akin to the ad market Google operates with AdWords).
- Donations allow a publisher to solicit donations from viewers as an alternative or supplemental revenue stream.
- Donation buttons that launch a simple payment process can be placed over the live stream throughout an event or, by using a director or pro-publisher application, placed on screen manually at key moments or even automatically in response to certain phrases being said by the speaker.
- Live statistics uses metadata that describes what happens in a section of content to create performance metrics.
- the system and described method may also be used to increase the quality and the usage of statistics of content consumption.
- Metadata that is generated at the capture device may provide considerable information on the captured video stream without increasing the load at the server side to an unaffordable level.
- Metadata on scene level may be used to infer statistics about how the viewers of the viewing streams engaged with the video stream, far beyond basic information such as how many viewers consumed the stream or for how long a viewer watched a viewing stream.
- An aspect of the method may include matching the period, and corresponding time stamps, during which a viewer watched a viewing stream with the metadata associated with those time stamps in the video.
- the method may include the generation of metadata which allows for separating the captured video stream into chapters of different topics.
- the generation of metadata for a captured video stream recording a public political debate can be used to find and provide headlines of topics of what the debate is about. Consequently the metadata of the captured video stream, representing a clear correlation between time stamp and topic, may be compared to statistics of a viewer's consumption behavior. Further interpretation of what viewers find interesting - and what not - can be accomplished.
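- An indicative sketch of matching a viewer's watched periods against topic chapters derived from scene-level metadata; this is interval arithmetic only, and all names and numbers are assumed.

    def seconds_watched_per_topic(watched_periods, chapters):
        # watched_periods: list of (start_s, end_s) ranges the viewer actually watched.
        # chapters: list of (start_s, end_s, topic) derived from scene-level metadata.
        # Returns seconds of overlap per topic, e.g. which debate topics held the viewer's attention.
        totals = {}
        for c_start, c_end, topic in chapters:
            for w_start, w_end in watched_periods:
                overlap = min(c_end, w_end) - max(c_start, w_start)
                if overlap > 0:
                    totals[topic] = totals.get(topic, 0) + overlap
        return totals

    chapters = [(0, 600, "healthcare"), (600, 1200, "housing"), (1200, 1800, "transport")]
    watched = [(0, 650), (1500, 1800)]
    print(seconds_watched_per_topic(watched, chapters))
    # {'healthcare': 600, 'housing': 50, 'transport': 300}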
- This may also be used in an aspect in which the live streams are captured and metadata is generated in parallel to the stream.
- Real-time analysis of the viewer's consumption behavior in consideration of the generated metadata during a speech may be accomplished to provide real-time feedback of the viewer's behavior to the speaker with regard to what the speaker talked about.
- Statistics are applicable to many questions and aspects of a video event and in particular to live events.
- real-time analytics or real time stats are discussed, meaning when particular selected measures are shown to, e.g., a speaker in real-time during a speech.
- Such information can be displayed clearly and easily. It can indicate, for instance, at the start of an event that viewers slowly log into the event and start looking. There might also be some typical variation in the number of viewers due to a certain percentage who just enter a web page but leave it again after a few seconds.
- Such an organisation may wish to see, in real time, how their event is being covered by members of the public.
- a race track (horses, cars, etc..) or a golf course.
- Action may be concentrated in particular areas but there may, at times, be places where something unexpected is developing but which is not being covered.
- Rights holders in these cases, may decide to offer a "bounty" to contributors who help bring in additional coverage of a part of their event.
- Real-time analytics presented in a simple way that seek to answer well defined questions (such as the level of coverage an event currently enjoys across its extent) would be of value to rights holders and event managers alike.
- the description of the real-time statistics shows that a solution for that idea needs, in the first instance, real-time data.
- the number of viewers and particulars of current viewers is one good example.
- the speaker should be able to input his current interest, i.e. the question on which he wants to get feedback during the speech. In a standard solution, a pull-down menu of accessible real-time feedback options could be used to let the speaker decide what to get; in the discussed example, "the real-time variation of the audience".
- a question could also be selected by a matrix of questions and subcategories for which data points are accessible.
- it would need, for instance, the age of the viewer gathered from profiles of the subscribed audience.
- a subcategory would then be "with the age of 60 and above".
- the smiley's facial expression indicates a third piece of information which is gathered by an additional data beacon.
- the real point is that, in dependence on the accessible data points, there is some predefined information usable to give relevant feedback to a speaker during his speech; with more data points, multiple combinations become possible that could help to answer more sophisticated questions in real time.
- a capture device 12a is connected to a voice recognition module 428, an edit module 430, and a so-called director 432.
- a set of captured images may be provided by the capture device 12a to the voice recognition module 428, in order to edit the video stream associated with the captured images.
- a video stream may then be transmitted, for example to the video streaming server, which has been appropriately edited.
- a capture device 12a is connected to an interface to cloud services 438, an edit module 442, and a director module 444.
- Cloud translation services are provided as denoted by reference numeral 440.
- the captured images are sent to cloud services for further editing, before being returned to the capture device, and the capture device then provides a video stream which is appropriately adapted.
- the capture device may provide multiple video streams which have been augmented in different ways, in combination with the raw video stream.
- FIG. 28 there is illustrated an arrangement in which incoming video streams are grouped at a video streaming server in accordance with the recognition of fingerprints contained within the video streams .
- a capture device 358 includes a capture audio sample module 362, a video content module 360, a wireless transmission module 366, and a clock 364.
- a server contains first and second buffers 378a and 378b, first and second fingerprint recognition modules 380a and 380b, first and second caches 382a and 382b, and a fingerprint grouping module 382.
- the capture device and the server are connected via a network 372, the network receiving a stream on line 368 from the capture device 358, and generating a stream A and a stream B on lines 374 and 376 to the server.
- the network also receives further streams as represented by line 370.
- a server is configured to include a determine location module 400, a determine time module 402, a determine POV module 404, a determine other tags/characteristics module 406, an allocate stream to group module 408, a stream multiplexing module 412, and moderator modules 414 and 416.
- incoming video streams may be grouped, for example, in dependence upon the time of the content of the video stream, the location of the capture device associated with the video stream, the point of view of the capture device of the video stream, or other tags/characteristics of the video stream associated with the device, or other metadata associated with a video stream.
- a grouped video stream may be provided for viewing, or may be provided to a moderator or other element for further processing.
- the video streaming server may receive a control signal in order to define a group, and the control signal may be received from a viewing device or from an event organiser .
- FIG. 30 there is illustrated an interface 430, a memory 432, a comparator 434, an address module 436, and an interface 438.
- Figure 30 illustrates an example in which a viewing device provides the content server with metadata which is stored in a memory of the content server, which metadata illustrates tags or characteristics which the viewing device is interested in receiving video streams for, if the video streams are associated with those tags or characteristics.
- a received video stream is compared with the metadata stored in the memory, and if a match is identified then an address module attaches the appropriate address to the video stream and provides it to an interface for delivery to the viewing device.
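- A sketch of the comparator and address module behaviour just described, with assumed data shapes (registered interests held as tag sets keyed by a delivery address):

    def deliver_matching_streams(stream_metadata, viewer_interests):
        # viewer_interests: mapping viewer_address -> set of tags that viewer registered an interest in.
        # Returns the addresses to which the incoming stream should be forwarded.
        stream_tags = set(stream_metadata.get("tags", []))
        return [address for address, wanted in viewer_interests.items() if wanted & stream_tags]

    interests = {
        "10.0.0.5:9000": {"athletics", "marathon"},
        "10.0.0.8:9000": {"cycling"},
    }
    print(deliver_matching_streams({"tags": ["marathon", "city-centre"]}, interests))   # ['10.0.0.5:9000']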
- FIG. 31 in which there is illustrated an interface 440, a memory 444, a comparator 442, an event allocate module 446, and an interface 448.
- Figure 31 illustrates a similar example, in which the metadata history associated with viewing devices is stored in a memory.
- a viewing device may not have indicated to the content server that they particularly want to receive video streams at this instant.
- the content server may compare the metadata of the incoming video stream with metadata associated with historical requests from viewing devices, and accordingly allocate the video stream to a viewing device, or to an event, in dependence on the comparison.
- a module to determine a reputation of a contributor 524, a module to determine quality of a stream 526, a module to determine compatibility of a search 528, a rank module 536, an edit module 538, a moderator module 542, a review module 544, a recommendation module 546, and a selection module 548 are provided.
- an incoming video stream 522 may be processed by a streaming server in order to rank the video stream, and then allocated to one of various modules to provide streams 550, 560, 580, 582.
- the ranking may be based upon the reputation of a contributor associated with a capture device, the determined quality of the video stream, or the determined compatibility of the video stream to search parameters provided by a search criteria.
- the incoming video streams are ranked, and optionally edited, before being sent out to a viewing device, optionally via a moderator, a review process, a recommendation process or a selection process.
- All the examples and embodiments described herein may be implemented as processes in software.
- the processes (or methods) may be provided as executable code which, when run on a device having computer capability, implements a process or method as described.
- the executable code may be stored on a computer device, or may be stored on a memory and may be connected to or downloaded to a computer device.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Computer Networks & Wireless Communication (AREA)
- Library & Information Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Astronomy & Astrophysics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Information Transfer Between Computers (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
There is disclosed a system for providing streaming services, comprising: a plurality of capture devices, each for capturing data and providing a captured data stream; and a server, for receiving the plurality of captured data streams; wherein each capture device is configured to generate metadata for the captured data, and transmit said metadata to the server.
Description
MEDIA STREAMING
BACKGROUND TO THE INVENTION
Field of the Invention
The present invention relates to the provision and/or the use of metadata in conjunction with associated content.
Description of the Related Art
It is known to provide data (which may also be referred to as content data) and metadata. The metadata is data describing or defining attributes of the data.
In a system in which a central server provides services for data received from multiple sources, it is typically required that the metadata for the data is generated at the server. This potentially requires the server to perform processing for multiple sources of data and is processor intensive for the server.
It is a first aim to reduce the processing required to be carried out in such a scenario by a centralised server .
It is known to provide content streamed from devices which capture data to other devices which consume or view the data.
A system in which a central server receives data streamed from many devices and makes multiple streams available for consumption by other devices is an example of a scenario where the identification and viewing of relevant content for the other device can be difficult.
In such an example system in which a central server provides captured content to be viewed, if a large volume
of captured content is provided it may be difficult to usefully present this, for example for viewing.
It is a second aim to usefully present content.
In an example system in which a central server provides captured content to be viewed, it may be difficult to distinguish between content - even if it meets particular criteria - particularly if the objective is to provide a live feed. This is particularly so if the live feed is to be edited or moderated.
It is a third aim of the invention to usefully order content.
Further, in such an example system, where the central server receives multiple data streams it can be difficult for any editing or moderation of the content of received streams to be applied, in order for these streams to then be viewed live by other devices.
It is desirable to allow for moderation or editing of captured data streams to allow them to be viewed efficiently as live streams.
SUMMARY OF THE INVENTION
An example implementation of a system with a central server concerns the capture of images and the generation of associated video streams in a video streaming environment comprising a plurality of capture devices (such as mobile phones) connected to a streaming server via one or more networks such as the Internet. In dependence on the received captured video streams the streaming server may generate one or more further video streams for viewing by one or more viewing devices.
In a first aspect there is provided a system for providing streaming services, comprising: a plurality of capture devices, each for capturing data and providing a captured data stream; and a server, for receiving the plurality of captured data streams; wherein each capture device is configured to generate metadata for the captured data, and transmit said metadata to the server.
The metadata may be transmitted to the server in the captured data stream.
The metadata may be transmitted to the server on one or more metadata streams additional to the captured data stream.
The metadata may be synchronised with the associated captured data for transmission from the capture device.
The metadata and the captured data may be associated with a common time line, wherein the server determines the time line and synchronises the captured data and the metadata based on the time line. The server may determine the time line from the captured data stream. The server may determine the time line from the one or more metadata streams.
The capture devices may communicate with a further device or server in order to obtain additional data for the captured data stream. The capture device may communicate with a further device or server to transcribe the captured data stream. The further device or server may provide a geo-location for the capture device.
The system may further comprise a viewing device, the server outputting a viewing stream to the viewing device. One or more captured data streams may be routed to the viewing device in dependence on the metadata associated with the captured data streams. The viewing device may be associated with a moderator of content or an editor of content.
In this aspect there may be provided corresponding methods and processes.
Video streams may be grouped which have a common location or a common time or matching or overlapping time lines. The location and time may be provided as meta-data at the capture device. Other tags can be added as meta-data, e.g. tags identifying somebody that has requested to be filmed so any stream identifying them can be grouped.
The existing video stream may be augmented or one or more additional augmented video streams may be created. The augmented video stream, or the video stream and the one or more augmented video streams may be transmitted to the server. Where a video stream is augmented, and an additional augmented video stream is created, the metadata associated with the video stream is also modified, to reflect the augmentation.
The augmentation of the video stream may be achieved by transmitting the video stream to a processing server which processes the video stream. The processing may comprise speech-to-text conversion, and may comprise providing the video stream to a speech-to-text recognition module. After such processing, the processing server transmits the results of its processing back to the capture device. An augmented video stream is then available from the capture device, and the metadata for the video stream is suitably modified to reflect the speech-to-text conversion.
The metadata derived from the stream is sent back to the capture device.
The processing may comprise providing video or audio of the video stream to a cloud-based speech-to-text recognition service.
An augmented video stream may be a video stream with subtitles.
The metadata that is generated for the augmented video stream may be transmitted from the capture device to the streaming server together with the original video stream. The metadata may be provided in a metadata stream separate to the video stream.
The method may comprise providing the data stream and/or the augmented video stream based on the metadata to one or more viewing devices. The video stream and the augmented video stream may be provided to different viewing devices.
The augmented video stream may comprise the video stream with additional data. The additional data may be meta-data. The meta-data may be used for grouping the video stream. The meta-data may be used for recommendations/invitations/searching/searching within content.
The creation of metadata associated with the additional or augmented content helps improve the discoverability of the content at a server.
In a second aspect there is provided a system for providing streaming services, comprising: a plurality of capture devices, each for capturing data and providing a captured data stream; and a server, for receiving the plurality of captured data streams and outputting at least one output stream, wherein the server is configured to dynamically group the captured data streams in dependence on metadata associated with the captured data streams.
The metadata for a captured data stream may be received from the associated capture device.
The metadata for a captured data stream may be determined at the server. The server may be configured to dynamically group the streams in dependence on a current definition of a group. The server may be configured to dynamically group the streams in dependence on the current metadata of the data streams. The captured data streams allocated to a group may be output to an editing device. The editing device may control the output data stream from the server for the group.
The captured data streams may be prioritised in dependence on the metadata associated with the captured data streams of the group. The captured data streams within each group may be prioritised.
Control data to be applied to a captured data stream may be provided from an external source. The control data may comprise a set of rules. The set of rules may define a group. The set of rules may be used to allocate data to a group.
In this aspect there may be provided corresponding methods and processes.
The identification of the groups may be dynamic.
This dynamic nature of the groups may be manifested in two ways. Firstly, data streams may be allocated to groups on a dynamic basis in dependence on a current definition of a group. If a definition of a group changes, then the allocation of current data streams to that group may change. The definition of a group may change as rules defining a group may change. Secondly, the allocation of a data stream to a group may dynamically change. This may be because the metadata associated with the data stream indicates that the data stream should no longer be allocated to a particular group, for example.
The dynamic behaviour may or may not be applicable to all the aspects and embodiments herein.
In this aspect there may be provided a method comprising: receiving a plurality of video streams from a plurality of capture devices; and grouping two or more video streams. The grouping of video streams may be dependent upon meta-data associated with a video stream. The video streams may be received from a plurality of independent capture devices.
The method may comprise identifying a location associated with the capture device providing the received video stream based on the location provided in the metadata, and grouping video streams according to location. The method may comprise identifying a time associated with the video stream, and grouping video streams associated with a predetermined time. The method may comprise identifying a direction-of-view of a capture device providing a video stream, and grouping video streams in dependence on the point of view.
The method may comprise grouping video streams in dependence on common characteristics of video streams, which may be the metadata of the streams. The method may comprise receiving one or more characteristics, and grouping video streams associated with that characteristic. The characteristic may be received from a user. The characteristic may be received from a user of a capture device. The characteristic may be received from a user associated with a device which is being used to view a video stream. The characteristics may be received from the respective devices, i.e. an automated process without any active input from the users.
Grouped video streams may be provided to viewing devices associated with the group.
Where the method involves capturing the direction of view, there may be associated a compass bearing. The compass bearing may be a compass bearing captured from a compass application or function of the capture device. A direction of view throughout the stream may be recorded as time line metadata which is associated with the stream. In this way, the direction of view at any moment can be identified.
The method may involve identifying a user who has requested that he be filmed so that any stream identifying him is grouped.
Video streams may be grouped which have a common location or a common time or matching time.
In general, grouping identifies a cluster of streams so they can be grouped and readily noticed. The streams can be grouped in dependence on the content and/or the meta-data associated with them.
There may be provided an adaptation in which previous clusters are associated with geo-locations so that events can more easily be spotted in group streams.
The method may comprise grouping two or more video streams in dependence on determined metadata. The video streams grouped in dependence on metadata may be addressed to viewing devices which have identified the associated metadata in a configuration step. The metadata may define a tag, description, location, or event.
Watermarking may be utilised with grouping. A record may be kept of watermarks belonging to streams which have been allocated to particular groups. This allows a future stream (or clip of a stream) to be checked to see if it was previously allocated to a group. The watermark is not a determining factor in how a stream is allocated to a group. It is a method to check if a stream was allocated to a group and, if so, which group.
There may thus be provided a method of: receiving a portion (or clip) of a video stream from a capture device; determining a video watermark in the portion of the video stream; and determining if the portion of the video stream is associated with a video stream previously allocated to a group in dependence on the determined watermark for that stream. This aspect may be referred to as video water-marking.
The filtering may accept or reject a video stream in dependence on the determined video watermark.
In dependence on a determined video watermark, a video stream may be routed for further processing. The further processing may be associated with a content rights holder. A determined video watermark may identify an association with a rights holder.
A video stream may be identified as being associated with a captured device in dependence on the presence of the video watermark.
The method may be implemented in a server which may be adapted to receive captured data streams from registered capture devices, such that a video stream provided by the server for viewing is an authorised video stream. The streaming server preferably selects a number of streams which it provides to the viewing device. Any video stream viewed and including the video-watermark may be recognised as having been generated by a particular server.
In a third aspect there is provided a system for providing streaming services, comprising: a plurality of capture devices, each for capturing data and providing a captured data stream; and a server, for receiving the plurality of captured data streams and outputting at least one output stream, wherein the server is configured to dynamically prioritise the captured data streams in dependence on metadata associated with the captured data streams.
The server may be configured to dynamically prioritise the streams in dependence on a current definition for prioritisation.
The server may be configured to dynamically prioritise a data stream in dependence on the current metadata of that data stream.
The captured data streams may be grouped, and then within each group the data streams are prioritised in dependence on the metadata associated with the captured data streams.
The priority of a captured data stream may be used to determine the output of that captured data stream from the server.
The metadata for a captured data stream may additionally include a prioritisation score. The prioritisation score for a captured data stream may be dynamic. The prioritisation score may be based on a reputation of a user associated with the capture device. The metadata for a captured data stream may additionally include feedback data from a device receiving the output stream from the server.
Where the device has made a request for content from the server, the prioritisation score may be indicative of the relevance of the captured data stream to that request.
The feedback may be used to adjust a prioritisation score of the captured device data stream. The prioritisation score may be a viewer rating.
The metadata for a captured data stream may additionally include feedforward data based on the capture device from which the captured data stream is provided. The feedforward data may be used to adjust a prioritisation score of the captured data stream. The prioritisation score may be a capture device rating.
The captured data streams may be edited in dependence on their priority.
The server may edit captured data stream based on a set of rules. The set of rules may apply to a group.
The set of rules may be used to allocate a data stream to a group in dependence on the metadata of the data stream.
The server may additionally edit captured data streams in dependence on a received control signal.
The metadata for a captured data stream may be received from the associated capture device. The metadata for a captured data stream may be determined at the server .
The server may edit the captured data stream by applying an overlay to the captured data stream.
The server may group the captured data streams in dependence on the metadata associated with the captured data streams. The server may edit the captured data stream by applying an overlay to all captured data streams allocated to a given group. The applied overlay may indicate a branding. The applied overlay may indicate a rights assignment of the content of the captured data stream.
In this aspect there may be provided corresponding methods and processes.
Any method or process may be implemented in software. Any method or process may be provided as a computer software code which, when executed on a computer, performs the associated method or process.
This aspect provides, in an example, a method comprising: receiving a plurality of captured video streams from a plurality of capture devices; extracting metadata associated with the received video streams; analysing the extracted metadata; and applying a priority to each captured data stream in dependence on the analysis of the metadata.
The identification of the priorities may be dynamic. This dynamic nature of the prioritisation may be manifested in two ways. Firstly, data streams may be allocated to priorities on a dynamic basis in dependence on a current definition of a priority. If a definition of a priority changes, then the allocation of current data streams to that priority may change. The definition of a priority may change as rules defining a priority may change. Secondly, the allocation of a data stream to a priority may dynamically change. This may be because the metadata associated with the data stream indicates that the data stream should no longer be allocated to a particular priority, for example.
The allocation of priority to streams may be used to provide an ordered list of streams, with those of the highest priority at the top of that ordered list and those of the lowest priority at the bottom of that ordered list. The ordered list may then be used for further processing of the streams, with the highest priority streams being processed first. If only a certain number of streams can be processed, then the highest priority streams are processed.
The method may further comprise a step of moderating the video streams in dependence on the prioritising step. The method may further comprise a step of recommending the video streams in dependence on the prioritising step. Rules for moderating or recommending may be set by an event manager. The highest priority streams may be moderated or recommended first, and streams generally processed in accordance with the ordered list.
Where the method comprises a step of moderating, the moderating may comprise an automated step. All streams may be applied to an auto moderator. Some streams may be allowed through because they comply with rules which have been set for the event. Other streams which do not comply with the rules may not be allowed through, or may be forwarded to a manual moderator. The prioritising is used to prioritise the streams (which may be a group of streams) for delivery to the moderator.
The method may include identifying the source of the video stream. The method may include the reputation of the source as a criterion. The method may include the reputation of the source in a particular category - e.g. person A has a good reputation for filming outdoor content, but a poor reputation for filming indoor content. If the content person A is currently providing is indoor, then their reputation as measured by this tag is used.
The method includes identifying the quality of the video stream, and including the quality of the source as a criterion.
The method includes identifying the compatibility of the video stream with a search criterion, and including the compatibility as a criterion. A search criterion may be provided by a viewing device. The compatibility with the search criterion may be used to assign a priority to a stream.
The priority may be used in one or all steps of: moderating the content; reviewing the content; recommending the content; or selecting the content.
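By way of illustration only, the following minimal Python sketch shows how a priority might be computed from the criteria above (reputation in the current category, stream quality and compatibility with a search criterion) and how the resulting ordered list could select streams for moderation. The field names, weights and capacity value are assumptions made for this example and are not prescribed by the method.

    # Illustrative sketch only: scoring and ordering captured streams.
    # Field names ("reputation_by_category", "quality", "tags"), the weights
    # and the capacity value are assumptions, not part of the described method.
    def priority_score(stream_meta, search_tags):
        # Reputation is taken for the category the stream currently falls into,
        # e.g. "outdoor" rather than "indoor".
        category = stream_meta.get("category", "default")
        reputation = stream_meta.get("reputation_by_category", {}).get(category, 0.0)
        quality = stream_meta.get("quality", 0.0)          # e.g. 0.0 .. 1.0
        tags = set(stream_meta.get("tags", []))
        # Compatibility with a search criterion provided by a viewing device.
        compatibility = len(tags & set(search_tags)) / max(len(search_tags), 1)
        return 0.4 * reputation + 0.3 * quality + 0.3 * compatibility

    def order_streams(streams, search_tags, capacity):
        # Highest priority first; only the top 'capacity' streams are passed on,
        # for example to moderation or recommendation.
        ranked = sorted(streams, key=lambda s: priority_score(s, search_tags),
                        reverse=True)
        return ranked[:capacity]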
All parts of grouping and all parts of prioritisation may be implemented separately or in combination, with any individual part of grouping being combined with any individual part of prioritisation.
The second and third aspects relating to grouping and prioritisation may also provide a method comprising: receiving at least one captured video stream from a capture device; editing the video stream; and providing the edited video stream. This editing may also be considered as filtering streams.
Editing the video stream may comprise overlaying content to the video stream. The overlaid content may comprise adding a link or content to the video stream.
The editing may comprise recognising an image in the captured video stream, and editing the captured video stream to remove that image. This step may be performed at a video streaming server.
There may be provided pre-recorded content, the editing comprising combining the captured video stream
and the pre-recorded content. This step may be performed at a video streaming server.
The editing may comprise manipulation of the captured data stream.
There may be provided interactive content, the editing comprising combining the captured video stream and the interactive content, without altering the video data. The interactive content may be a donation link.
The editing may comprise manipulation of the captured data stream to include interactive content. The interactive content may be a donation link.
The term editing refers to manipulation of the video stream. The term does not refer to altering the actual content, i.e. the video data, within the video stream, but to in some way adjusting the video stream presented. Where overlays are applied, they are not applied like special effects. Rather the viewing device receives the unaltered video together with a set of time-coded instructions that describe the overlay content. These instructions may be added as metadata to the stream, but may also be provided as a separate resource. Thus the term 'edit' is used in a general sense.
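By way of illustration only, a minimal sketch of such time-coded overlay instructions is set out below in Python. The structure, field names and URLs are assumptions made for the example; the key point is that the instructions travel alongside the unaltered video and are evaluated by the viewing device.

    # Illustrative sketch: time-coded overlay instructions carried alongside the
    # unaltered video. Structure, field names and URLs are assumptions.
    overlay_instructions = [
        {"start": 12.0, "end": 20.0, "type": "branding",
         "asset_url": "https://example.com/overlays/logo.png",   # hypothetical
         "position": {"x": 0.05, "y": 0.05}},                    # normalised
        {"start": 45.0, "end": 60.0, "type": "link",
         "label": "Donate",
         "target_url": "https://example.com/donate"},            # hypothetical
    ]

    def active_overlays(instructions, playback_time):
        # The viewing device decides which overlays apply at the current time;
        # the video data itself is never altered.
        return [i for i in instructions if i["start"] <= playback_time < i["end"]]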
The method may further comprise identifying a plurality of video streams as being associated with each other; retrieving a token from one of the video streams, and editing the video stream by providing only the video stream having the token. The method may comprise receiving a request for the token from a capture device, and transmitting the token to the capture device. The method may comprise storing the identity of the capture device to which the token has been transmitted. A
plurality of video streams are thus filtered in dependence on a token.
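A minimal sketch of such token-based filtering, under the assumption that the server issues the token and records the capture device holding it, is set out below; the class and method names are illustrative only.

    # Illustrative sketch of token-based filtering; names are assumptions.
    import secrets

    class TokenFilter:
        def __init__(self):
            self.token = secrets.token_hex(16)
            self.holder = None   # identity of the capture device holding the token

        def request_token(self, capture_device_id):
            # Transmit the token to the requesting capture device and store
            # which device it was transmitted to.
            self.holder = capture_device_id
            return self.token

        def filter_streams(self, streams):
            # 'streams' maps capture device identity -> stream; only the stream
            # from the device holding the token is provided onwards.
            return [s for device_id, s in streams.items() if device_id == self.holder]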
The method may comprise establishing a session for the provision of the stream, wherein the session is associated with a predetermined time, wherein the step of editing comprises terminating the session when the predetermined time elapses. The method may further comprise inhibiting the establishment of a further session with that capture device or a user associated with that capture device for a further predetermined time period once that predetermined time period expires. The step of inhibiting may be dependent on the capture location being located in a geolocation associated with the first session.
The method may comprise identifying the presence of a rectangle in a video stream, and processing the video stream to determine if the video stream comprises protected content, wherein the step of editing comprises inhibiting streams comprising protected content. The method may comprise identifying a deformed rectangle and processing the image to provide a rectified rectangle.
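By way of illustration, a rough sketch of such rectangle detection and rectification is given below using OpenCV and NumPy (the choice of library is an assumption; the description does not prescribe one). Corner ordering and the subsequent protected-content check are omitted for brevity, and the OpenCV 4 return signature of findContours is assumed.

    # Rough sketch only: locate a (possibly deformed) rectangle in a frame and
    # rectify it for a protected-content check.
    import cv2
    import numpy as np

    def find_and_rectify_rectangle(frame, out_w=640, out_h=360):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(grey, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for contour in sorted(contours, key=cv2.contourArea, reverse=True):
            approx = cv2.approxPolyDP(contour,
                                      0.02 * cv2.arcLength(contour, True), True)
            if len(approx) == 4:   # a candidate rectangle, possibly deformed
                src = approx.reshape(4, 2).astype(np.float32)
                dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]],
                               dtype=np.float32)
                matrix = cv2.getPerspectiveTransform(src, dst)
                return cv2.warpPerspective(frame, matrix, (out_w, out_h))
        return None   # no rectangle found in this frame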
BRIEF DESCRIPTION OF THE FIGURES:
The invention will now be described by way of reference to the accompanying drawings, in which:
Figure 1 illustrates an exemplary architecture in which described examples may be implemented;
Figure 2 illustrates an exemplary process for generating metadata;
Figure 3 illustrates an exemplary system in which content is augmented;
Figure 4 illustrates an exemplary watermarking process;
Figure 5 illustrates an exemplary system for watermarking;
Figures 6 to 10 each illustrate an exemplary aspect of a watermarking example;
Figure 11 illustrates an exemplary watermarking process;
Figure 12 illustrates an example of a watermarking system;
Figure 13 illustrates an exemplary generation of metadata;
Figures 14 to 16 illustrate an example relating to grouping;
Figure 17 illustrates an example grouping process;
Figure 18 illustrates an example of metadata;
Figure 19 illustrates an example relating to watermarking;
Figure 20 illustrates an example related to prioritisation;
Figures 21 and 22 illustrate examples;
Figure 23 illustrates an example relating to grouping;
Figure 24 illustrates an example relating to priority;
Figure 25 illustrates an exemplary process for handling requests;
Figures 26 and 27 illustrate the exemplary augmentation of data;
Figures 28 to 31 illustrate the exemplary grouping of streams; and
Figure 32 illustrates the exemplary prioritisation of streams.
DESCRIPTION OF PREFERRED EMBODIMENTS:
With reference to Figure 1 there is illustrated a system architecture within which embodiments of the invention may be implemented.
With reference to Figure 1 there is illustrated: a plurality of devices, labelled capture devices, denoted by reference numerals 12a, 12b, 12c; a plurality of devices, labelled viewing devices, denoted by reference numerals 16a, 16b; a device, labelled editing device, denoted by reference numeral 20a; a network denoted by reference numeral 4; and a server denoted by reference numeral 2.
Each of the devices 12a, 12b, 12c is referred to as a capture device as in the described embodiments of the invention the devices capture content. However the devices are not limited to capturing content, and may have other functionality and purposes. In examples each capture device 12a, 12b 12c may be a mobile device such as a mobile phone.
Each of the capture devices 12a, 12b, 12c may capture an image utilising a preferably integrated image capture device (such as a video camera), and may thus generate a video stream on a respective communication line 14a, 14b, 14c. The respective communication lines 14a, 14b, 14c provide inputs to the network 4, which is preferably a public network such as the Internet. The communication lines 14a, 14b, 14c are illustrated as bi-directional, to show that the capture devices 12a, 12b, 12c may receive signals as well as generate signals.
The server 2 is configured to receive inputs from the capture devices 12a, 12b, 12c as denoted by the bi-directional communication lines 6, connected between the server 2 and the network 4. In embodiments, the server 2 receives a plurality of video streams from the capture devices, as the signals on lines 14a, 14b, 14c are video streams.
The server 2 may process the video streams received from the capture devices as will be discussed further hereinbelow.
The server 2 may generate further video streams on bi-directional communication line 6 to the network 4, to the bi-directional communication lines 18a, 18b, associated with the devices 16a, 16b respectively.
Each of the devices 16a, 16b is referred to as a viewing device as in the described embodiments of the invention the devices allow content to be viewed. However the devices are not limited to providing viewing of content, and may have other functionality and purposes. In examples each viewing device 16a, 16b may be a mobile device such as a mobile phone.
The viewing devices 16a and 16b may be associated with a display (preferably an integrated display) for viewing the video streams provided on the respective communication lines 18a, 18b.
A single device may be both a capture device and a viewing device. Thus, for example, a mobile phone device may be enabled in order to operate as both a capture device and a viewing device.
A device operating as a capture device may generate multiple video streams, such that a capture device such as capture device 12a may be connected to the network 4
via multiple video streams, with multiple video streams being provided on communication line 14a.
A viewing device may be arranged in order to receive multiple video streams. Thus a viewing device such as viewing device 16a may be arranged to receive multiple video streams on communication line 18a.
A single device may be a capture device providing multiple video streams and may be a viewing device receiving multiple video streams.
Each capture device and viewing device is connected to the network 4 with a bi-directional communication link, and thus one or all of the viewing devices 16a, 16b may provide a signal to the network 4 in order to provide a feedback or control signal to the server 2. The server 2 may provide control signals to the network 4 in order to provide control signals to one or more of the capture devices 12a, 12b, 12c.
The capture devices 12a, 12b, 12c are preferably independent of each other, and are independent of the server 2. Similarly the viewing devices 16a, 16b are preferably independent of each other, and are independent of the server 2.
The capture devices 12a, 12b, 12c are shown in
Figure 1 as communicating with the server 2 via a single network 4. In practice the capture devices 12a, 12b, 12c may be connected to the server 2 via multiple networks, and there may not be a common network path for the multiple capture devices to the server 2. Similarly the viewing devices 16a, 16b may be connected to the server 2 via multiple networks, and there may not be a single common network path from the server 2 to the viewing devices 16a, 16b.
Also shown in Figure 1 is the editing device 20, connected to the network 4 by bi-directional communication link 22. The editing device, which may additionally function as a capture device and/or a viewing device, may be used to provide additional control information to the server 2.
Figure 1 further illustrates two metadata service blocks 11a and 11b, each respectively connected to the capture devices 12a and 12b via bi-directional communication links. The operation of these blocks will be described further hereinbelow.
Generating Metadata
Capture devices such as devices 12a, 12b, 12c typically are used by a user (who may also be referred to as a contributor) to capture an event. For example a capture device may be equipped with a video camera, and the contributor may use the capture device to video content. The thus captured video content is transmitted from the capture device to the network as a capture stream. A mobile phone is an example of a capture device .
In addition to providing the capture stream comprising the captured data, it is known to provide metadata for data content. Typically the metadata is generated when a content stream is received. Thus typically a service operating on the server 2 may generate metadata for a data content stream which is received from one of the capture devices.
In a described example, the capture devices 12a,
12b, 12c are configured in order to generate the metadata for captured data, and then transmit this metadata to the
content service running on the server 2 in addition to the data content itself. Advantageously this means that the content service running on the server 2 does not have to generate the metadata itself, and thus processing resources for doing so do not need to be provided.
With reference to Figure 2, an exemplary process is described .
As denoted by step 82, a capture device (such as capture device 12a) captures images. This for example may be a video device associated with the capture device 12a capturing video images of an event taking place in the location of the contributor associated with the capture device 12a.
In a step 84 a data stream is generated for transmission on communication line 14a from capture device 12a, so that the data stream is delivered to the server 2 via the network 4. In examples the data stream may be utilised to provide a live data stream of the event being videoed, and therefore the captured data stream is transmitted as a live representation of the event .
As denoted by step 86, the capture device 12a generates metadata for the captured data stream. The metadata generated will be implementation dependent. For example the metadata generated for the data content may include a tag identifying the location of the capture device, a tag identifying the identity of the capture device, a tag identifying the identity of the contributor etc. In addition, the metadata may include time information, such as a start time associated with the data capture, and/or a time of day. The metadata may also include the termination time of the captured data,
or the duration time of the captured data. The capture device 12a may also be configured to allow the contributor to provide additional information to be tagged as metadata using a user interface of the capture device 12a. However metadata may be created and generated automatically.
As denoted by step 88 of Figure 2, the capture device such as capture device 12a may also transmit a request for data to a third party service, hosted on a separate device. As illustrated in Figure 1, capture device 12a may communicate with a data service 11a, and capture device 12b may communicate with a data service 11b. The capture devices can communicate with such third party services and obtain additional data, and then associated metadata for the content is created by the capture device.
In one example, a third party service provided by a data service such as 11a or 11b may provide a transcription service. Voice recognition functionality may be provided by the service, in order to determine either what the voice content of the captured data is, or to determine the voice content of the contributor. For example, a voice signal of the contributor may be provided separately to the data service, and may be transcribed in order to provide additional data content for captured data. The addition (or augmentation) to metadata due to such additional content provided by such a service helps improve the discoverability of the data content when it is provided as a large pool of content at the server 2.
Augmented Data
Figure 3 illustrates the augmentation of data. The capture device, in this example the capture device 12a, is connected to the server 2 as in Figure 1. The capture device 12a is illustrated as connected to both data services 11a and 11b of Figure 1. The data service 11a is a 3rd party speech-to-text conversion entity, and the data service 11b is a 3rd party video analysis entity.
The capture device 12a provides an audio stream to the 3rd party speech-to-text entity 11a, and receives timed text back. The capture device 12a provides an audio-video (AV) stream to the 3rd party video analysis entity 11b, and receives timed metadata back.
The capture device 12a may provide the original data stream (before augmentation) as a low quality (LQ) stream to the server 2, a high quality stream with extra metadata to the server, and a separate metadata only stream for late-arriving analysis to the server 2.
The metadata derived from the stream is sent back to the capture device as shown in Figure 3.
Transcription (I)
The system may use speech-to-text conversion to determine what in the captured video stream is being said. The system may send the captured video stream to a processing server to receive a transcribed version of the audio signal of the captured stream. This transcribed version of the captured video stream may be used to generate further metadata which reflects what has been said in the captured video stream. The generation of the further metadata can be accomplished at the capture device. A user of the capture device as well as someone
using a so-called director application on a second device used to curate the captured video stream which is to be published as a viewing stream may accept, edit or reject any transcribed text and/or the further generated metadata before it is accessible by the viewing stream.
When generating such further metadata the reach (i.e. discoverability) of the captured video streams can be broadened without any significant resource overhead. This method may improve the accessibility of the captured video stream which, for some provider or publisher of the captured video stream, may be highly desirable or, in some cases, a legal requirement.
Storing transcribed text and the further metadata generated based on the transcribed text also makes the video stream more searchable either in real time for viewers to find streams live right now or, where streams are saved for on-demand playback, when searching the archive .
The method may also be used by the user of the capture device, which might be a contributor, to broaden the reach of the captured video streams. The method may help to improve discoverability when a user of a capture device is contributing captured video streams to a large pool of material.
The method may also be used to enable the consumption of the viewing stream without necessarily having the audio on. Certain kinds of viewers of video streams are watching while listening to other things (e.g. music). By automatically adding the audio transcription to live content, and/or by generating and adding further metadata reflecting that transcription, there may be brought a higher level of
convenience to the consumption of this type of viewing stream. It may be easier for viewers to find live or on-demand content if it can be searched by what is being said.
Thus, speech recognition may be used to determine what any given contributor is saying. Each such contributor, or someone using a so-called 'director app' to curate a published stream, may accept, edit or reject any transcribed text before it is accessible to viewers.
This broadens the reach of the content without any significant resource overhead. This improves the accessibility of content which, for some publishers, may be highly desirable or, in some cases, a legal requirement. Storing transcribed text also makes content more searchable either in real time for viewers to find streams live right now or, where streams are saved for on-demand playback, when searching the archive.
For a contributor, this broadens the reach of the content, and improves discoverability when a user is contributing content to a large pool of material (e.g. a citizen reporter use case)
For an end user, this gives the ability to consume content without necessarily having the audio on. Certain kinds of content are watched while listening to other things (e.g. music). Adding audio transcription automatically to live content brings this level of convenience to the consumption of this content type. It is easier to find live or on-demand content if it can be searched by what's being said.
Transcription (II)
Cloud sourced translation of content may be used as
a method of creating metadata.
A similar approach could be taken where the system allows the captured video stream to be provided to a distributed network of foreign language speakers who are able to give live translations. This may be used to augment the original captured video stream either with spoken word translations or subtitles. This may be useful for broadcasters using the system to offer alternative languages and to broaden their reach with less risk, avoiding the up-front costs associated with hiring staff. The method may be used to generate a large quantity of live content (e.g. news, current affairs, business) and to offer the captured live video streams to viewers in a number of languages. It may be used to offer them in combination with further metadata that provides information about the captured video stream in alternative languages. The method may also be applied to augment the captured video stream and the associated metadata with subtitles and/or audio description and to provide them to viewers who are hard of hearing or have sight problems.
A broadcaster generates a large quantity of live content (e.g. news, current affairs, business). They wish to offer this content to viewers in a number of languages but lack the resources to hire dedicated staff to provide real-time translations. While the live content itself may or may not be distributed to end-users via our system, the content could be provided to a distributed network of foreign language speakers who are able to give live translations. These contributions can then be used to augment the broadcaster's original language offering either with spoken word translations or subtitles. This
adds the ability to cloud-source different languages.
For a customer, this provides the ability to offer alternative languages with no up-front costs associated with hiring staff and broadens reach with less risk.
For an end user (contributor) a foreign language speaker gets to use their skill to improve content using simple tools.
For an end user (viewer) this provides an option to enjoy content in their own language irrespective of the source language.
Another example of a metadata service which may be provided by blocks 11a or 11b is a cloud sourced translation of content. If, for example, a broadcaster generates a large quantity of live content (such as news, current affairs, business), they may wish to offer this content to viewers in a number of languages. However they may lack the resources to hire dedicated staff to provide real time translations.
While the live content itself may be provided into the content service of the server 2, the content may also be distributed to a network of foreign language speakers who are able to give live translations. These contributions can then be used to augment the broadcasters' original language offering either with spoken word translations or subtitles.
As denoted by step 90, the capture device receives a response to its request to third party services, and includes any response it receives in the metadata for the content being streamed.
As denoted by step 92 the capture device transmits a data stream, and in step 94 the capture device also transmits a metadata stream.
The metadata stream may be transmitted in conjunction with the data stream, for example using multiplexing. Alternatively the metadata may be transmitted on a stream separate to the data stream. The metadata may also be transmitted on multiple separate streams in addition to the data stream.
The metadata and the captured data may be synchronised, particularly where the metadata is automatically generated. Thus the server 2 is able to recover any synchronisation information, and synchronise the metadata with the associated data content.
The metadata and captured data may be associated with a common timeline, and the server 2 is able to determine this timeline and synchronise the captured data and the metadata. The timeline may be provided by the captured data stream or from one or more of the metadata streams.
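A minimal sketch of synchronising metadata items to the captured data stream's timeline is given below; the assumption that each metadata item carries an absolute timestamp, and the field names used, are illustrative only.

    # Minimal sketch: placing metadata items onto the captured stream's timeline.
    def synchronise(metadata_items, stream_start_time):
        synced = []
        for item in metadata_items:
            offset = item["timestamp"] - stream_start_time   # seconds into stream
            if offset >= 0:
                synced.append({"offset": offset, "data": item["data"]})
        return sorted(synced, key=lambda i: i["offset"])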
The metadata associated with data content is utilised by the content service associated with the server 2, in order to generate viewing streams for viewing devices 16a, 16b. The captured data streams from the capture devices are thus routed to the viewing devices in dependence on the metadata associated with the capture data streams.
The viewing device may be a moderator of content or an editor of content, which uses the metadata in order to moderate or edit the content provided on a viewing stream.
Watermarking
In addition to generating the metadata for the content data stream, a capture device may also provide
additional control level information for the content data stream by, for instance, using "watermarking". The watermarking is a type of metadata, and may be used, for example, to provide control level information.
This is further explained with reference to Figure
4.
In a step 100 the video is captured. In a step 102 a video watermark stored in the capture device is applied to the video. In step 104 the video is transmitted in a data stream with the watermark applied. The watermark provides additional metadata for the content.
As shown in Figure 5, a capture device such as capture device 12a may additionally be provided with a video camera 70, a content capture monitor module 92, a video capture module 76, a mixing module 80, a memory 82 including a watermark store 84, and a wireless transmission module 90.
The video camera 70 is the video camera of the capture device which films an image, and it generates a signal on line 74 to a video capture module 76. The video capture module 76 generates the video images on line 78 to the mixing module 80.
In addition the mixing module 80 receives watermark information from the watermark store section 84 of the memory 82. The mixing module 80 is therefore able to generate a copy of the video images on signal lines 88 which include the watermark, which may be referred to as watermarked video. The watermarked video is provided to a wireless transmission module 90 for transmission as the data stream.
The watermark encodes information that identifies the device on which the content was captured, the owner
of the device, and a moment-by-moment reflection of the time at which the content was captured.
All streams are watermarked. All streams have metadata that describes the content. Rights are assigned to streams that have particular metadata characteristics. These rights are checked by detecting the watermark and determining which rights were assigned.
Video Watermarking
Rights to content originating from certain public events such as sports matches, tournaments, music concerts and festivals, amongst others, are typically held by broadcasters, publishers and other similar organisations.
The proliferation of live-streaming apps on consumer devices however has made it increasingly easy for people to capture and distribute content that impinges on these rights. While some users do this unwittingly given the simplicity, convenience and fun of doing so, others do so maliciously.
Whatever the intention, these streams will necessarily dilute the rights holders' own output and may even damage the brand of the publisher or of the event itself.
Clearly, users are not required to use the described content delivery application to capture live content and they may use a competing platform that does not check for potential rights infringements. Moreover, content that did originate in the described content delivery application, and which was approved, may be legitimately redistributed elsewhere (e.g. someone reposts an authorised clip on Facebook).
It would be beneficial therefore if the rights
holder had some way of determining whether a suspect live stream, recording or clip originated from an approved stream or not. Unapproved streams could be flagged as potentially infringing and a take-down request issued.
Moreover, with the growing popularity of live streaming, it would be beneficial if the servers manipulate the video data as little as possible as this would become cost prohibitive.
Watermarking is a process by which content is altered in such a way that the alteration, while imperceptible to the intended audience, is detectable by specialised software.
Such watermarks can be based on a manipulation of the audio, video or both. In our case, adding a watermark to the video seems the most sensible since this is the primary sense of the content and, therefore, the part most likely to need protecting.
Traditionally, watermarks are added to content using custom hardware designed specifically for that purpose. But, as highlighted in the introduction, providing this as a server-side function for the multitude of live streams the users produce would not be practical.
Our intention here is, instead, to create a system by which the recording device (the smartphone or tablet) adds the watermark itself.
This creates some additional technical challenges which need to be overcome. Importantly, the watermarking process must be lightweight enough not to create a significant processing overhead on the consumer device while being computationally complex enough to be difficult to fake.
While it is important that watermarks cannot easily be removed, it is similarly important that the watermark
be difficult to transpose from a genuine stream to an unapproved one.
Traditionally, a malicious user would wish to remove watermarks entirely, but in this scenario in order to avoid a take-down notice, a watermark needs to be present.
Modern consumer devices have GPUs that rival those found in many desktop and laptop computers and, as such, it would be useful if any video manipulation performed by the device could be achieved by a combination of GPU operations. This would reduce overall processing load, but also optimise the watermarking process.
It is similarly important to ensure that any watermark added would survive a transcode or re-encode process.
A popular approach that fits these requirements uses variations in the chrominance of the video. This process is well known and is not covered here. Essentially, of the two parts of an image - chrominance and luminance - changes in chrominance are perceived least by human sight. As such, it is a useful property to vary in order to embed information or markers.
Moreover, consumer devices all provide fast GPU-based routines for manipulating it, making the technique practical.
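A highly simplified sketch of chrominance-based embedding is given below (Python, with OpenCV and NumPy assumed). It writes one watermark bit into the parity of the mean chrominance of each block; a production scheme would be considerably more robust against transcoding, but the sketch illustrates the principle of hiding data in chrominance.

    # Highly simplified sketch: one watermark bit is written into the parity of
    # the mean chrominance (Cr) of each block. Real schemes are far more robust.
    import cv2
    import numpy as np

    def embed_bits(frame_bgr, bits, block=16):
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.int16)
        cr = ycrcb[:, :, 1]
        idx = 0
        for y in range(0, cr.shape[0] - block, block):
            for x in range(0, cr.shape[1] - block, block):
                if idx >= len(bits):
                    break
                region = cr[y:y + block, x:x + block]
                # Nudge the block so the parity of its mean matches the bit.
                if int(region.mean()) % 2 != bits[idx]:
                    region += 1
                idx += 1
        ycrcb[:, :, 1] = np.clip(cr, 0, 255)
        return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)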
Having watermarked each stream, the server maintains a list of those streams which have been categorised as belonging to a rights holder's event and, further, which have been approved or not.
Streams not belonging to any event are allowed through en masse, as are those which are assigned and approved. Those which are not approved are not published.
When a piece of content is discovered on the Internet, in whatever form, it can be analysed to determine if it may belong to a rights holder's event. The techniques for doing so are covered elsewhere.
Identified streams are then checked for a watermark.
Streams without watermarks are either submitted for manual checks to ensure they are of the event before a take-down notice is served, or used to issue such notices immediately if there is a high level of confidence in the event inclusion assessment.
Streams with watermarks can be checked to see if that watermark was an approved one for that event. Unapproved ones follow the process described immediately above; approved ones are fine to be left alone.
In this way, the system offers an automated process for watermarking, grouping streams into events and validating content as either having been expressly permitted or not.
This gives rights holders a means by which they can keep on top of the huge numbers of UGC minutes being created that currently go unchecked, unmonetized and unmanaged.
Adding watermarks to Video Content
Watermarks are preferably added to audio-visual (AV) content at the point of capture by the capture device itself, e.g. by an iPhone. See the architecture sketch at the end.
Watermarks are small amounts of data embedded into the audio, video or both aspects of captured content. They are added to the content in such a way that they are invisible/inaudible to the audience, difficult to remove but readily detectable technically. Technologies for
applying watermarks already exist and typically operate by embedding at a rate of a number of bits per second. So a watermark that is 512 bits long will need twice as much content for one embedding as a 256-bit watermark.
The watermark data in our case comprises two elements: a watermark identifier (WID) and a timecode (TC). Each is included with every embedding, with only the TC portion changing.
Watermark Identifier
Each stream has a unique watermark identifier (WID) which is calculated as follows:
sha-256(deviceId + userId + streamId + startTime)
This is illustrated in Figure 6.
This creates a small amount of data, 256 bits, that can be used by the chosen watermarking technology to embed the watermark in the content.
A hash of the values is used, rather than the values themselves, for two reasons:
1. Using each value directly would result in too much data for the watermarking technology to embed quickly; and
2. Since watermarks are intended to be easy to detect, using the values themselves might allow too much personally identifiable information to be read by unintended users.
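A sketch of the WID calculation is shown below; the exact string encoding and concatenation of the four values is an assumption, but the hash itself follows the sha-256 construction given above.

    # Sketch of the WID calculation; the exact string encoding is an assumption.
    import hashlib

    def watermark_id(device_id, user_id, stream_id, start_time):
        payload = f"{device_id}{user_id}{stream_id}{start_time}".encode("utf-8")
        return hashlib.sha256(payload).hexdigest()   # 256 bits, hex encoded

    # e.g. watermark_id("device-1", "user-2", "stream-3", "2016-06-15T12:00:00Z")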
Privacy
For any user who picks up their mobile device to shoot content, the possibility that what they capture may be claimed by a Rights Holder might, at first, be an uncomfortable prospect.
Clearly, there are matters of user privacy to consider but also the potentially overbearing impression that claiming streams for Rights Holders may create.
To alleviate these concerns, the watermarking capability has a number of built in safeguards.
Firstly, the WID does not contain any information that would allow the specific user to be identified, nor can the information encoded into it be reverse engineered to reveal such data.
Secondly, the system always makes the initial assessment of whether a stream may fall under a Rights Holder's control. If the stream's metadata (e.g. time, location, content of the video) suggests that the stream may be one to protect, only at this point may a human moderator acting on behalf of the Rights Holder get involved .
Thirdly, moderators are only shown the stream itself. No personally identifiable information about the user shooting the content is revealed, other than anonymised data regarding their reputation/standing/rating.
Fourthly, moderation may, in some cases, be accomplished by entirely automated processes.
Fifthly, when the system makes its initial assessment of potential Rights Holder control, and again if a moderator affirms this, the user is shown a warning message to communicate this on their capture device (as illustrated below). This shows the name of the Rights Holder, the nature of the rights claim, any supplemental information about how the content may be used and a button to allow the user to stop shooting content if they are unhappy for the content to be used in this way. If they choose to stop, then the stream is stopped and any recording of it is discarded. This is illustrated in Figure 7.
Timecode
The watermarking technology embeds the WID, which does not change throughout the entire span of a continuous stream of content together with a timecode. The TC value begins at zero at the start of the stream and measures the offset into the stream at which each successive watermark is embedded. This is illustrated in Figure 8.
How are watermarks used?
When a capture device begins capturing content, the WID for this stream is calculated and sent to the server as another item of metadata to associate with it. This is illustrated in Figure 9.
If, by whatever means, a Rights Holder determines that a stream comprises or contains content that they own then a record is kept to indicate this. This record includes at least the event to which the Rights Holder claims the content belongs, the stream in question, its watermark identifier and, optionally, any start and end times within the stream demarcating the section of content to which the Rights Holder claims ownership. This is illustrated in Figure 10.
Using watermarks to Prove Ownership
Suppose a Rights Holder is somehow made aware of a piece of content that they believe originated from a stream they marked as belonging to them but which is being distributed without authorisation. How can they assert ownership and begin the process of having the unauthorised content taken down?
By analysing the suspect content for watermarks, any embedded watermark can be detected and read off. This can be compared against the lists of watermarks for which the Rights Holder asserts ownership, as compiled above.
If a detected watermark is included in any such list, then the rights holder can demonstrate that the clip was claimed by them as it passed through the described content delivery system, may reasonably exert ownership rights and begin the takedown process. It may also be used to identify the user and source device if this is useful in determining the origin of the unauthorised distribution - it may not always be. This is illustrated in Figure 11.
Figure 12 shows the capture device architecture needed to apply watermarks to content.
Thus it can be understood that watermarking is a type of metadata, which may be used for example to provide control level information.
With reference to Figure 13 there is illustrated an exemplary implementation of part of the arrangement of Figure 1. The server 2 of Figure 1 and one of the capture devices 12a of Figure 1 are shown in Figure 13. Additionally shown is a device 30.
In accordance with a preferred embodiment, the capture device 12a is configured to generate the metadata for a data stream which it captures. The capture device 12a is able to communicate with the device 30, if necessary, to obtain additional metadata associated with the captured data stream.
The capture device 12a is then able to generate the data associated with, for example, a captured data stream on signal line 34a, and its associated metadata on signal line 34b.
The data and the associated metadata on lines 34a and 34b are received at the server 2 as denoted in Figure 2. As such, the server 2 does not have to generate the metadata for the data, and this task is carried out by the capture device which then simply transmits the metadata to the server.
In the example shown the metadata and the data are transmitted in two separate streams. In alternatives, the metadata can be combined with the data for transmission in a single stream, or there could be multiple data streams and/or multiple metadata streams.
Synchronisation by Marker
Synchronisation can be achieved in some cases by looking at two or more streams, identifying a common audio or visual feature and aligning on those. The placement of these AV features could be part of the metadata (e.g. Player 10 joins the game at 01:13:42 on Stream A; 00:15:22 on Stream B; 00:40:11 on Stream C.
Therefore these three streams can be aligned at this moment)
In summary, two or more media assets with different timelines may have those timelines aligned by comparing their metadata timelines.
Any assets whose metadata timelines contain related items may be aligned at this point.
Further, if assets A and B have a common metadata timeline item, and B and C have a different common metadata timeline item, then all three may be aligned even if A and C have no such item in common.
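The following minimal sketch illustrates that transitive alignment: offsets are propagated from asset to asset through shared metadata timeline items. The data layout (a mapping from item identifiers to per-asset offsets) is an assumption made for the example.

    # Minimal sketch: propagating offsets through shared metadata timeline items,
    # so A and C become aligned via B even with no common item of their own.
    def align(assets):
        # 'assets' maps asset name -> {item id: offset of that item in seconds}.
        offsets = {next(iter(assets)): 0.0}   # first asset anchors the timeline
        changed = True
        while changed:
            changed = False
            for name, items in assets.items():
                if name in offsets:
                    continue
                for placed, placed_offset in list(offsets.items()):
                    common = set(items) & set(assets[placed])
                    if common:
                        item = next(iter(common))
                        # The same real-world moment sits at different offsets.
                        offsets[name] = (placed_offset
                                         + assets[placed][item] - items[item])
                        changed = True
                        break
        return offsets   # start of each asset relative to the anchor asset

    # e.g. align({"A": {"goal": 4422.0},
    #             "B": {"goal": 922.0, "applause": 1000.0},
    #             "C": {"applause": 2411.0}})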
In the example architecture described it is
necessary to synchronise the clocks on a collection of devices indirectly connected to each other over an unknown, unknowable, unreliable network (i.e. the internet). The following describes a technique for achieving clock synchronisation that is accurate.
Events that capture content using a high frame rate or where frame-accuracy is important may need a carefully attuned clock such that the order of magnitude in the uncertainty is manageable.
Any solution that relies on a device contacting a server over the internet will run into the issue of being able to measure the round trip time between the two but being then unable to break this down into outward and return journey times - information that is vital to being able to synchronise exactly.
In order to improve on this accuracy for devices that are near each other we must therefore look elsewhere .
A simple example is now set out.
Imagine two iPhones next to each other, set up to capture video. Their clocks are not synchronised and they are connected to the internet over an unstable cellular connection, making their NTP assessments significantly inaccurate.
Consider that they are both looking at a digital clock that shows the time in hours, minutes and seconds.
A remote viewer, receiving the streams from the two phones, could very easily align them so that they were synchronised simply by observing the times shown on the clock and by watching for the moment the seconds value ticked over.
We can use a similar, albeit more sophisticated
approach here.
The goal of synchronisation is not to align a set of capture devices with a remote clock - that's one method by which things can be synchronised, but it isn't the objective. The goal is to play out each captured stream so that as each frame plays out, it is shown alongside frames from other streams captured at the same moment.
In this way, if we can somehow align all the capture devices, their synchronisation (or lack thereof) with the time at the central server is largely irrelevant.
To achieve this synchronisation, we must look for markers that are common to each stream, or a series of markers common to overlapping subsets of the streams.
In our facile example above, the digital clock was the marker, but in a real-world situation, a marker might consist of a TV image caught in the background, the position of a moving object or a camera flash; or a sound such as the start of a song, a round of applause, the text a speaker is reading, a starting pistol; or an electronic beacon like a Bluetooth ping.
Each capture device, being situated in the same time domain as the marker can safely assume that they heard/saw the event at the same time. Having identified and described the marker, they can compare the state of their clocks at this moment and calculate the matrix of their relative offsets.
This matrix need only be evaluated once for any unique selection of capture devices. It would only need to be done again if a new device joined the event.
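A minimal sketch of deriving that matrix of relative offsets from a single shared marker is given below; it assumes each device reports the reading of its local clock at the instant the marker was observed.

    # Sketch: each device records its local clock reading at the shared marker;
    # the matrix of relative offsets between devices follows directly.
    def offset_matrix(marker_times):
        # 'marker_times' maps device id -> local clock time (seconds) at the marker.
        devices = list(marker_times)
        return {(a, b): marker_times[b] - marker_times[a]
                for a in devices for b in devices if a != b}

    # offsets[(a, b)] gives how far device b's clock runs ahead of device a's.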
Sound markers may be picked out using frequency spectrum analysis with the profile of interesting events used to compare markers between devices.
Video analysis, as described in No. 6, could likewise be used to achieve a similar result.
Bluetooth pings offer a unique possibility in that they can be produced by a device at will rather than having to wait for a marker to occur naturally. Moreover, particularly with Bluetooth Low Energy, timings for data in transit are well known.
While such broadcasts are limited in range, it would be possible to create ad-hoc cells of nearby devices which synchronised amongst themselves first and then between cells secondarily.
Sound is the least reliable marker source when used in large open spaces as the speed of sound becomes a significant overhead.
Text recognition within video data could be used to look for digital clocks appearing in frame.
Earlier, I mentioned the possibility of using a number of overlapping markers to synchronise a large number of devices. A set of devices can be set to form a synchronisation group if they share access to a common marker, but some members of the set may also share markers with other groups and in this way synchronisation between groups/cells may be achieved.
Grouping Using Metadata
The architecture of Figure 1 allows consumers to broadcast live video captured on their mobile handsets (phones, tablets and so on). Often these broadcast sessions are short, informal but topical, personally meaningful or entertaining and, as such, they are shared with friends and followers - usually automatically - on social media.
However, while this makes it easy for a consumer's connections to watch along in real time, it does little to allow these moments to propagate any further.
Moreover, the casual nature of video capture makes it easy to record material that might infringe upon the rights held by a more traditional broadcaster. Sporting events are a good example of this.
There are therefore two competing problems: how to identify groups of streams as belonging to a particular event but also being able to pick out from these identified streams those which potentially infringe on pre-existing rights held by another.
A typical set of use cases for live streaming includes public events where a large number of people are gathered together - sports, music concerts, school/university events, parades etc...
In these situations, while each contributor may create shareable moments captured by their own device and passed around their own connections, it may be useful for a viewer to be able to engage with the event as a whole and watch content contributed by anyone present.
Similarly, for events tagged as belonging to a rights holder, these overlapping video sequences would be a valuable source of user-generated content that could be moderated for compliance with the protected brand's guidelines, monetised directly or used within traditional broadcasts.
Given a technique that can aggregate streams in this way, an event could be effectively packaged as an aggregate of each user's micro-broadcast contributions.
An added advantage is that moments which, on their own, might seem trivial and banal may take on extra
colour and relevance when slotted into the bigger picture around them.
It is thus desirable to address how a collection of user generated live streams can be analysed such that it is possible to identify from amongst them one or more events, and how a rights holder can describe an event they control to the system such that it is subsequently able to assess identified events as being potentially infringing so that this content can be managed by the rights holder.
The first issue to address in seeking to provide such a feature set is being able to identify clusters of user generated content as comprising an event, and thus place data streams into groups.
It is not necessary for the system to know about expected events in advance - the example creates a system that can pick out events from a collection of data streams using only the data available from these streams and the devices capturing them.
An event can be said to have both geographic and temporal boundaries. In this way, while the system may identify a dozen streams coming from a given location, if they are not tightly bound to a specific time period then this collection of content cannot indicate the occurrence of an event. However, a dozen streams, for example, from a certain locale within a short time period might imply the existence of an event. The important point here is that the identification of a meaningful cluster of contributions is based not just on geographical bunching but also by time.
There exist many algorithms for identifying the presence of data point clusters, but one exemplary
algorithm which may be used is a three-dimensional DBSCAN (where the three dimensions used are latitude, longitude and time) . This allows for non-uniform, asymmetric clusters to be identified and is therefore better suited to identifying likely groups of people. Using time as the third dimension is acceptable as it is a continuous, rather than a discrete, quantity.
One adaptation to DBSCAN that is needed is to allow the time variable to have an extent instead of a single value. Techniques to permit this exist.
Examples can be readily envisaged where independent users (independent content creators) can be located together. For example, a baseball field where spectators are unevenly distributed largely over only three sides of the field. A cluster here might resemble a horseshoe. Alternatively, a festival where attendees are concentrated in pockets around different stages or performance areas presents a different pattern of user densities. Similarly, a linear race that does not lap would comprise a locus of connected areas of similar density. Such a scan allows these individual events to be picked out from the surrounding noise.
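By way of illustration only, the following sketch uses the DBSCAN implementation in scikit-learn (an assumption; any density-based clustering could be substituted). Latitude and longitude are converted to approximate metres and time is scaled into a comparable unit so that the three dimensions can share a single distance threshold; the scaling factors and parameters are arbitrary example values. The field-of-view angle discussed later could be added as a further dimension, with care taken over its wrap-around at 360 degrees.

    # Rough sketch using scikit-learn's DBSCAN; parameters are example values.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_streams(points, metres_per_second=50.0 / 60.0,
                        eps=100.0, min_samples=5):
        # 'points' is a list of (latitude, longitude, unix_time) per stream.
        lat = np.array([p[0] for p in points])
        lon = np.array([p[1] for p in points])
        t = np.array([p[2] for p in points])
        mean_lat = np.radians(lat.mean())
        x = lon * 111_000.0 * np.cos(mean_lat)   # approximate metres east
        y = lat * 111_000.0                      # approximate metres north
        z = t * metres_per_second                # time expressed in "metres"
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
            np.column_stack([x, y, z]))
        return labels   # -1 marks noise; other labels identify candidate events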
The system can identify the likely contributors, and then create a macro-event to collate the individual streams into a group so that a searching viewer can pick up on the full set of clips.
If the content is only ever live, i.e. no streamed data is ever retained for later playback, then the first contributions that permit the system to identify the event may not be seen by searching viewers if they are over and done with by this point. However, with a density algorithm like DBSCAN, even data points
representing expired video sessions are still valuable in defining and bounding an evolving event.
Whilst examples are described with reference to live streams from live events, the described techniques are not limited to streams that are live and live only. If the system supports the recording of live streams, then any stream identified as being part of an event can be watched whether currently live or not. Moreover, it would then be possible to recap an entire event using a montage of identified streams.
Viewers or rights holders searching for content can subscribe to an identified event and watch live streams as they become identified as belonging to it, by accessing the appropriate group. In this way, the event is not constrained by the end of any one particular contributor's stream, and the viewer can choose to watch any such stream or not. Streams may also overlap.
There are certain locations in which people often gather together for one kind of event or other. This includes stadia, conference centres, transport hubs, shopping malls, festival areas and so on. Past events, having been identified, can be used to improve the speed with which future ones are detected. This may not allow the system to work out the type of event (e.g. the difference between a football match on one day and a music concert on another day at the same stadium) , but the ability to mark streams as likely belonging to an event sooner rather than later would ensure that more streams get correctly identified and included sooner.
Streams belonging to the event, identified sooner, mean that the customer gets to start showcasing their event to viewers more quickly. Early streams that might have
been missed or excluded from the event are now more likely to get correctly identified and tagged meaning that a customer's 'event pages' carry a more complete picture of the event.
Content that should be featured in an event owner's content pages will not be overlooked. More viewers for content are obtained when it becomes included within an event owner's pages. More chances of kudos at being featured by the event owner. Fewer missed moments from an event.
A further technique to determine whether a particular contributor's stream belongs to an event or not may involve the use of a GPS device, gyroscope device, and an accelerometer device of contributing users' devices.
While capturing video and location data, the consumer device should ideally also record the direction and scope of the capture device's field of vision, preferably no less frequently than once per second. This additional information can be added to the metadata of the data stream.
By tying this information to the timed captured video data, it is possible for the system to further take into account the point or focus of points that may form the focus of a group's attention within a period of time.
In a music festival example, comprising four stages arranged around a central area, the central mass of festival goers would, using only the DBSCAN technique noted above, be considered a single cluster. This example is illustrated in Figure 14. The Festival environment 112 comprises an upper stage Stage A denoted 110a, a left stage Stage B denoted 110b, a lower stage
Stage C denoted 110c, and a right stage Stage D denoted 110d.
A small number of nearby contributors would be picked out from the audience. Collating video clips and streams from the entire group would create a confusing jumble of video taken of all four stages.
However, if the field of view of each contributor is also taken into account, then what was considered as one cluster is actually identified as being multiple clusters. Taking the same set of nearby contributors, one subset is seen likely to be looking at the stage on the left, Stage B 110b, whilst another is seen as looking likely at the stage at the top, Stage A 110a.
This is illustrated in Figures 15 and 16. Each capture device 114a to 114i has a corresponding point-of-view or field of vision denoted by reference numeral 116a to 116i respectively.
Being able to further describe each contributor in this way allows the contributed video, or portion thereof, to be attributed to the correct sub-event within the event .
Using this as a fourth variable in the DBSCAN analysis, it is possible to differentiate between contributors and, in so doing, maintain event integrity by more accurately speculating as to which event any given contribution belongs.
Of course, the data will contain some noise, but this fourth variable being an angle is just as usable within DBSCAN as latitude and longitude.
It may also be useful to extract from the content two other types of real-time metadata: namely, what is heard being discussed and what is seen appearing in the
video.
This audio-visual analysis may be valuable in two ways. Firstly, to provide a running set of keywords or tags that describe the content and which in turn aids both discoverability and the system's ability to include or exclude a stream, or portion thereof, in an event. Secondly, if a rights holder has a declared interest in content arising from an event, knowing what is actually being captured allows for a more accurate assessment of whether a particular stream infringes.
On this second case, consider an example before looking in detail at how a rights holder might declare an interest in certain content.
At a football match a user broadcasts his friends enjoying a beer in the bar. While his geography, camera angle, and time might all suggest that his stream ought to be managed by the entity that owns the rights to the football match, clearly the content of the stream indicates otherwise. An evaluation of the content of the stream would allow the system to make this assessment.
It is well-known that automated semantic analysis of text yields a list of topics, entities, themes, moods, etc... The same analysis can be performed in real-time with live video by using a speech-to-text conversion. While the spoken word is structurally different from written language, the aim here is not to deduce the nuances of what is being said, but rather to use a 'bag of words' approach and highlight merely what is being mentioned.
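A crude sketch of such a 'bag of words' extraction over transcribed speech is given below; the stop-word list and thresholds are arbitrary assumptions.

    # Crude 'bag of words' sketch over transcribed speech.
    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

    def bag_of_words(transcript, top_n=20):
        words = re.findall(r"[a-z']+", transcript.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
        return [word for word, _ in counts.most_common(top_n)]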
Similarly, there exist services that analyse video content for the appearance of recognisable objects, traits, settings etc... that can be used again to create a live stream of metadata that describes the content.
So as described above, a wealth of as-it-happens metadata about a collection of live streams can be created. This can then be used to process the streams.
When analysing a collection of streams, location, time and direction all play a part, but so too does a coherence of subject matter. Going back to the music festival example above: one of the users capturing video might be recording herself speaking to camera. Being able to identify this characteristic of her stream would allow it to be separated out from those around her and excluded from the events focused on the performance areas .
Similarly, within a group of users filming a car race some users may turn away from the action during recording, or may be tracking a particular car. Knowing what happens in the video, the system can pick out just those contributors who are looking at the race or even at particular participants in the race.
With regard to the problem faced by rights holders, it is possible to see how the same approach can be used to pick out potentially infringing content from that which is likely not.
A rights holder is not encumbered with the task of reviewing every live stream that pops up in the location at a given time and checking it for infringement. While the techniques described so far would slim this list down, there is still the potential for a large number of streams to check.
When a rights holder wishes to declare an interest in an event, a way for them to be able to describe the parameters of this event beforehand is provided. This includes the ability to define a region within which the
event will take place as well as time limits for its start and end. However it also includes the ability to provide a list of tags that can be used to match against content. By defining collections of tags that content must, may, ought not and must not contain in order to be included in the rights holder's event, it is possible for much of the review process to be automated.
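One minimal way the four tag collections could be applied automatically is sketched below. The scoring of the "may" and "ought not" lists, and the decision to refer borderline items for manual review, are assumptions made for the illustration.

```python
# A sketch of automated review against a rights holder's tag rules.
def review_stream(stream_tags, rules):
    tags = set(stream_tags)
    if not set(rules.get("must", [])) <= tags:
        return "exclude"                          # missing a mandatory tag
    if tags & set(rules.get("must_not", [])):
        return "exclude"                          # contains a forbidden tag
    score = (len(tags & set(rules.get("may", [])))
             - len(tags & set(rules.get("ought_not", []))))
    return "include" if score >= 0 else "refer_for_manual_review"

rules = {"must": ["football"], "may": ["stadium", "crowd"],
         "ought_not": ["bar"], "must_not": ["tv-screen"]}
# review_stream(["football", "bar"], rules) -> 'refer_for_manual_review'
```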
With reference to Figure 17, there is illustrated the steps in the process for generally grouping data streams, and in particular for grouping data streams as part of events.
In a step 120 a data stream is received at the server 4. In a step 122 the metadata for the data stream is also received at the server 2. The metadata may be received with the data stream from the capture device, or may be generated in the server.
In a step 124 geographical data of the capture device is retrieved from the metadata, and in a step 126 temporal data of the data stream is retrieved from the metadata.
In a step 128 it is determined whether an event is identified. An event may be identified when a plurality of data streams are determined to be located within a particular proximity of each other, and to have been generated within the same timeframe.
If an event is not detected, then the data streams are grouped in a step 140, in accordance with their metadata.
If an event is detected, then in step 130 the event is created and the appropriate data streams are allocated to that event, effectively being grouped according to the event.
In a step 132 any content which has previously been stored and which is also associated with the event is retrieved.
In a step 134 point-of-view information or field of view information is retrieved for the data streams from the associated metadata.
In a step 136 metadata related to audio-video analysis of the data streams is retrieved.
In a step 138 the streams within the event are then grouped according to this additional metadata, to present groups within the streams.
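The grouping flow of Figure 17 could be sketched in code roughly as follows. The Event class, the distance test, and the distance and time thresholds are illustrative assumptions rather than the server's actual implementation.

```python
# A self-contained sketch of the grouping flow of Figure 17.
import math

class Event:
    def __init__(self, location, start_time):
        self.location, self.start_time, self.streams = location, start_time, []

def _close(a, b, metres):
    # Crude equirectangular distance; adequate for a sketch.
    dx = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    dy = (a[0] - b[0]) * 111_000
    return math.hypot(dx, dy) <= metres

def group_stream(stream_id, metadata, events, metres=500, window_s=1800):
    loc = metadata["gps"]                       # step 124: geographical data
    t = metadata["start_time"]                  # step 126: temporal data
    event = next((e for e in events             # step 128: is an event identified?
                  if _close(e.location, loc, metres)
                  and abs(e.start_time - t) <= window_s), None)
    if event is None:                           # step 140: group by metadata only
        return {"group": (metadata.get("tags") or ["ungrouped"])[0], "event": None}
    event.streams.append(stream_id)             # step 130: allocate to the event
    # steps 134-138: sub-group within the event by field of view and A/V tags
    subgroup = (metadata.get("field_of_view", "unknown"),
                tuple(metadata.get("audio_video_tags", [])))
    return {"group": subgroup, "event": event}
```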
With reference to Figure 18, there is illustrated exemplary metadata 140 for a data stream. As shown in Figure 18, this exemplary metadata 140 includes: capture device location (GPS data) 140a; captured stream start time 140b; captured stream end time 140c; capture device gyroscope data 140d; capture device accelerometer data 140e; captured stream audio analysis 140f; captured stream video analysis 140g; and captured stream watermark 140h. Additional information may also be included within the metadata according to the implementation.
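One way the metadata 140 could be represented in code is sketched below; the field names and types are assumptions chosen to mirror items 140a to 140h.

```python
# Illustrative in-code representation of the metadata 140 of Figure 18.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class StreamMetadata:
    gps: Tuple[float, float]                       # 140a capture device location
    start_time: float                              # 140b captured stream start time
    end_time: Optional[float]                      # 140c end time (None while still live)
    gyroscope: Tuple[float, float, float]          # 140d capture device gyroscope data
    accelerometer: Tuple[float, float, float]      # 140e capture device accelerometer data
    audio_tags: List[str] = field(default_factory=list)   # 140f audio analysis
    video_tags: List[str] = field(default_factory=list)   # 140g video analysis
    watermark: Optional[str] = None                # 140h captured stream watermark
```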
As discussed above, the metadata associated with the content data may include a video watermark.
With reference to Figure 19, there is illustrated a portion of the architecture of the server 4. An
interface 100 receives the video stream which has been transmitted by a capture device. The interface 100 provides the video stream (including the watermark) to a watermark identification module 102, and to a
buffer/filter module 106.
The watermark identification module 102 retrieves the watermark from the video stream including the
watermark, and provides this retrieved watermark as an input to the comparator 104. A memory 108, which includes a watermark store equivalent to the watermark store 84, provides the stored watermark. The comparator 104 thus compares the received watermark from the video stream with an expected watermark from the memory 108. If a match is indicated, then the comparator 104 provides an appropriate signal to the buffer/filter 106, with the buffer/filter 106 outputting the received video stream responsive to confirmation from the comparator that a match has been received, and thus the received signal is carrying the appropriate watermark.
The buffer/filter 106 may provide three outputs, in an example, which indicate that a "rights holder" associated with a video stream accepts the content as being part of their event and approves it, that they accept the content as part of their event but reject it, or that they consider it does not belong to their event and it is forwarded for further processing.
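The comparator and buffer/filter behaviour might be expressed as in the following sketch. The lookup structures and decision values are illustrative assumptions; the three routing outcomes mirror the three outputs described above.

```python
# A sketch of watermark comparison and routing at the server.
def route_stream(stream, extracted_watermark, watermark_store, moderation):
    expected = watermark_store.get(stream["event_id"])
    if expected is None or extracted_watermark != expected:
        return ("forward_for_processing", stream)   # not recognised as this event
    decision = moderation.get(extracted_watermark, "undecided")
    if decision == "approved":
        return ("accept", stream)                    # part of the event and approved
    if decision == "rejected":
        return ("reject", None)                      # part of the event but rejected
    return ("forward_for_processing", stream)        # needs further review
```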
Thus incoming video streams at the server 4 may be grouped at a video streaming server. The grouping may be provided based on watermark recognition. The grouping may also generally be provided based on metadata. The described examples are not contingent on metadata having been generated by a capture device and received at the server 4 from the capture device.
The examples utilise metadata associated with a content data stream, but the metadata may be generated within the server 4 itself, or may be provided by a capture device which also provides the captured content stream.
As described, watermarks can be added to content at
the time of capture, and are a form of metadata of the embedded variety. If each piece of content contains a unique collection of metadata embedded in this way, it becomes possible for a further system to use this metadata to uniquely identify each piece of content.
As such, it is possible to describe any segment of content using a combination of this identifying metadata and a pair of timecode values (a start and an end). This forms a segment descriptor.
A server may be configured to store collections of these segment descriptors. The server operator may create collections of segment descriptors for any purpose, but one such example use is for each collection to refer to segments of content that belong to a particular user. A further use might be to collect together segments that contain related content. A further use might be to collect together segments to which should be applied particular consumption limitations (rights management).
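A segment descriptor and its collections might be represented as follows. The collection names are purely illustrative; the descriptor itself is simply the embedded identifying metadata plus the start and end timecodes described above.

```python
# A sketch of segment descriptors and descriptor collections.
from dataclasses import dataclass

@dataclass(frozen=True)
class SegmentDescriptor:
    content_id: str        # unique embedded metadata (e.g. a decoded watermark)
    start: float           # timecode, seconds
    end: float             # timecode, seconds

collections = {
    "user:alice": [SegmentDescriptor("wm-123", 0.0, 42.5)],
    "rights:stadium-match-1": [SegmentDescriptor("wm-123", 10.0, 30.0),
                               SegmentDescriptor("wm-456", 0.0, 90.0)],
}

def segment_in_collection(descriptor, collection):
    """True if the descriptor overlaps any stored segment of the same content."""
    return any(d.content_id == descriptor.content_id and
               descriptor.start < d.end and descriptor.end > d.start
               for d in collection)
```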
A further system may receive content from the internet, or any other source, and check if it contains embedded metadata that uniquely identifies it. If so, this system can determine if the source content ought to have had consumption limitations applied to it and, further, may assess whether those limitations have been breached.
In so doing, this further system may be used to police breaches in consumption limitations.
In summary, this is a technique that allows embedded metadata (a watermark) to be used to determine if a piece of content was categorised as belonging to one or more groups.
Grouping can be used to enhance the delivery of content, with discovery being content driven. This allows content to go in search of viewers rather than assuming viewers will find it. This is especially important for live content that may not last long, or may be from a new contributor. In these cases, it is essential that content creators begin to see that their work is getting the best possible exposure so that they are encouraged to keep creating.
As viewers watch and engage with content, the system may maintain a list of characteristics that it thinks each viewer likes. It might do this by using the tags, descriptions, locations, or events a stream belongs to or specific metadata from the moments at which the viewer taps to say they like a stream. By whatever means, the system is able to track each user's tastes and preferences over time.
When a new stream begins and metadata about it becomes available, the system will match it to viewers who have liked similar things before. By sending a notification to those users to come and watch the new stream, an audience can be mustered for any stream. This means that the potential audience for a stream is not just the contributors' followers or friends, as is the case with existing platforms, but indeed anyone who might enjoy it whoever and wherever they are.
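A minimal sketch of tracking viewer preferences and mustering an audience for a new stream is given below. The tag-counting profile and the overlap threshold are assumptions made for illustration.

```python
# Illustrative preference tracking and audience notification.
from collections import Counter, defaultdict

viewer_profiles = defaultdict(Counter)   # viewer id -> counts of liked tags

def record_like(viewer_id, stream_tags):
    viewer_profiles[viewer_id].update(stream_tags)

def notify_candidates(new_stream_tags, min_overlap=2):
    """Return viewers whose liked tags overlap the new stream's metadata."""
    tags = set(new_stream_tags)
    return [v for v, profile in viewer_profiles.items()
            if sum(profile[t] > 0 for t in tags) >= min_overlap]

record_like("viewer-1", ["festival", "rock", "stage-a"])
# notify_candidates(["rock", "stage-a", "encore"]) -> ['viewer-1']
```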
Topics/tags/themes/creators that are particularly interesting to a viewer can be followed. This gives the system an explicit signal to curate future content for the viewer that matches the topic/tag/theme or which comes from a followed creator.
For events that give rise to a large number of user-
generated streams, this approach to discovery allows the customer to reach viewers and bring them to the content in a more meaningful, more personally relevant way. A viewer is not simply brought to the event's landing page within the content service but is, instead, brought to a stream that suits their tastes exactly.
From the foregoing it can be understood that a group of users can create an event into which they all contribute one or more live streams of content. Each member of the group may watch one or more of these streams. Guests, who can watch but not contribute, may also be invited to the event. It can also be initiated by one party (an event holder) requesting users to participate.
Use cases for this grouping include: corporate communication; panel events; private meetings where participants are not in one location; request for contributors to an event.
Grouping provides: secure communications; guaranteed quality of service, perhaps an optional SLA that makes the service reliable enough for mission-critical communication (e.g. the Delta use-case).
For the end user (contributor), grouping provides reliability; it is easier to use than person-to-person calling; and content is added to the event.
For the end user (viewer), grouping provides a feel of being present at the event; and access to each contributor's stream.
Grouping may be adaptive. Adaptive grouping determines who should be invited to share that event as a contributor. In addition to the standard ways, geolocation of devices indicating that a user is present can initiate a request to share the event. Different ways to define the grouping may be provided, e.g. 1.) the normal clear definition of members; or 2.) an open list whereby users matching certain criteria (presence, profile, etc.) define an adaptive list of potential contributors.
Grouping Streams
There exist several platforms that allow consumers to broadcast live video captured on their mobile handsets (phones, tablets and so on).
Often these broadcast sessions are short, informal but topical, personally meaningful or entertaining and, as such, they are shared with friends and followers - usually automatically - on social media.
However, while this makes it easy for a consumer's connections to watch along in real time, it does little to allow these moments to propagate any further. Moreover, the casual nature of video capture makes it easy to record material that might infringe upon the rights held by a more traditional broadcaster. Sporting events are a good example of this.
We therefore have two competing problems: how to identify groups of streams as belonging to a particular event but, likewise, how to pick out from these identified streams those which are potentially infringing on pre-existing rights held by another.
A typical set of use cases for live streaming includes public events where a large number of people are gathered together - sports, music concerts, school/university events, parades etc...
In these situations, while each person may create shareable moments captured by their own device and passed
around their own connections, it would be useful for a given viewer to be able to engage with the event as a whole and watch content contributed by anyone present. Similarly, for events tagged as belonging to a rights holder, these overlapping video sequences would be a valuable source of user-generated content that could be moderated for compliance with the protected brand's guidelines, monetised directly, or used within traditional broadcasts.
Given a technique that can aggregate streams in this way, an event could be effectively packaged as an aggregate of each user's micro-broadcast contributions.
An added advantage is that moments which, on their own, might seem trivial and banal may take on extra colour and relevance when slotted into the bigger picture around them.
So the problems are: how can a collection of user generated live streams be analysed such that it is possible to identify from amongst them one or more events; and, how can a rights holder describe an event they control to the system such that it is subsequently able to assess identified events as being potentially infringing so that this content can be managed by the rights holder?
The first issue to address in seeking to provide such a feature set is the challenge of being able to identify clusters of user generated content as comprising an event.
It should not be necessary for the system to know about expected events in advance - the idea is to create a system that can pick out events from a collection of streams using only the data available from those streams
and the devices capturing them.
Before looking at the techniques involved, it would be useful to begin by defining an event, technically.
An event can be said to have both geographic and temporal boundaries. In this way, while the system may identify a dozen streams coming from a given location, if they are not tightly bound to a specific time period then this collection of content cannot indicate the occurrence of an event.
However, a dozen streams, for example, from a certain locale within a short time period might imply the existence of an event.
The important point here is that the identification of a meaningful cluster of contributions is based not just on geographical bunching but also on time. Ten drops of water a year does not a shower make; ten a second does!
There exist many algorithms for identifying the presence of data point clusters, but one of use here might be a three-dimensional DBSCAN (where the three dimensions used are latitude, longitude and time). This allows for non-uniform, asymmetric clusters to be identified and is therefore better suited to identifying likely groups of people. Using time as the third dimension is acceptable as it is a continuous, rather than a discrete, quantity.
One adaptation to DBSCAN that we do need, however, is to allow the time variable to have an extent instead of a single value. Techniques to permit this do exist.
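One way of giving the time variable an extent is sketched below: a custom distance in which two streams that overlap in time contribute zero temporal distance. The weighting between spatial and temporal terms is an assumption for illustration; such a callable can typically be supplied as a custom metric to a DBSCAN implementation.

```python
# Illustrative interval-aware distance between two stream data points.
import numpy as np

def stream_distance(a, b):
    """a, b: arrays [lat_km, lon_km, t_start_min, t_end_min]."""
    spatial = np.hypot(a[0] - b[0], a[1] - b[1])
    gap = max(0.0, max(a[2], b[2]) - min(a[3], b[3]))  # 0 if the time intervals overlap
    return spatial + gap

# e.g. sklearn.cluster.DBSCAN(eps=1.0, min_samples=5, metric=stream_distance)
```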
Imagine, for example, a baseball field where spectators are unevenly distributed largely over only three sides of the field. A cluster here might resemble a
horseshoe; alternatively, a festival where attendees are concentrated in pockets around different stages or performance areas presents a different pattern of user densities; similarly, a linear race that does not lap would comprise a locus of connected areas of similar density. Such a scan allows these individual events to be picked out from the surrounding noise.
Having identified the likely contributors, the system creates a macro-event to collate the individual streams so that a searching viewer can pick up on the full set of clips.
It should be noted that if the content is only ever live (i.e. no stream data is ever retained for later playback), then the first contributions that permit the system to identify the event may not be seen by searching viewers if they are over and done with by this point. However, with a density algorithm like DBSCAN, even data points representing expired video sessions are still valuable in defining and bounding an evolving event.
Note however that this invention is not limited to streams that are live and live only. If the system supports the recording of live streams, then any stream identified as being part of an event can be watched whether currently live or not. Moreover, it would then be possible to recap an entire event using a montage of identified streams. This in itself is a unique possibility.
Viewers or rights holders searching for content can subscribe to an identified event and watch live streams as they become identified as belonging to it. In this way, the event is not constrained by the end of any one
particular contributor's stream, and the viewer can choose to watch any such stream or not. Streams may also overlap of course.
A further technique to determine whether a particular contributor's stream belongs to an event or not involves the use of the GPS, gyroscope and accelerometer of contributing users' devices.
While capturing video and location data, the consumer device should also record the direction and scope of the capture device's field of vision, preferably no less frequently than once per second.
By tying this information to the timed video data it is possible for the system to further take into account the point or locus of points that may form the focus of a group's attention within a period of time.
For example, imagine a music festival that comprised four stages arranged around a central area, such as in Figure 14.
The central mass of festival goers would, using only the DBSCAN technique noted above, be considered a single cluster as shown in Figure 15 which picks out a small number of nearby contributors from the audience. Collating video clips and streams from the entire group though would create a confusing jumble of video taken of all four stages.
However, if we also take into account the field of view of each contributor, we can see that what looked like one cluster is actually multiple clusters as in Figure 16. Taking the same set of nearby contributors, we can see that one subset is likely looking at the stage on the left, while another is looking at the stage at the
top. Being able to further describe each contributor in this way allows the contributed video, or portion thereof, to be attributed to the correct event.
Using this as a fourth variable in the DBSCAN analysis, it is possible to differentiate between contributors and, in so doing, maintain event integrity by more accurately speculating as to which event any given contribution belongs.
Of course, the data will contain some noise, but this fourth variable being an angle is just as usable within DBSCAN as latitude and longitude.
It may also be useful to extract from the content two other types of real-time metadata: namely, what is heard being discussed and what is seen appearing in the video.
This audio-visual analysis may be valuable in two ways. Firstly, to provide a running set of keywords or tags that describe the content and which in turn aids both discoverability and the system's ability to include or exclude a stream, or portion thereof, in an event - more on this later. Secondly, if a rights holder has a declared interest in content arising from an event, knowing what is actually being captured allows for a more accurate assessment of whether a particular stream infringes.
On this second case, let's consider an example before looking in detail at how a rights holder might declare an interest in certain content.
At a football match a user broadcasts his friends enjoying a beer in the bar. While his geography, camera angle, and time might all suggest that his stream ought to be managed by the entity that owns the rights to the
football match, clearly the content of the stream indicates otherwise. An evaluation of the content of the stream would allow the system to make this assessment.
It is well known that automated semantic analysis of text yields a list of topics, entities, themes, moods, etc... The same analysis can be performed in real-time with live video by using a speech-to-text conversion. While the spoken word is structurally different from written language our aim here is not to deduce the nuances of what's being said, but rather to use a 'bag of words' approach and highlight merely what is being mentioned.
Similarly, there exist services that analyse video content for the appearance of recognizable objects, traits, settings etc... that can be used again to create a live stream of metadata that describes the content.
So, armed with this wealth of as-it-happens metadata about a collection of live streams, how can we leverage it to solve our problems?
When analysing a collection of streams, location, time and direction all play a part, but so too does a coherence of subject matter. Going back to the music festival example shown above: one of the users capturing video might be recording herself speaking to camera. Being able to identify this characteristic of her stream would allow it to be separated out from those around her and excluded from the events focussed on the performance areas.
Similarly, within a group of users filming a car race some users may turn away from the action during recording, or may be tracking a particular car. Knowing what appears in the video, the system can pick out just those contributors who are looking at the race or even at
particular participants in the race.
Thinking now about the problem faced by rights holders, it is possible to see how the same approach can be used to pick out potentially infringing content from that which is likely not. However, we can go a little further .
Clearly, we do not wish to encumber a rights holder with the task of reviewing every live stream that pops up in the location at a given time and checking it for infringement. While the techniques described so far would slim this list down, there is still the potential for a large number of streams to check.
When a rights holder wishes to declare an interest in an event, we need a way for them to be able to describe the parameters of this event beforehand.
This includes, of course, the ability to define a region within which the event will take place as well as time limits for its start and end. However it also includes the ability to provide a list of tags that can be used to match against content.
By defining collections of tags that content must, may, ought not and must not contain in order to be included in the rights holder's event, it is possible to see how much of the review process may be automated.
Priority Using Metadata
Whenever a collection of data streams must be presented together, for example for moderation, review, recommendation, selection or otherwise, given the possibility that the numbers involved might be high, it is important to be able to order the list in a meaningful way.
One way in which data streams may be ordered is according to the standing or reputation of the creator. Data streams contributed by users with a good reputation may be shown towards the end of the list as they likely require least attention from the moderators. Content from new or less reliable contributors deserves closer attention and should be ranked higher.
Similarly, content with only a marginal chance of being a good match to a user's search should be shown below good quality matches.
In the same way, streams with a lower quality (resolution, frame rate etc..) should be shown below those of a higher grade.
Generally, more relevant content ought to be seen more easily by the viewer whatever their use case.
By allocating a prioritisation to data streams in this way, timely access to data streams in the moderation queue is provided for those that are more likely to need attention. Better quality viewing streams are provided. This allows the tagging of data streams to be improved and better described/annotated/tagged content gets more views.
Content that is moderated more quickly gets seen when fresher, getting the viewer closer to the real live moment.
Data streams may be prioritised in general whether or not they are also grouped. Where grouping is used, then prioritising methods may be provided within a group.
With reference to Figure 20, there is illustrated an associated process.
In step 142 data streams are received by the server. In step 144 the reputation scores associated with each
capture device providing the data streams are retrieved, and then in a step 146 the data streams are ordered accordingly, based on the reputation scores given.
In a step 148 the matching of the data streams to the groups is determined based on an assessment of metadata, and the data streams then appropriately reordered in step 150.
In a step 152 the quality of the data streams is assessed, and then in step 154 the data streams accordingly reordered.
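The orderings implied by steps 144 to 154 might be sketched as follows. For a moderation queue, low-reputation contributors are sorted to the top (they need attention first); for a viewing list, stronger matches and higher-grade streams are sorted to the top. The numeric fields are illustrative assumptions.

```python
# A sketch of the reordering described for Figure 20.
def moderation_order(streams):
    # Low reputation, then weak group match, then low quality first.
    return sorted(streams, key=lambda s: (s["reputation"], s["match_score"], s["quality"]))

def display_order(streams):
    # Strong matches and higher-grade streams shown first to viewers.
    return sorted(streams, key=lambda s: (-s["match_score"], -s["quality"]))

queue = [{"id": "a", "reputation": 0.9, "match_score": 0.8, "quality": 0.7},
         {"id": "b", "reputation": 0.2, "match_score": 0.5, "quality": 0.9}]
# moderation_order(queue) lists stream 'b' (new contributor) first;
# display_order(queue) lists stream 'a' (better match) first.
```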
Stream Display Order Prioritisation
Whenever a collection of content must be presented, consumed or processed together, whether for moderation, review, recommendation, selection or otherwise, given the possibility that the numbers involved might be high, it is important to be able to order the list in a meaningful way.
This may be advantageous so that processing resources (technical, human or otherwise) are expended on content deemed to have the highest priority first.
Prioritisation may be based on the metadata associated with the content. This metadata may include information about the originator, about the intended or actual audience of the content or the subject matter of the content itself.
The priority assigned to a given piece of content may not be a single value but may be a timeline of values that indicate the priority of the content at any moment throughout its duration.
The prioritisation technique depends on the prioritisation purpose. For example: a prioritisation
scheme configured to prioritise content for encoding might order the content items by popularity in order to encode the most popular items soonest. In another example, a prioritisation scheme may be configured to order content by likelihood that it contains banned material so that questionable items may be moderated quicker .
The dynamic nature of the prioritisation is important. Prioritisation operates on a queue of content items being processed/consumed/presented/etc... If that priority changes then the position in the queue changes. So content that had metadata that meant it was important to process soon (i.e. high priority) might change (e.g. the content creator may move the camera to something less interesting), causing its priority to fall.
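A priority that is a timeline of values, and a queue re-ordered as priorities change, could be sketched as below. Representing the timeline as a step function is an assumption made for the illustration.

```python
# Illustrative priority timeline and dynamic queue ordering.
import bisect

class PriorityTimeline:
    def __init__(self):
        # A step function over the content's duration; priority 0 until changed.
        self.times, self.values = [0.0], [0.0]
    def set_from(self, t, value):
        i = bisect.bisect_right(self.times, t)
        self.times.insert(i, t)
        self.values.insert(i, value)
    def at(self, t):
        return self.values[bisect.bisect_right(self.times, t) - 1]

def queue_order(items, now):
    """items: {content_id: PriorityTimeline}; highest current priority first."""
    return sorted(items, key=lambda cid: items[cid].at(now), reverse=True)
```

If the camera moves to something less interesting, a lower value can be set from that moment onwards and the item naturally falls down the queue the next time the order is computed.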
Whenever a collection of streams must be presented together, whether for moderation, review, recommendation, selection or otherwise, given the possibility that the numbers involved might be high, it is important to be able to order the list in a meaningful way.
As has been discussed specifically for moderation, one way in which streams may be ordered is according to the standing or reputation of the creator. Streams contributed by users with a good reputation may be shown towards the end of the list as they likely require least attention from the moderators. Content from new or less reliable contributors deserves closer attention and should be ranked higher.
Similarly, content with only a marginal chance of being a good match to a user's search should be shown below good quality matches.
In the same way, streams with a lower quality
(resolution, frame rate etc..) should be shown below those of a higher grade.
Generally, more relevant content ought to be seen more easily by the viewer whatever their use case.
Timely access to streams in the moderation queue that are more likely to need attention is achieved.
Better quality streams are provided, and better described/annotated/tagged content gets more views.
Content that is moderated more quickly gets seen when fresher, getting the viewer closer to the real live moment.
Extra
With reference to Figure 23, there is illustrated an example as to the operations performed at the server on receipt of the metadata and data. This example assumes that the metadata and data have been transmitted as shown in Figure 2, but in general for the described operation the metadata associated with the data may have been generated in any way, and in particular the metadata may have been generated in the server 2.
As shown in Figure 23, the metadata is input to a control block 40, and the data is input to a group filter block 44. The group filter block 44 receives a control signal on line 42 from the control block 40.
The control block 40 generates the control signal on line 42 to route the data on line 34 to one of a plurality of groups, denoted in this example by a first group G1, a second group G2, and an nth group Gn. The control block 40 analyses the metadata for the data, and groups it according to rules.
Thus the data received on line 34b is output on line
461, 462 or 46n. The data may be output on one or more of the lines, and allocated to one or more group, in dependence on the control.
A further modification is shown in Figure 24. In the example of Figure 24 it is assumed that there are three groups Gl, G2, G3.
Each of the data streams is allocated to one of the three groups based on the control signal on line 42. In addition, once grouped, a further control signal 48, having component parts 481, 482, 483, each associated with one of the groups, is generated.
Each of the control signals 481, 482, 483 is used in a priority block for each group, denoted by 521, 522, 523 respectively, to apply a priority to each signal within the group. Thus signals can be output for each group in dependence on the priority, which is also derived from the metadata for the data.
With reference to Figure 25 there is illustrated an exemplary process.
As noted by step 60, new content is created and received at the server 2. In a preferred example, the received content comprises data and associated metadata. In the described example, it is assumed that the metadata is transmitted to the server with the data itself, although in alternatives the metadata for the data may be generated within the server 2.
In step 62, the metadata for the content is submitted to a grouping service.
In step 64, the data itself is submitted to a content analysis service.
In step 66, the content analysis service analyses the content and derives further metadata about it. This step may be repeated several times as required.
In step 68 this additional metadata for the content is stored.
In step 70 the metadata for the content is aggregated. That is, the received metadata is aggregated together with the metadata generated by the content analysis service.
In step 72, whenever aggregate data changes, these changes are submitted to a content manager. The metadata is used in order to determine the matching up of content to a request, and therefore storing of any changes to aggregate data is important.
In step 74, a content matcher compares the new or changed metadata against the list of content requests which are pending.
In step 76, it is determined if any matches are identified. If no matches are identified, then in step 78 new content is awaited.
If a match is identified, then in step 80 the matches are added to a list of potential matches and recorded against each identified content request. The process then moves to step 78 to await further new content.
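Steps 70 to 80 might be sketched as follows: metadata for each content item is aggregated and, whenever it changes, compared against pending content requests. The request structure and the simple tag-subset matching rule are illustrative assumptions.

```python
# Illustrative metadata aggregation and content matching (steps 70-80).
aggregate_metadata = {}          # content id -> set of tags
potential_matches = {}           # request id -> list of matching content ids

def on_metadata_change(content_id, new_tags, pending_requests):
    tags = aggregate_metadata.setdefault(content_id, set())
    tags.update(new_tags)                                   # step 70: aggregate metadata
    for request_id, wanted in pending_requests.items():     # step 74: compare to requests
        if set(wanted) <= tags:                             # step 76: match identified?
            matches = potential_matches.setdefault(request_id, [])
            if content_id not in matches:
                matches.append(content_id)                   # step 80: record the match

pending = {"req-1": ["sunset", "pyramids"]}
on_metadata_change("clip-7", ["sunset"], pending)
on_metadata_change("clip-7", ["pyramids", "desert"], pending)
# potential_matches -> {'req-1': ['clip-7']}
```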
Content Driven Discovery
This relates to content finding users, rather than users going looking for content. This provides a solution to a problem of live content failing to find an audience while the content is still live.
Metadata is used about streams a viewer has watched
to build up a profile of what that user may wish to watch in future. Further, metadata generated about a new stream, as it happens, may be used to match it to viewers who may be interested in watching it.
This feature allows content to go in search of viewers rather than assuming viewers will find it. This may be especially important for live content that may not last long, or may be from a new contributor. In these cases, it may be essential that content creators begin to see that their work is getting the best possible exposure so that they are encouraged to keep creating.
As viewers watch and engage with content, the system may maintain a list of characteristics that it thinks each viewer likes. It might do this by using the tags, descriptions, locations, or events a stream belongs to or specific metadata from moments at which a viewer taps their screen (e.g.) to say they like a stream. By whatever means, the system is able to track each user's tastes and preferences over time.
When a new stream begins and metadata about it becomes available, the system will match it to viewers who have liked similar things before. By sending a notification to those users to come and watch the new stream, an audience can be mustered for any stream. This means that the potential audience for a stream is not just the contributor's followers or friends, as might be the case with existing platforms, but indeed anyone who might enjoy it - whoever and wherever they are.
Topics/tags/themes/creators that are particularly interesting to a viewer can be followed. This gives the system an explicit signal to curate future content for the viewer that matches the topic/tag/theme, or which
comes from a followed creator.
For a customer, for events that give rise to a large number of user-generated streams, this approach to discovery allows the customer to reach viewers and bring them to the content in a more meaningful, more personally relevant way. A viewer is not simply brought to the event's landing page but is, instead, brought to a stream that suits their tastes - potentially suiting their tastes exactly.
Being able to see the relevance of an event's content immediately in this way preferably encourages viewers to stay for longer and explore more of what is on offer
For a contributor end user, there is provided a quicker and easier technique to build an audience. It does not depend on having streamed before. As soon as the system can categorise and/or tag and/or analyse the content, viewers can be found to watch it.
For a viewer end user, there is a more natural discovery process. It is harder to miss interesting content. It is easier to find entirely fresh material from all-new creators.
Invite Contributors
Users who have created videos containing particular types of content (as indicated by the metadata associated with those items of content) can be enumerated so that other users can invite the best creators to help them shoot new content.
With this technique, more particularly, users who have contributed to a particular type of event in the past, or who have built up a reputation for quality
contributions on a given topic, may receive notifications to begin contributing new content to a new event if they are nearby or close to one of an event's points of interest.
This ties into a general concept of a content marketplace, in that content creators with a track record of capturing particular types of content may be invited to contribute to new events.
For a customer, this encourages organic growth of a group of users adding content of an event. By matching contributors with the content they have shown they are good at capturing, the overall quality of available material will improve. More contributors means likely more viewers, and a wider range of high quality submissions from which to choose material to feature or publish to a published stream. More users of all kinds involved with an event builds its profile and exposure to the wider public.
For a contributor end user, it is possible to create multi-camera content without creating a formal group of contributors first, drawing instead on those users who may wish to collaborate on an ad-hoc, casual basis. This creates a greater sense of community amongst content creators.
For a viewer end user, better quality content may be offered as a result of creators actively helping to improve/augment events with their own contributions.
Auto Geofence
Metadata may be used to group streams in a geographic area, and create a geofence around it. This is an example use-case of grouping but contains potentially
further interesting details about how to detect outliers that improve the geographic grouping approach.
Having identified one event (by whatever means) around which it is possible to mark a geofence, future streams appearing within that area and which have some other commonality with past streams associated with other events within the area may be marked as potentially being part of a new event.
For example, at a football stadium a rights holder may create a geofence for a match and collect all streams originating within it. At a later date, a new stream starts up and part of its metadata indicates that it is of a football match. The system may speculatively create a new event with a geofence matching the previous one in order to catch more relevant streams more quickly and therefore provide better coverage of the whole event.
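A sketch of re-using a previously identified event's geofence follows. The fence is approximated here by a centre and radius derived from past stream locations, and the tag test mirrors the stadium example; the margin and the matching rule are illustrative assumptions.

```python
# Illustrative geofence creation and speculative re-use for a new stream.
import math

def geofence_from_streams(locations, margin_m=100):
    """locations: list of (lat, lon) from streams of a past event."""
    lat = sum(p[0] for p in locations) / len(locations)
    lon = sum(p[1] for p in locations) / len(locations)
    def dist(p):
        dx = (p[1] - lon) * 111_000 * math.cos(math.radians(lat))
        dy = (p[0] - lat) * 111_000
        return math.hypot(dx, dy)
    return (lat, lon), max(map(dist, locations)) + margin_m

def maybe_create_event(new_stream, past_fence, past_tags):
    (lat, lon), radius = past_fence
    dx = (new_stream["lon"] - lon) * 111_000 * math.cos(math.radians(lat))
    dy = (new_stream["lat"] - lat) * 111_000
    inside = math.hypot(dx, dy) <= radius
    related = bool(set(new_stream["tags"]) & set(past_tags))
    return inside and related   # if True, speculatively open a new event with the same fence
```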
There are certain locations in which people often gather together for one kind of event or other. This includes stadia, conference centres, transport hubs, shopping malls, festival areas and so on.
Past events, having been identified, can be used to improve the speed with which future ones are detected. This may not allow the system to work out the type of event (e.g. the difference between a football match on one day and a music concert on another day at the same stadium), but the ability to mark streams as likely belonging to an event sooner rather than later may ensure that more streams get correctly identified and included sooner.
For a customer, streams belonging to the event, identified sooner, means that the customer gets to start showcasing their event to viewers quicker. Early streams
that might otherwise have been missed or excluded from the event will now be more likely to get correctly identified and tagged, meaning that the customer's event pages - for example their event pages in a content delivery application - carry a more complete picture of the event.
For a contributor end user content is not overlooked that should be featured in an event owner's content pages. It is possible for a contributor to get more viewers for their content when it becomes included within an event owner's pages. There is also more chance of kudos for a contributor at being featured by the event owner.
For a viewer end user, there are fewer missed moments from an event.
Automated Interactivity
Watching video, even spontaneous live video, can often be a passive activity. However in some cases it is desirable or preferable to engage with an audience more directly through timely elements of interactivity. These engagement opportunities might appear during particular moments of content that a publisher wants to get feedback on (voting, rating, scoring etc.); a measure of overall level of engagement amongst the audience might be important etc.
Such interactivity is possible on a traditional television broadcast but comes with an overhead to manage, time, collate, report, and filter interactive elements.
It is therefore proposed to provide a modification to automate this. A number of possible techniques are envisaged:
Firstly, an example is to provide voice recognition that allows the system to process the spoken word and pick out key phrases or structure that can be automatically converted into instant interactions. For example, hearing "What do you think of X?" might be converted into a pop-up on a viewer's screen with the text and a text entry box for submissions, as illustrated in the sketch following these examples.
Secondly, an example is to pre-configure swipe/gestures to insert particular interactions at certain moments.
Thirdly, an example is to provide image/shape recognition so that whenever the camera sees, e.g., a can of a particular drink in the image, it shows viewers an overlay that they can tap to get a scannable QR code to redeem a can at a discount.
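For the first example above, the conversion of a transcribed key phrase into an on-screen prompt might look like the following sketch; the regular expression and the interaction structure are simplistic placeholders rather than part of the described system.

```python
# Illustrative key-phrase trigger turning transcribed speech into an interaction.
import re

QUESTION = re.compile(r"what do you think of (?P<topic>[\w\s]+?)[\?\.]", re.IGNORECASE)

def interaction_from_speech(transcript):
    match = QUESTION.search(transcript)
    if match:
        topic = match.group("topic").strip()
        return {"type": "text_entry", "prompt": f"What do you think of {topic}?"}
    return None

# interaction_from_speech("So, what do you think of the new kit?")
# -> {'type': 'text_entry', 'prompt': 'What do you think of the new kit?'}
```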
Use cases for this modification include: TV; level of engagement; donations (see later); or PPV.
To a customer, a value is added with an option to measure the level of engagement of the viewing audience which can be used to price advertisement spots, shape future creative decisions etc. Deeper analytical information about the audience is made available. Audiences are kept engaged for longer - rich, rewarding interactions will stop the attention of a viewer from straying elsewhere.
For a contributor end user, a cameraman can spend more time shooting the content and less time curating interactions, and yet still be able to rely on the system to produce meaningful interactions that will generate revenue/intelligence/engagement.
For a viewer end user, interaction - particularly with a live event - creates a more intimate experience of the content. Done well, it makes viewers feel more connected to the event and more likely to want to repeat it.
Event Montage
The metadata created for a stream can be used in an auto-zeitgeist/content summarisation process. The method may also be used in combination with creating an event montage. An event montage may include stitching video streams together to make a panoramic series of shots, or sequencing parts of contributing streams in order to, for example, follow a particular car/horse in a race; focus on one or more individuals in attendance; pick out clips that are about a particular aspect of the event and so on.
In this way, an event montage is effectively a condensed or summarised version of the whole or multiple captured video streams. It may also offer the additional flexibility of having access to overlapping pieces of captured video streams from which to choose.
The creation of an event montage may become accessible by generating metadata at the capture device and by providing metadata on a scene level. Further metadata may be generated or added to provide the system with information on multiple captured video streams of the same event and to create different event montages of the same event for the ability of the viewer to choose between alternative camera angles of any given moment. The method may add a new dimension for viewing streams that enables the viewers to engage and that gets the viewers interacting with the system more often.
It may offer a greater playback flexibility for the viewer who can watch during or after an event and see the whole thing. Viewers can capture an event themselves and in combination with different captured video streams from the same event, the viewer can collect footage he never shot himself but which shows him in attendance at the event .
The viewer may be able to dig into a moment and see it from every angle no matter who contributed the captured video streams.
During or after an event (if content is stored for later playback) viewers may ask the system to create a montage of streams from the event that tell a coherent story of it - a so-called event montage.
While watching a montage, a viewer may pause and ask the system to show alternative camera angles of any given moment.
For customers, this adds a new dimension to content that will engage viewers and get them interacting with the service more often.
For end user viewers, greater playback flexibility is provided - a viewer can watch during or after an event and see the whole thing. This is great for reviewing what the last event was like if you are thinking of going to the next one. A viewer can track themself at an event, and/or collect footage they did not shoot themself but which shows them in attendance at the show. This provides an ability to dig into a moment and see it from every angle no matter who shot it.
Protecting Rights Holders
Rights to content originating from certain public
events such as sports matches, tournaments, music concerts and festivals, amongst others, are typically held by broadcasters, publishers and other similar organisations.
The proliferation of live-streaming applications on consumer devices however has made it increasingly easy for people to capture and distribute content that impinges on these rights. While some users do this unwittingly given the simplicity, convenience and fun of doing so, others do so maliciously.
Whatever the intention, these streams will necessarily dilute the rights holders' own output and may even damage the brand of the publisher or of the event itself.
Inappropriate content, poor quality recordings, content that shows talent unfavourably or that reveals part or parts of the event not intended for public consumption would preferably be blocked.
On the other hand, high quality content that conveys the intended and/or positive aspect of the event would be extremely valuable to the rights holder. As such, the distribution of this kind of user-generated content ought to be permitted if it can be suitably branded and optionally wrapped with a sponsor or other advertising material before it reaches the Internet.
It is this use-case, amongst others, that an architecture such as Figure 1 can fulfil.
A first class of problem associated with rights holder content can be summarised as: how can the system, when being used by a consumer to capture content, identify this content as potentially infringing on someone's rights?
Having identified these streams, they need to be routed to the rights holder for them to decide whether a given stream should be blocked or should form part of their approved and branded user generated content (UGC).
Clearly with popular events or events that take place over a long period of time, it would be impractical to require the rights holder to process these approvals manually. Not only would this impose a significant resource overhead on the rights holder, but it would take so long to moderate the queue that the content would no longer be live or fresh.
A second class of problem is therefore: how can each new stream be moderated automatically? What parameters might a rights holder wish to specify in order to define a stream as acceptable or not?
Of course, users are not required to use an application associated with the server 2 to capture live content, and they may use a different platform that does not check for potential rights infringements.
In these cases, it would be beneficial if the rights holder had some way of determining whether a suspect live stream, recording or clip originated from an approved stream or not. Unapproved streams can be flagged as infringing and a take-down request issued.
Watermarking a stream would provide a useful means of checking whether it is approved or not. However, such a method would need to be computationally lightweight so that it could be inserted by the handset (e.g. phone) itself, so that there is no requirement for any significant back-end processing to add it.
Making the watermark difficult to remove is not as important as making it difficult to fake. A malicious
user capable of identifying a watermark on a genuine stream should not be able to transpose this to an unapproved one. A visible watermark, while a useful marketing tool, would therefore not be sufficient.
A third class of problem is then: how can we identify content originating from outside the system as being potentially infringing? Further, can we create a watermarking system that is lightweight enough to be performed by, e.g., a smart phone, but computationally complex enough that it is difficult to fake so that, by checking for a watermark, potentially infringing external streams can be validated?
It may appear trivial that any external stream identified as potentially infringing a set of rights that must be policed is, by definition, unapproved and therefore infringing. However, this is not the case. Clips shared to social media, re-syndicated content, content captured from a curated stream etc. may all originate from approved content and yet appear to be external.
Protecting Rights Holders (Detail)
As described above, rights holders have particular problems with user generated live content.
Not only must they protect the brand they represent by ensuring that only good quality video content of the event is available to viewers, but they must also be able to satisfy themselves that their own direct offerings are neither diluted, undermined nor contradicted by user generated content that may make it out onto the internet.
There are enough ways for users to acquire, distribute and redistribute video content on the internet
to make the volume of possibly infringing content vast.
For example: a user might create a stream using the Meerkat or Periscope applications at an event; a user might stream a TV image of an event; a user might clip a video from YouTube or Snapchat; a user might record a moment on Instagram and post to Facebook or add a video on Twitter.
Each of these outlets - and there are many more - may provide the rights holder with many video clips which they suspect may infringe their rights.
There is no practical way that a rights holder can arrange for each of these suspect videos to be checked manually. There may be hundreds of streams that the rights holder has approved, of varying lengths, and a suspect clip may originate from any of them. Moreover, the clip may have been taken from a video some time earlier, making the task of finding the original source, and validating it as an approved clip, even more difficult.
The problem therefore is: how can a rights holder automatically examine a collection of videos, from whatever sources, and determine which, if any, were taken from or represent approved streams and which do not so that appropriate action can be taken against them?
A solution to this problem is now described.
Each stream is provided with a hidden watermark embedded within it. This watermark encodes information that identifies the device on which it was captured; the owner of the device; and a moment-by-moment reflection of the time at which the content was captured.
When these streams are moderated by the rights holder of an event, three actions can be taken. The
rights holder may accept that the content is of their event and approve it. The rights holder may accept that it is part of their event and reject it. The rights holder may indicate that it does not belong to their event at all. The watermark details of approved and rejected streams are added to approved/rejected lists for that event. These lists therefore describe which devices, which users and between which times content was approved or rejected for any given event.
Streams that the rights holder did not consider were part of their event are allowed - for example, the location of a stream indicated that it might be part of the event, but in fact it was not.
In this way, a system may be constructed which receives suspect videos, of any duration and from whatever source - for example, clips shared on Facebook, streams republished on other platforms, downloadable videos on file sharing sites etc. - such that they may be checked for the presence of a watermark.
Where a watermark is discovered, the device, user and timestamp are decoded for each moment and checked against the complete list of event approval and rejection lists for all rights holders.
Suspect videos that pass this test can therefore be shown to have originated from an approved stream and no action needs to be taken. Similarly, a watermark found in neither set of lists likewise passes the test since it did not fall within any known rights holders' events.
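The check described above might be sketched as follows: each decoded (device, user, timestamp) moment is tested against every rights holder's approval and rejection lists. The data shapes are illustrative assumptions.

```python
# Illustrative watermark lookup against approval/rejection lists.
def classify_suspect_video(decoded_moments, event_lists):
    """decoded_moments: [(device_id, user_id, timestamp), ...]
    event_lists: {event: {'approved': [...], 'rejected': [...]}} where each
    entry is (device_id, user_id, start, end)."""
    for device, user, ts in decoded_moments:
        for lists in event_lists.values():
            if any(d == device and u == user and s <= ts <= e
                   for d, u, s, e in lists["rejected"]):
                return "infringing"              # matches a rejected stream
            if any(d == device and u == user and s <= ts <= e
                   for d, u, s, e in lists["approved"]):
                return "approved_origin"         # originated from an approved stream
    return "not_in_any_event"                    # passes: outside all known events
```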
Content with no watermark is more difficult to process. Additional metadata about the stream needs to be considered if it is available.
Social media posts often carry geolocation data
about where an update originated. This can be used by the system to see if the video may have been created (e.g. through Facebook, Twitter, Instagram etc.) in a location that fell within the spatial and time boundaries of a rights holder's event. If it did, then the suspect video is passed through any automated moderation process defined for that event in order to improve the level of confidence in the assessment.
At this point, if the video is felt to have originated from a protected event, a take-down notice may be issued to the publisher since any approved content must have a watermark which appears in the approved list.
This system can therefore consume any seemingly live stream, a delayed stream, a video-on-demand (VOD) clip, or a VOD asset, analyse it and determine whether it was itself or is part of an approved stream.
A separate problem occurs with users who create streams that cover an entire event. For example: someone who broadcasts his point of view for the whole football match; someone who streams the full set of songs from a band's performance at a concert.
Users who engage in this are typically less concerned with sharing an interesting moment than with providing viewers with an unofficial route into premium content.
While a rights holder may have approved the content from such contributors because it falls within their quality and acceptability criteria, a further refinement would allow them to bring streams to a close that last longer than a preset duration.
Contributors falling into this category can start another stream, but they should be locked out of doing so
for a period if their location suggests they are still at the event.
This allows the body of user generated content to be populated with genuinely interesting vignettes, samples of life, rather than bulk uploads of the entire event.
There are other examples where the rights holder may exert some control over how content is captured at their events. This particular example looks at limiting the duration of such a stream, but the rights holder may also request the contributor to switch to a higher bit-rate/quality or similar, for example.
The other issue discussed as a problem is screen-capture streaming. This involves someone using their capture device to stream something that is shown on a traditional television. Currently, this is the biggest problem experienced by rights holders and live-streaming applications.
There are two possible approaches here.
Firstly, the live streaming applications running on the capture devices may include software that is able to detect the presence of a TV screen in the image.
This is achieved by monitoring the captured video, locally on the capture device itself, and using known algorithms to detect the presence of rectangular features in the field of view.
This detection can be limited to rectangles with a particular aspect ratio (such as 4:3 and 16:9) so that rectangles similarly shaped to TV screens may be picked out. Algorithms can be chosen which are able to detect rectangles irrespective of their orientation with respect to the viewer, so a rectangle seen side on would be detected as readily as one seen face on.
For each identified rectangle, the capture device transforms the contents so that they are mapped to a proper rectangle thereby eliminating any skewing or other deformation arising from the orientation of the TV to the viewer. The result is a rectangle of the given aspect ratio filled with the transformed content from the field of view, as shown in Figure 21.
These one or more transformed parts of the captured video may then be further analysed to determine if they do indeed contain TV-style video. Contents tracked from one captured frame to the next allows a "video stream" for each suspected TV to be constructed from which a video fingerprint can be calculated and submitted to a server. The server may use this fingerprint to determine if the rectangle does indeed contain a TV signal and, if so, whether it is acceptable to retransmit this as part of another stream. If not, it may instruct the capture device to blank it out, as shown in Figure 22.
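An illustrative pass over a captured frame, looking for quadrilaterals with a TV-like aspect ratio and warping each to an upright rectangle as in Figure 21, might use a library such as OpenCV along the lines below. The thresholds are assumptions, corner ordering is not normalised in this sketch, and the fingerprinting and server check are out of scope here.

```python
# Illustrative rectangle detection and perspective correction (OpenCV 4.x API).
import cv2
import numpy as np

def candidate_screens(frame, ratios=(4 / 3, 16 / 9), tol=0.2):
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(grey, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    screens = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) != 4 or cv2.contourArea(approx) < 1000:
            continue                               # not a plausible screen
        rect_w, rect_h = cv2.minAreaRect(approx)[1]
        if min(rect_w, rect_h) == 0:
            continue
        ratio = max(rect_w, rect_h) / min(rect_w, rect_h)
        if not any(abs(ratio - r) <= tol for r in ratios):
            continue                               # keep only TV-shaped rectangles
        pts = approx.reshape(4, 2).astype(np.float32)
        w, h = 640, 360                            # map to a nominal 16:9 rectangle
        target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        warp = cv2.warpPerspective(frame, cv2.getPerspectiveTransform(pts, target), (w, h))
        screens.append(warp)                       # candidate content for fingerprinting
    return screens
```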
In Figure 22, it can be seen that the BBC News channel has been detected within the capture stream and the server has instructed the capture device to edit it out. This blanking out would be achieved by the capture device so that no stream it produced and provides to the live streaming server contains any protected content.
Viewers, seeing this, may be further offered the option to pay for access to the protected part of the video, whereupon they would be taken to an approved source of the content, or offered direct access to free- to-air but approved versions instead.
This approach depends on having detection software in the capture device itself - such a solution may be beneficial to existing live streaming platforms wishing
to proactively prevent their users abusing rights holders' content.
A second approach uses a similar detection process but analyses streams that are already being published.
Such an approach might be used by a rights holder to monitor content found on the internet and pick out those streams, on-demand assets or clips which feature TV footage of their content.
Video Challenge
Video challenge depends upon grouping streams by similarity.
In summary, a collection of media assets, whether live streams, pre-recorded or otherwise, or any part of either, or any combination thereof, grouped in dependence on one or more items of common metadata may be provided to a function by which these assets are rated in comparison to each other.
For the avoidance of doubt, a media asset is an item of content.
Common metadata here is taken to mean there is a non-trivial intersection between the sets of metadata associated with each stream.
The rating function may be a computer program that evaluates the assets, or the assets in combination with their associated metadata, in order to determine a relative ranking from which a comparative rating may be derived.
The rating function may be a system that presents the assets to one or more users for them to provide comparative ratings.
When presenting assets to a user, the system may
present all assets at once so the user can provide comparative ratings to each asset while having visibility of all the others.
Alternatively, the system may present two assets from the collection and prompt the user to choose the asset they would rate lowest. The lowest asset is then replaced, in the user's display, with another asset from the collection. The user is then prompted to repeat the rating process.
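The source does not prescribe how these pairwise choices are converted into a comparative rating; one plausible sketch is an Elo-style update, assumed here purely for illustration, where the stream that survives the comparison "wins".

```python
# Sketch: turn pairwise "keep / dismiss" choices into comparative ratings.
# An Elo-style update is an assumption, not a scheme named by the source.
K = 32  # sensitivity of the rating update

def elo_update(rating_kept, rating_dismissed, k=K):
    """The kept stream 'wins' the pairwise comparison; both ratings are adjusted."""
    expected_win = 1 / (1 + 10 ** ((rating_dismissed - rating_kept) / 400))
    delta = k * (1 - expected_win)
    return rating_kept + delta, rating_dismissed - delta

# Usage: when a viewer dismisses stream B in favour of stream A
ratings = {"stream_a": 1500.0, "stream_b": 1500.0}
ratings["stream_a"], ratings["stream_b"] = elo_update(ratings["stream_a"], ratings["stream_b"])
```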
Having discovered a collection of possible streams to watch, perhaps grouped by theme for example, a viewer may be presented with two streams playing side by side. At any moment, the viewer may choose to dismiss one as less interesting than the other. Its place is then taken by the next live stream in the list.
In this way, the viewer is exposed to more content than they would otherwise be; contributors get exposure to a wider audience; good content rises to the top along with their creators' reputation; and the system learns more about what a given viewer does and does not like.
A customer may wish to push the best content to their published stream. While it may be difficult to sort through this content manually, this video challenge feature adds a degree of crowd-sourced quality checking to what may be a vast number of streams. This can be used, along with prioritisation techniques described here elsewhere, to sort the list of possible streams and allow the customer to more easily pick out the streams they wish to feature.
A greater understanding of the audience is provided, and the ability to improve the quality of future recommendations is provided.
For the contributor end user, content gets exposure to a wider audience with greater opportunities to receive and act on feedback from those viewers. Better exposure increases the incentive to create more content. More timely feedback allows the contributor to truly respond to the desires of their audience and create a genuinely bi-directional experience. This helps build an audience beyond a contributor's followers and friends.
For the viewer end user, there is provided a more engaging content discovery process allowing the viewer to drill down to that perfect live moment with minimal effort.
Content Marketplace
Requests for content, or "calls", might be requests to film specific people or events, but they are not necessarily limited to such requests. They could be requests to:
1. Film a clip of the sunset over the pyramids while looking out of town, so there is only the desert and the pyramids.
2. Film a "how-to" clip that shows how to correctly hang a painting.
3. Film a shopping trip to Macy's.
The "put" might be fulfilled automatically by the system when it sees a stream that happens to match the requirements, or it may be fulfilled explicitly by someone picking up the order and posting a scheduled event that will feature the requested content.
Users can up or down vote each other based on whether the live stream matched what they asked for or not.
Finding content to watch in a live-streaming environment can be challenging. Indeed, this description includes several concepts seeking to address various aspects of a discovery process for ephemeral content, such as ad-hoc live streaming, which presents unique difficulties.
In this discussion the concept is turned on its head, to address the problem by allowing viewers to request the creation of content. Thus a content marketplace is created.
When searching for content or compiling a playlist of live and near-live streams, not every request can be fulfilled successfully. Some themes, terms or combination of criteria may not yield any results at all; others may exist but be out of date or not be a sufficiently close match.
Traditionally, the details of these failed searches and fulfilment requests have simply been ditched. However, it seems foolish to discard such valuable intelligence about what the audience actually wants to see on the platform.
The problem therefore is: how can we use viewers' search and fulfilment requests to drive content generation throughout the platform as a whole?
As part of the discovery function, the viewer may describe the content they are looking for using free-text which in turn is broken down semantically, by the system, into a series of key words and phrases.
These tags are used to search the current collection of live streams for matches that might fulfil the viewer's needs. Live content is similarly tagged, in real time, to describe what's going on in the video using an
automated process that analyses both the video and audio aspects of the material - this process is covered elsewhere.
If no good quality matches are found, or if the viewer does not quite see what they want, they may use these tags to create a request for new content. A viewer may also skip straight to this request stage if desired.
Such a request contains at least a list of tags that describe the required content but may also include a message with more detailed requirements; a deadline for content and notes on intended usage. The requester may also offer a fee, or request particular sets of reuse rights. Likewise, they may describe, using tags, things which should not appear in the content.
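Such a request might be represented, purely by way of example, as a record along the following lines; the field names are illustrative and not taken from the source.

```python
# Sketch of a content request ("call") record built from the fields described above.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ContentRequest:
    required_tags: List[str]                                   # what must appear in the content
    excluded_tags: List[str] = field(default_factory=list)     # what must not appear
    message: str = ""                                          # free-text detail for contributors
    deadline: Optional[datetime] = None                        # latest acceptable delivery time
    intended_usage: str = ""                                   # notes on how the content will be used
    offered_fee: float = 0.0                                   # optional payment to the contributor
    requested_rights: List[str] = field(default_factory=list)  # reuse rights sought
```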
Requests, or "calls" for content, to borrow a financial trading term, are entered into an online catalogue that other users may browse. This catalogue is indexed and categorised based on the tags, fees, rights requested and so on.
Requests may be fulfilled in one of two ways.
A contributing user may claim a request by offering to provide the live stream as described. During the stream, the system performs its usual audio-visual (AV) analysis of the content and checks off those tags which should or should not appear in the stream.
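The tag check during such a claimed stream could look roughly like the sketch below, assuming the automated AV analysis yields a set of detected tags for the stream.

```python
# Sketch: check the tags produced by the AV analysis against a claimed request.
def check_fulfilment(detected_tags, required_tags, excluded_tags):
    """Return which required tags are still missing and which forbidden tags appeared."""
    detected = set(detected_tags)
    missing = set(required_tags) - detected        # required tags not yet seen
    violations = set(excluded_tags) & detected     # forbidden tags that did appear
    return {
        "fulfilled": not missing and not violations,
        "missing": sorted(missing),
        "violations": sorted(violations),
    }
```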
This contributor may also, by arrangement with the requester, submit pre-recorded content in cases where a request cannot be fulfilled at a time that is convenient for the requester to watch live (e.g. 'I'm in London but want a video of the Sydney Opera House at daybreak').
Requesters may accept or reject a particular submission. Rejections mean that the request stays open
for another user to attempt to fulfil it, but also that any rights remain with the contributor.
The second fulfilment method is an automatic one. As other users engage with the platform, the AV analysis may, from time-to-time, match a stream with a requester's needs. These unintentional matches are made available to the requester to review, either as they happen or later.
If the requester believes a stream meets their requirements, they can mark the request as fulfilled or, where a payment or rights are required, the same terms are offered to the contributor who may accept or reject them, closing or leaving the request open respectively.
So, rather than this being a marketplace of sellers, this is a market of buyers looking for contributors to create suitably crafted output.
Requesters may similarly create an open request for content that alerts them any time interesting content becomes available. This feature, while using much of the same logic, delivers more of an active search function that allows content to find viewers in real time.
This last point is worth emphasis.
When dealing with live content where contributors may be broadcasting for only a short time and where they lack the time or convenience to describe accurately what they are filming, passive discovery is unlikely to be successful.
The chances of a viewer being on the system and searching for something at the same moment as a contributor lights up their camera and broadcasts is slim.
What is better is to allow viewers to articulate their needs (or have the system deduce them), and let the system actively
monitor for suitable content and bring the user to the app as it finds matches.
Leaving aside the purchasing/rights marketplace feature for a moment, the exchange-like function the system provides by matching requesters with contributors where their needs and output overlap is considered.
As has been described above, requests, or active searches, may be matched explicitly by contributors seeking to fulfil the request, or casually by contributed streams that happen to match.
In reviewing the content, howsoever received, the viewer may up or down vote the stream based on how well they perceive the content matches the search. These votes affect that contributor's standing or reputation score as a provider of material categorised in each of the tags present.
These scores affect direct contributors and casual match contributors differently. A direct contributor's standing will be affected on all tags that were meant or not meant to appear in the content, while a casual contributor's standing is only affected for tags that actually appear in both their content and the request.
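A sketch of how such per-tag reputation updates might distinguish direct from casual contributors follows; the data shapes and the unit vote weight are assumptions for illustration.

```python
# Sketch: apply an up/down vote to a contributor's per-tag reputation. A direct
# contributor is scored on every tag of the request; a casual match only on tags
# that appear in both the content and the request.
def update_reputation(reputation, vote, request_tags, content_tags, direct):
    """reputation: dict tag -> score; vote: +1 (up) or -1 (down)."""
    if direct:
        affected = set(request_tags)                       # all requested tags count
    else:
        affected = set(request_tags) & set(content_tags)   # only overlapping tags count
    for tag in affected:
        reputation[tag] = reputation.get(tag, 0.0) + vote
    return reputation
```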
Reputation scores backed by a body of work will be vital for contributors hoping to engage with paid-for requests. The marketplace concept can now be returned to.
One way in which live content has become increasingly valuable, though it is perishable still, is with traditional broadcasters looking for exclusive footage of events. News, sports, cultural, music, demonstrations, etc. are all situations from which broadcasters have a need to source content. This can be expensive to do using in-house resources, but cost-
effective by crowd-sourcing it.
The content delivery technique described herein, with its content marketplace, is able to fulfil those needs. As mentioned above, requesters may configure active searches/requests for content and wait for user- generated content to be matched.
Users willing to enter into transactions that allow their content to be sold can find their streams being picked up as they happen by, e.g., news organisations or similar and fed directly to websites and TV stations as events unfold.
This automated matching of required tags in a search with a broadcaster's needs, together with the ability to specify a reputation threshold that a matched contributor must meet or exceed, allows the organisation to source high quality exclusive content at a fraction of the cost of traditional outside broadcast facilities.
In a supplemental aspect, a rights holder may offer a bounty to members of the public in exchange for capturing content of a poorly covered part of their event using live statistics.
Paywall
A paywall provides for automated pricing based on metadata. An extended ability to generate and process metadata at a capture device may also be used to offer more opportunities to monetise captured video streams.
Beyond the possibility of advertising added in and around live streams, there are currently very few monetisation opportunities with so-called user generated live content. In part, this is due to the perceived low quality of much of the output or, where quality is
assured, an uncertainty about how the audience will eventually come to value the medium. Only a few platforms and systems offer professional producers meaningful tools for generating revenue from their output.
The described method and system may address that gap through the ability to generate and provide metadata that also takes into consideration the monetisation process the system provider intends to accomplish.
System providers or publishers of the content wishing to monetize the viewing streams may configure open-access periods during their broadcasts (e.g. the first 2 minutes are free to all; certain subsequent portions are open; etc.), while setting premiums on the remainder. Viewers may gain access to the content on a pay-per-view basis or via a subscription. Metadata that allows for the distinction can be used to control the access.
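A minimal sketch of such metadata-driven access control follows, assuming the open-access periods are carried as time windows in the stream's metadata; the data layout is an assumption for illustration.

```python
# Sketch: decide whether a viewer may see a given moment of a stream, based on
# publisher-configured open-access windows carried in the stream's metadata.
def is_accessible(position_seconds, open_windows, viewer_has_access):
    """open_windows: list of (start, end) seconds that are free to all viewers."""
    if viewer_has_access:                      # pay-per-view purchase or subscription
        return True
    return any(start <= position_seconds < end for start, end in open_windows)

# Usage: first two minutes free, plus an open window around a highlight
windows = [(0, 120), (600, 660)]
assert is_accessible(45, windows, viewer_has_access=False)
assert not is_accessible(300, windows, viewer_has_access=False)
```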
A monetization process may also allow a subscription to a particular content creator's output, or to a cross-contributor selection organised by theme. For example: a subscription to 'cat videos' would provide access to any premium cat video output from any participating contributors, amongst whom proceeds would be distributed based on viewing figures. Metadata that are generated at the capture device may be used to support the organization into such themes as well as to consider contributors when allowing access to viewing streams.
Where a user of a capture device allows its captured video streams to be viewed on demand, similar rules may apply with some archival videos being offered for free, while others carry a premium.
As above, a subscription method that involves a
matching of criteria to which viewers have access with the metadata of captured video streams may support a dedicated offering mechanism, and with such a method contributors may control to whom their premium content is offered. Beyond traditional restriction options such as location, device-type and age, contributors may consequently also limit access to the captured video streams by user group.
For example, a university offers a range of online courses where participants may watch lectures live using the described system. The university does not wish to encourage full-time students to skip attendance and so limits the availability accordingly.
A similar example can be understood for streams arising from a music festival where organisers wish to allow those not attending to watch, but do not wish to stymie the flow of attendees from one performance area to another.
Conversely, contributors may offer their content for sale directly thereby operating a market for requests for content and submissions against the same.
Contributors may set thresholds for the sale of their content to broadcasters who may create automatic
bidding rules for particular categories of content (akin to the ad market Google operates with AdWords).
Donations
Donations allow a publisher to solicit donations from viewers as an alternative or supplemental revenue stream.
Church or voluntary groups, charities, schools etc. may create content that includes a fundraising drive. Donation buttons that launch a simple payment process can be placed over the live stream throughout an event or, by using a director or pro-publisher application, placed on screen manually at key moments or even automatically in response to certain phrases being said by the speaker.
To a customer, an additional revenue stream is provided without the hassle of having to create a donation funnel - a one-tap configuration sets up the donation option.
To an end user, a more palatable alternative to advertising is provided, along with the ability to donate to a cause without needing to switch to another application and lose track of what is going on in the event.
Live Statistics
Live statistics uses metadata that describes what happens in a section of content to create performance metrics. The system and described method may also be used to increase the quality and the usage of statistics of content consumption. Metadata that is generated at the capture device may provide considerable information on the captured video stream without increasing the load at the server side to an unaffordable level. Metadata on scene level may be used to infer statistics about how the
viewers of the viewing streams engaged with the video stream, far beyond pure information such as how many viewers consumed the stream or how long a viewer watched a viewing stream.
An aspect of the method may include matching the period, and corresponding time stamps, during which a viewer watched a viewing stream with the metadata associated with those time stamps in the video.
The method may include the generation of metadata which allows for separating the captured video stream into chapters of different topics. E.g. metadata generated for a captured video stream recording a public political debate can be used to find and provide headlines for the topics the debate is about. Consequently, the metadata of the captured video stream, representing a clear correlation between time stamp and topic, may be compared to statistics of a viewer's consumption behavior. Further interpretation of what viewers find interesting - and what not - can be accomplished.
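As an illustration of the kind of matching described above, the following sketch estimates viewing time per topic from a viewer's watch intervals and chapter metadata; the data shapes are assumed for illustration.

```python
# Sketch: combine a viewer's watch intervals with chapter metadata (topic + time span)
# to estimate how long each topic was watched.
def seconds_watched_per_topic(watch_intervals, chapters):
    """watch_intervals: list of (start, end); chapters: list of (start, end, topic)."""
    totals = {}
    for w_start, w_end in watch_intervals:
        for c_start, c_end, topic in chapters:
            overlap = min(w_end, c_end) - max(w_start, c_start)
            if overlap > 0:
                totals[topic] = totals.get(topic, 0) + overlap
    return totals

# Usage: a debate split into two topics; the viewer skipped most of the second
chapters = [(0, 900, "housing"), (900, 1800, "transport")]
print(seconds_watched_per_topic([(100, 950)], chapters))  # {'housing': 800, 'transport': 50}
```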
This may also be used in an aspect in which the live streams are captured and metadata is generated in parallel to the stream. Real-time analysis of the viewer's consumption behavior, in consideration of the metadata generated during a speech, may be accomplished to provide real-time feedback of the viewer's behavior to the speaker with regard to what the speaker talked about.
Statistics, or better analytics, are applicable to many questions and aspects of a video event and in particular to live events. In the following description real-time analytics or real time stats are discussed,
meaning when particular selected measures are shown to, e.g., a speaker in real-time during a speech.
As a consequence, this discussion is about anything that becomes valuable because it is fed back directly during the speech. It is not about analytics and statistics whose conclusions are best analyzed later on, e.g. for any improvement for the next event (i.e. historical statistics).
In addition, this discussion is about statistics whose statements are clear and recognizable immediately, as the speaker does not have the possibility to "analyze the analytics" during his speech, nor to react to aspects that need complex consideration to find a correct meaning related to the current speech.
Lastly, this discussion is about relevance and not about quality. Relevance means that the real-time statistics should help to answer very simple and clear questions from the user of the statistics (i.e. the speaker), so that it can help the speaker to react immediately.
This can be split into two possible use cases.
In a first use case, there might be very few questions that are clearly relevant for almost any kind of events where real-time statistics could help. That is, for instance, how many users are online and perhaps more importantly what is the current variation?
Such information can be displayed clearly and easily. It can indicate, for instance, at the start of an event that viewers slowly log into the event and start looking. There might also be some typical variation in the number of viewers due to a certain percentage that just enter a web page but leave it again after a few seconds.
But the real variation is much more valuable, as it could indicate that the audience is reacting to the event in particular and not showing the "standard behavior" of a typical live stream viewer on the internet. It could, e.g., show that the sermon of a priest is really appealing, indicated by a longer positive value of the variation; vice versa, a boring or even displeasing sermon would most likely lead to a longer negative value of the variation.
The issue here is that the live statistics are used to answer particular questions and show the answer in the simplest possible depiction, to illustrate the statement quickly and clearly.
Moreover, while the number of viewers may remain steady there may still be a degree of churn with as many new users joining as old users leaving. Providing an indication of this would give the speaker valuable insight into the "stickiness" of his audience.
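A simple sketch of how the two measures discussed here - net variation and churn - might be derived from join/leave events over a sliding window follows; the window length and event shape are illustrative.

```python
# Sketch: derive the net variation in viewer numbers and the churn (turnover)
# within a recent window from join/leave events.
from collections import deque
import time

class AudienceMeter:
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()          # (timestamp, +1 for join / -1 for leave)
        self.current_viewers = 0

    def record(self, delta):
        self.current_viewers += delta
        self.events.append((time.time(), delta))

    def _trim(self):
        cutoff = time.time() - self.window
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()

    def variation(self):
        """Net change in viewers over the window (positive = growing)."""
        self._trim()
        return sum(delta for _, delta in self.events)

    def churn(self):
        """Joins plus leaves over the window, even when the headline count is flat."""
        self._trim()
        return sum(abs(delta) for _, delta in self.events)
```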
In a second use case there might also be questions that are relevant for a specific type of event or a specific event within a series. Having said that, a speaker might want to "prepare" his real-time feedback screen to get immediate feedback on such a concrete question during that individual speech.
For instance, someone plans to preach about a theme that is particularly interesting for a certain portion of his audience, and he would like to know in real time whether he is met with encouragement or perhaps refusal.
The issue this addresses is something that needs a certain filtering, selection or perhaps processing of the actual real-time data that the system gathers and can provide.
Think about an example of talking about euthanasia in a religious manner. That is definitely something that could create strong emotions in a certain portion of the viewers, e.g. older people.
So showing the current variation of viewers that log in and out would rather obscure the answer to that question, as it would just indicate the audience's interest in general, rather than the feedback from the perhaps most emotionally reacting portion of the audience.
So focusing on the variation within that portion would be more relevant to the question of how much refusal the priest will meet, or whether he gets encouragement for his statements.
A further opportunity turns the live analytics question around somewhat: thinking not about the user capturing the content but about a rights holder to whom a collection of streams has been assigned by whatever means.
Such an organisation may wish to see, in real time, how their event is being covered by members of the public. Imagine a race track (horses, cars, etc.) or a golf course. Action may be concentrated in particular areas but there may, at times, be places where something unexpected is developing but which is not being covered.
Rights holders, in these cases, may decide to offer a "bounty" to contributors who help bring in additional coverage of a part of their event.
Real-time analytics, presented in a simple way that seeks to answer well-defined questions (such as the level of coverage an event currently enjoys across its extent), would be of value to rights holders and event managers alike.
The description of the real-time statistics shows that a solution for this idea needs, in the first instance, real-time data. The number of viewers and particulars of current viewers is one good example.
To ensure the relevance aspect, the speaker should be able to input his current interest, i.e. the question on which he wants feedback during the speech. That means that in a standard solution a pull-down menu of accessible real-time feedback options could be used to let the speaker decide what to get - in the discussed example, "the real-time variation of the audience".
In a more sophisticated solution a question could also be selected via a matrix of questions and subcategories for which data points are accessible. In the discussed example it would need, for instance, the age of the viewer gathered from profiles of the subscribed audience. A subcategory would then be "aged 60 and above".
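Filtering the real-time data by such a subcategory might look like the following sketch, assuming viewer profiles with an age field are available; the event and profile shapes are assumptions.

```python
# Sketch: answer a speaker-selected question such as "variation among viewers aged
# 60 and above" by filtering join/leave events on profile data.
def variation_for_subcategory(events, profiles, min_age=60):
    """events: list of (viewer_id, +1/-1); profiles: dict viewer_id -> {'age': int}."""
    net = 0
    for viewer_id, delta in events:
        age = profiles.get(viewer_id, {}).get("age")
        if age is not None and age >= min_age:
            net += delta
    return net
```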
Finally, an easy and clear depiction of the answer is mandatory. In the discussed examples there may be a permanently moving bar indicating whether there is an increase (green) or decrease (red). Alternatively, a pie that grows or shrinks, where the pie itself indicates the total number of viewers and its dynamics in size and color (again green for a growing number and red for a shrinking number) show the variation.
One can think further and provide simple little maps - for example four smileys that represent certain portions of the audience (age under 20; 20-40; 40-60; and over 60), whose size and color again show the variation and total number of the respective group. In addition, the smiley's facial expression indicates a third piece of information which is gathered from an additional data beacon.
The real point is that, depending on the accessible data points, there is some predefined information usable to give relevant feedback to a speaker during his speech; with more data points, multiple combinations arise that could help to answer more sophisticated questions in real time.
With reference to Figures 26 and 27 there are further illustrated examples in relation to augmenting data.
As illustrated in Figure 26, a capture device 12a is connected to a voice recognition module 428, an edit module 430, and a so-called director 432.
A set of captured images may be provided by the capture device 12a to the voice recognition module 428, in order to edit the video stream associated with the captured images. A video stream may then be transmitted, for example to the video streaming server, which has been appropriately edited.
As illustrated in Figure 27, a capture device 12a is connected to an interface to cloud services 438, an edit module 442, and a director module 444. Cloud translation services are provided as denoted by reference numeral 440.
In the example of Figure 27 the captured images are sent to cloud services for further editing, before being returned to the capture device, and then the capture device providing a video stream which is appropriately adapted .
These arrangements may allow the video stream
provided by the capture device to be augmented to include, for example, subtitles in different languages. The capture device may provide multiple video streams which have been augmented in different ways, in combination with the raw video stream.
With reference to Figures 28 to 31 there are further illustrated examples with reference to grouping.
With respect to Figure 28, there is illustrated an arrangement in which incoming video streams are grouped at a video streaming server in accordance with the recognition of fingerprints contained within the video streams .
A capture device 358 includes a capture audio sample module 362, a video content module 360, a wireless transmission module 366, and a clock 364.
A server contains first and second buffers 378a and 378b, first and second fingerprint recognition modules 380a and 380b, first and second caches 382a and 382b, and a fingerprint grouping module 382.
The capture device and the server are connected via a network 372, the network receiving a stream on line 368 from the capture device 358, and generating a stream A and a stream B on lines 374 and 376 to the server. The network also receives further streams as represented by line 370.
With reference to Figure 29 there is illustrated an arrangement in which various modules are utilised in order to group incoming video streams. In Figure 29 a server is configured to include a determine location module 400, a determine time module 402, a determine POV module 404, a determine other tags/characteristics module 406, an allocate stream to group module 408, a stream multiplexing module 412, and moderator modules 414 and
416.
With an arrangement such as Figure 29, incoming video streams may be grouped, for example, in dependence upon the time of the content of the video stream, the location of the capture device associated with the video stream, the point of view of the capture device of the video stream, or other tags/characteristics of the video stream associated with the device, or other metadata associated with a video stream.
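A minimal sketch of such metadata-driven grouping follows; the grouping keys are assumed for illustration and would correspond to whichever metadata the server chooses to group on.

```python
# Sketch: group incoming streams by a key built from their metadata, roughly as the
# Figure 29 arrangement suggests.
from collections import defaultdict

def group_streams(streams, keys=("event_id", "location_cell", "time_bucket")):
    """streams: iterable of dicts containing stream metadata."""
    groups = defaultdict(list)
    for stream in streams:
        group_key = tuple(stream.get(k) for k in keys)   # e.g. same event, area and time
        groups[group_key].append(stream)
    return groups
```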
A grouped video stream may be provided for viewing, or may be provided to a moderator or other element for further processing.
The video streaming server may receive a control signal in order to define a group, and the control signal may be received from a viewing device or from an event organiser .
Reference is now made to Figure 30, in which there is illustrated an interface 430, a memory 432, a comparator 434, an address module 436, and an interface 438.
Figure 30 illustrates an example in which a viewing device provides the content server with metadata which is stored in a memory of the content server, which metadata indicates tags or characteristics for which the viewing device is interested in receiving video streams, if the video streams are associated with those tags or characteristics. A received video stream is compared with the metadata stored in the memory, and if a match is identified then an address module attaches the appropriate address to the video stream, and provides it to an interface for delivery to the viewing device.
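The matching step of Figure 30 might be sketched as follows; the data shapes are assumed for illustration.

```python
# Sketch: compare an incoming stream's tags with interests a viewing device
# registered earlier and, on a match, address the stream to that device.
def route_stream(stream_tags, registered_interests):
    """registered_interests: dict device_address -> set of tags of interest."""
    stream = set(stream_tags)
    return [address for address, wanted in registered_interests.items()
            if stream & wanted]                    # any overlap counts as a match

# Usage
interests = {"device-17": {"football", "goal"}, "device-42": {"concert"}}
print(route_stream(["goal", "stadium"], interests))   # ['device-17']
```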
Reference is now made to Figure 31, in which there
is illustrated an interface 440, a memory 444, a comparator 442, an event allocate module 446, and an interface 448.
Figure 31 illustrates a similar example, in which the metadata history associated with viewing devices is stored in a memory. Thus a viewing device may not have indicated to the content server that they particularly want to receive video streams at this instant. Nevertheless the content server may compare the metadata of the incoming video stream with metadata associated with historical requests from viewing devices, and accordingly allocate the video stream to a viewing device, or to an event, in dependence on the comparison.
With reference to Figure 32 there are further illustrated examples with reference to priority.
With reference to Figure 32, there is provided a module to determine a reputation of a contributor 524, a module to determine quality of a stream 526, a module to determine compatibility of a search 528, a rank module 536, an edit module 538, a moderator module 542, a review module 544, a recommendation module 546, and a selection module 548.
As illustrated in Figure 32, an incoming video stream 522 may be processed by a streaming server in order to rank the video stream, and then allocated to one of various modules to provide streams 550, 560, 580, 582.
The ranking may be based upon the reputation of a contributor associated with a capture device, the determined quality of the video stream, or the determined compatibility of the video stream to search parameters provided by a search criteria. The incoming video streams are ranked, optionally edited, before being sent
out to a viewing device, optionally via a moderator, a review process, a recommendation process or a selection process.
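The ranking step might be sketched as a weighted combination of these factors; the weights and score ranges below are illustrative only and not prescribed by the described system.

```python
# Sketch of the Figure 32 ranking step: combine contributor reputation, estimated
# stream quality and compatibility with the active search into one score.
def rank_streams(streams, weights=(0.4, 0.3, 0.3)):
    """streams: list of dicts with 'reputation', 'quality', 'search_match' in [0, 1]."""
    w_rep, w_qual, w_match = weights

    def score(s):
        return (w_rep * s.get("reputation", 0.0)
                + w_qual * s.get("quality", 0.0)
                + w_match * s.get("search_match", 0.0))

    return sorted(streams, key=score, reverse=True)   # best candidates first
```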
In this description, when reference is made to content, this includes any items or data and not just video content.
All the examples and embodiments described herein may be implemented as processes in software. When implemented as processes in software, the processes (or methods) may be provided as executable code which, when run on a device having computer capability, implements a process or method as described. The executable code may be stored on a computer device, or may be stored on a memory and may be connected to or downloaded to a computer device.
Examples and embodiments are described herein, and any part of any example or embodiment may be combined with any part of any other example or embodiment. Parts of examples or embodiments are not limited to being implemented in combination with a part of any other example or embodiment described. Features described are not limited to being only in the combination as presented.
Claims
1. A system for providing streaming services, comprising :
a plurality of capture devices, each for capturing data and providing a captured data stream; and
a server, for receiving the plurality of captured data streams;
wherein each capture device is configured to generate metadata for the captured data, and transmit said metadata to the server.
2. The system of claim 1 wherein the metadata is transmitted to the server in the captured data stream.
3. The system of claim 1 wherein the metadata is transmitted to the server on one or more metadata streams additional to the captured data stream.
4. The system of claim 2 or claim 3 wherein the metadata is synchronised with the associated captured data for transmission from the capture device.
5. The system of any one of claims 1 to 4 wherein the metadata and the captured data are associated with a common time line, wherein the server determines the time line and synchronises the captured data and the metadata based on the time line.
6. The system of claim 5 wherein the server determines the timeline from the captured data stream.
7. The system of any one of claim 5 or claim 6 when dependent on claim 3 or 4 wherein the server determines the time line from the one or more metadata streams.
8. The system of any preceding claim wherein the capture devices communicate with a further device or server in order to obtain additional data for the captured data stream.
9. The system of claim 8 wherein the capture device communicates with a further device or server to transcribe the captured data stream.
10. The system of claim 8 or claim 9 wherein the further device or server provides a geo-location for the capture device .
11. The system of any preceding claim further comprising a viewing device, the server outputting a viewing stream to the viewing device.
12. The system of claim 11, wherein one or more captured data streams are routed to the viewing device in dependence on the metadata associated with the captured data streams.
13. The system of claim 11 or claim 12 wherein the viewing device is associated with a moderator of content or an editor of content.
14. A system for providing streaming services, comprising :
a plurality of capture devices, each for capturing data and providing a captured data stream; and
a server, for receiving the plurality of captured data streams and outputting at least one output stream, wherein the server is configured to dynamically group the captured data streams in dependence on metadata associated with the captured data streams.
15. The system of claim 14 wherein the metadata for a captured data stream is received from the associated capture device.
16. The system of claim 14 wherein the metadata for a captured data stream is determined at the server.
17. The system of any one of claims 14 to 16 wherein the server is configured to dynamically group the streams in dependence on a current definition of a group.
18. The system of any one of claims 14 to 17 wherein the server is configured to dynamically group the streams in dependence on the current metadata of the data streams.
19. The system of any one of claims 14 to 18 wherein the captured data streams allocated to a group are output to an editing device.
20. The system of claim 19 wherein the editing device controls the output data stream from the server for the group.
21. The system of any one of claims 14 to 20 wherein the captured data streams are prioritised in dependence on the metadata associated with the captured data streams of the group.
22. The system of claim 21 wherein the captured data streams within each group are prioritised.
23. The system of any one of claim 14 to 22 wherein control data to be applied to a captured data stream is provided from an external source.
24. The system of claim 23 wherein the control data comprises a set of rules.
25. The system of claim 24, wherein the set of rules define a group.
26. The system of claim 24 or claim 25, wherein the set of rules is used to allocate data to a group.
27. A system for providing streaming services, comprising :
a plurality of capture devices, each for capturing data and providing a captured data stream; and
a server, for receiving the plurality of captured data streams and outputting at least one output stream, wherein the server is configured to dynamically prioritise the captured data streams in dependence on metadata associated with the captured data streams.
28. The system of claim 27 wherein the server is configured to dynamically prioritise the streams in dependence on a current definition for prioritisation .
29. The system of claim 27 or claim 28 wherein the server is configured to dynamically prioritise a data stream in dependence on the current metadata of that data stream.
30. The system of any one of claims 27 to 29 wherein the captured data streams are grouped, and then within each group the data streams are prioritised in dependence on the metadata associated with the captured data streams.
31. The system of any one of claims 27 to 30 wherein the priority of a captured data stream is used to determine the output of that captured data stream from the server.
32. The system of any one of claims 27 to 31 wherein the metadata for a captured data stream additionally includes a prioritisation score.
33. The system of claim 32 wherein the prioritisation score for a captured data stream is dynamic.
34. The system of claim 32 or claim 33 wherein the prioritisation score is based on a reputation of a user associated with the capture device.
35. The system of any one of claims 32 to 34 wherein the metadata for a captured data stream additionally includes
feedback data from a device receiving the output stream from the server.
36. The system of any one of claims 32 to 35, the device having made a request for content from the server, the prioritisation score being indicative of the relevance of the captured data stream to that request.
37. The system of claim 35 or claim 36 wherein the feedback is used to adjust a prioritisation score of the captured device data stream.
38. The system of any one of claims 35 to 37 wherein the prioritisation score is a viewer rating.
39. The system of any one of claims 27 to 38 wherein the metadata for a captured data stream additionally includes feedforward data based on the capture device from which the captured data stream is provided.
40. The system of claim 39 wherein the feedforward data is used to adjust a prioritisation score of the captured data stream.
41. The system of claim 40 wherein the prioritisation score is a capture device rating.
42. The system of any one of claims 14 to 41 wherein the captured data streams are edited in dependence on their priority.
43. The system of claim 42 wherein the server edits captured data stream based on a set of rules.
44. The system of claim 43, wherein the set of rules apply to a group.
45. The system of claim 43 or 44, wherein the set of rules is used to allocate a data stream to a group in dependence on the metadata of the data stream.
46. The system of any one of claims 42 to 45 wherein the server additionally edits captured data streams in dependence on a received control signal.
47. The system of any one of claims 15 to 46 wherein the metadata for a captured data stream is received from the associated capture device.
48. The system of any one of claims 15 to 46 wherein the metadata for a captured data stream is determined at the server .
49. The system of any one of claims 15 to 48 wherein the server edits the captured data stream by applying an overlay to the captured data stream.
50. The system of any one of claims 15 to 49 wherein the server groups the captured data streams in dependence on the metadata associated with the captured data streams.
51. The system of claim 50 wherein the server edits the captured data stream by applying an overlay to all captured data streams allocated to a given group.
52. The system of claim 49 or claim 51 wherein the applied overlay indicates a branding.
53. The system of claim 49 or claim 51 wherein the applied overlay indicates a rights assignment of the content of the captured data stream.
54. A method for providing streaming services, comprising :
capturing data and providing a captured data stream at each of a plurality of capture devices; and
receiving the plurality of captured data streams at a server;
wherein each capture device is configured to generate metadata for the captured data, and transmit said metadata to the server.
55. The method of claim 54 further comprising transmitting the metadata to the server in the captured data stream.
56. The method of claim 54 further comprising transmitting the metadata to the server on one or more metadata streams additional to the captured data stream.
57. The method of claim 55 or claim 56 further comprising synchronising the metadata with the associated captured data for transmission from the capture device.
58. The method of any one of claims 54 to 57 wherein the metadata and the captured data are associated with a common time line, wherein the server determines the time line and synchronises the captured data and the metadata based on the time line.
59. The method of claim 58 wherein the server determines the timeline from the captured data stream.
60. The method of any one of claim 58 or claim 59 when dependent on claim 54 or 55 further comprising determining the time line from the one or more metadata streams .
61. The method of any one of claims 54 to 60 further comprising communicating with a further device or server in order to obtain additional data for the captured data stream.
62. The method of claim 61 further comprising communicating with a further device or server to transcribe the captured data stream.
63. The method of claim 61 or claim 62 further comprising providing a geo-location for the capture device .
64. The method of any one of claims 54 to 63 further comprising a viewing device, the server outputting a viewing stream to the viewing device.
65. The method of claim 64, further comprising routing one or more captured data streams to the viewing device in dependence on the metadata associated with the captured data streams.
66. The method of claim 64 or claim 65 wherein the viewing device is associated with a moderator of content or an editor of content.
67. A method for providing streaming services, comprising :
capturing data and providing a captured data stream at a plurality of capture devices; and
receiving the plurality of captured data streams and outputting at least one output stream at a server,
wherein the server is configured to dynamically group the captured data streams in dependence on metadata associated with the captured data streams.
68. The method of claim 67 further comprising receiving the metadata for a captured data stream from the associated capture device.
69. The method of claim 67 further comprising determining the metadata for a captured data stream at the server.
70. The method of any one of claims 67 to 69 further comprising configuring the server to dynamically group the streams in dependence on a current definition of a group .
71. The method of any one of claims 67 to 70 further comprising configuring the server to dynamically group the streams in dependence on the current metadata of the data streams.
72. The method of any one of claims 67 to 71 further comprising allocating the captured data streams to a group and outputting the captured data streams to an editing device.
73. The method of claim 72 further comprising controlling the output data stream from the server for the group by the editing device.
74. The method of any one of claims 67 to 73 further comprising prioritising the captured data streams in dependence on the metadata associated with the captured data streams of the group.
75. The method of claim 74 further comprising prioritising the captured data streams within each group.
76. The method of any one of claims 67 to 75 wherein control data to be applied to a captured data stream is provided from an external source.
77. The method of claim 76 wherein the control data comprises a set of rules.
78. The method of claim 77, wherein the set of rules define a group.
79. The method of claim 77 or claim 78, wherein the set of rules is used to allocate data to a group.
80. A method for providing streaming services, comprising:
capturing data and providing a captured data stream at a plurality of capture devices; and
receiving the plurality of captured data streams and outputting at least one output stream at a server,
wherein the server is configured to dynamically prioritise the captured data streams in dependence on metadata associated with the captured data streams.
81. The method of claim 80 wherein the server is configured to dynamically prioritise the streams in dependence on a current definition for prioritisation.
82. The method of claim 80 or claim 81 further comprising configuring the server to dynamically prioritise a data stream in dependence on the current metadata of that data stream.
83. The method of any one of claims 80 to 82 further comprising grouping the captured data streams, and then within each group the data streams are prioritised in dependence on the metadata associated with the captured data streams.
84. The method of any one of claims 80 to 83 further comprising using the priority of a captured data stream to determine the output of that captured data stream from the server.
85. The method of any one of claims 80 to 84 wherein the metadata for a captured data stream additionally includes a prioritisation score.
86. The method of claim 85 wherein the prioritisation score for a captured data stream is dynamic.
87. The method of claim 85 or claim 86 further comprising basing the prioritisation score on a reputation of a user associated with the capture device.
88. The method of any one of claims 85 to 87 wherein the metadata for a captured data stream additionally includes feedback data from a device receiving the output stream from the server.
89. The method of any one of claims 85 to 88, the device having made a request for content from the server, the method further comprising the prioritisation score being indicative of the relevance of the captured data stream to that request.
90. The method of claim 88 or claim 89 wherein the feedback is used to adjust a prioritisation score of the captured device data stream.
91. The method of any one of claims 85 to 90 wherein the prioritisation score is a viewer rating.
92. The method of any one of claims 80 to 91 wherein the metadata for a captured data stream additionally includes
feedforward data based on the capture device from which the captured data stream is provided.
93. The method of claim 92 further comprising using the feedforward data to adjust a prioritisation score of the captured data stream.
94. The method of claim 93 wherein the prioritisation score is a capture device rating.
95. The method of any one of claims 80 to 94 further comprising editing the captured data streams in dependence on their priority.
96. The method of claim 95 wherein the server edits captured data stream based on a set of rules.
97. The method of claim 96, wherein the set of rules apply to a group.
98. The method of claim 96 or 97, wherein the set of rules is used to allocate a data stream to a group in dependence on the metadata of the data stream.
99. The method of any one of claims 95 to 98 further comprising additionally editing captured data streams in dependence on a received control signal.
100. The method of any one of claims 79 to 99 wherein the metadata for a captured data stream is received from the associated capture device.
101. The method of any one of claims 79 to 99 wherein the metadata for a captured data stream is determined at the server .
102. The method of any one of claims 79 to 101 wherein the server edits the captured data stream by applying an overlay to the captured data stream.
103. The method of any one of claims 97 to 102 wherein the server groups the captured data streams in dependence on the metadata associated with the captured data streams .
104. The method of claim 103 wherein the server edits the captured data stream by applying an overlay to all captured data streams allocated to a given group.
105. The method of claim 103 or claim 104 wherein the applied overlay indicates a branding.
106. The method of claim 104 or claim 105 wherein the applied overlay indicates a rights assignment of the content of the captured data stream.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP16729572.4A EP3298791A1 (en) | 2015-06-15 | 2016-06-15 | Media streaming |
US15/736,891 US11330316B2 (en) | 2015-06-15 | 2016-06-15 | Media streaming |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562175878P | 2015-06-15 | 2015-06-15 | |
US62/175,878 | 2015-06-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016202890A1 true WO2016202890A1 (en) | 2016-12-22 |
Family
ID=56132942
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2016/063800 WO2016202886A1 (en) | 2015-06-15 | 2016-06-15 | Synchronisation of streamed content |
PCT/EP2016/063802 WO2016202887A1 (en) | 2015-06-15 | 2016-06-15 | Providing low & high quality streams |
PCT/EP2016/063804 WO2016202889A1 (en) | 2015-06-15 | 2016-06-15 | Providing extracts of streamed content |
PCT/EP2016/063806 WO2016202890A1 (en) | 2015-06-15 | 2016-06-15 | Media streaming |
PCT/EP2016/063797 WO2016202884A1 (en) | 2015-06-15 | 2016-06-15 | Controlling delivery of captured streams |
PCT/EP2016/063799 WO2016202885A1 (en) | 2015-06-15 | 2016-06-15 | Processing content streaming |
PCT/EP2016/063803 WO2016202888A1 (en) | 2015-06-15 | 2016-06-15 | Providing streamed content responsive to request |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2016/063800 WO2016202886A1 (en) | 2015-06-15 | 2016-06-15 | Synchronisation of streamed content |
PCT/EP2016/063802 WO2016202887A1 (en) | 2015-06-15 | 2016-06-15 | Providing low & high quality streams |
PCT/EP2016/063804 WO2016202889A1 (en) | 2015-06-15 | 2016-06-15 | Providing extracts of streamed content |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2016/063797 WO2016202884A1 (en) | 2015-06-15 | 2016-06-15 | Controlling delivery of captured streams |
PCT/EP2016/063799 WO2016202885A1 (en) | 2015-06-15 | 2016-06-15 | Processing content streaming |
PCT/EP2016/063803 WO2016202888A1 (en) | 2015-06-15 | 2016-06-15 | Providing streamed content responsive to request |
Country Status (3)
Country | Link |
---|---|
US (7) | US11330316B2 (en) |
EP (7) | EP3308548A1 (en) |
WO (7) | WO2016202886A1 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018183119A1 (en) * | 2017-03-27 | 2018-10-04 | Snap Inc. | Generating a stitched data stream |
US10311916B2 (en) | 2014-12-19 | 2019-06-04 | Snap Inc. | Gallery of videos set to an audio time line |
US10448201B1 (en) | 2014-06-13 | 2019-10-15 | Snap Inc. | Prioritization of messages within a message collection |
US10582277B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US10581782B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US10701462B2 (en) | 2018-04-12 | 2020-06-30 | International Business Machines Corporation | Generating video montage of an event |
US10893055B2 (en) | 2015-03-18 | 2021-01-12 | Snap Inc. | Geo-fence authorization provisioning |
US10990697B2 (en) | 2014-05-28 | 2021-04-27 | Snap Inc. | Apparatus and method for automated privacy protection in distributed images |
US11249617B1 (en) | 2015-01-19 | 2022-02-15 | Snap Inc. | Multichannel system |
US11372608B2 (en) | 2014-12-19 | 2022-06-28 | Snap Inc. | Gallery of messages from individuals with a shared interest |
US11445252B1 (en) * | 2021-07-08 | 2022-09-13 | Meta Platforms, Inc. | Prioritizing encoding of video data received by an online system to maximize visual quality while accounting for fixed computing capacity |
US11468615B2 (en) | 2015-12-18 | 2022-10-11 | Snap Inc. | Media overlay publication system |
US11496544B2 (en) | 2015-05-05 | 2022-11-08 | Snap Inc. | Story and sub-story navigation |
US11741136B2 (en) | 2014-09-18 | 2023-08-29 | Snap Inc. | Geolocation-based pictographs |
US12113764B2 (en) | 2014-10-02 | 2024-10-08 | Snap Inc. | Automated management of ephemeral message collections |
Families Citing this family (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008109172A1 (en) | 2007-03-07 | 2008-09-12 | Wiklof Christopher A | Recorder with retrospective capture |
US10847184B2 (en) * | 2007-03-07 | 2020-11-24 | Knapp Investment Company Limited | Method and apparatus for initiating a live video stream transmission |
GB201415917D0 (en) * | 2014-09-09 | 2014-10-22 | Piksel Inc | Automated compliance management |
EP3308548A1 (en) * | 2015-06-15 | 2018-04-18 | Piksel, Inc. | Processing content streaming |
US9955061B2 (en) * | 2016-08-03 | 2018-04-24 | International Business Machines Corporation | Obtaining camera device image data representing an event |
US10798387B2 (en) * | 2016-12-12 | 2020-10-06 | Netflix, Inc. | Source-consistent techniques for predicting absolute perceptual video quality |
CN107257494B (en) * | 2017-01-06 | 2020-12-11 | 深圳市纬氪智能科技有限公司 | Sports event shooting method and shooting system thereof |
KR102650650B1 (en) * | 2017-01-20 | 2024-03-25 | 한화비전 주식회사 | Video management system and video management method |
US10986384B2 (en) * | 2017-04-14 | 2021-04-20 | Facebook, Inc. | Modifying video data captured by a client device based on a request received by a different client device receiving the captured video data |
US10404840B1 (en) * | 2018-04-27 | 2019-09-03 | Banjo, Inc. | Ingesting streaming signals |
US10257058B1 (en) * | 2018-04-27 | 2019-04-09 | Banjo, Inc. | Ingesting streaming signals |
US10581945B2 (en) | 2017-08-28 | 2020-03-03 | Banjo, Inc. | Detecting an event from signal data |
US10552683B2 (en) * | 2018-04-27 | 2020-02-04 | Banjo, Inc. | Ingesting streaming signals |
US10324948B1 (en) | 2018-04-27 | 2019-06-18 | Banjo, Inc. | Normalizing ingested signals |
US11025693B2 (en) | 2017-08-28 | 2021-06-01 | Banjo, Inc. | Event detection from signal data removing private information |
US20190251138A1 (en) | 2018-02-09 | 2019-08-15 | Banjo, Inc. | Detecting events from features derived from multiple ingested signals |
US10313413B2 (en) | 2017-08-28 | 2019-06-04 | Banjo, Inc. | Detecting events from ingested communication signals |
WO2019051605A1 (en) | 2017-09-15 | 2019-03-21 | Imagine Communications Corp. | Systems and methods for playout of fragmented video content |
US20190104325A1 (en) * | 2017-10-04 | 2019-04-04 | Livecloudtv, Llc | Event streaming with added content and context |
US10931989B2 (en) * | 2017-12-01 | 2021-02-23 | Intel Corporation | Multi-output synchronization |
US11051049B2 (en) * | 2018-02-06 | 2021-06-29 | Phenix Real Time Solutions, Inc. | Simulating a local experience by live streaming sharable viewpoints of a live event |
US10324935B1 (en) | 2018-02-09 | 2019-06-18 | Banjo, Inc. | Presenting event intelligence and trends tailored per geographic area granularity |
US10313865B1 (en) | 2018-04-27 | 2019-06-04 | Banjo, Inc. | Validating and supplementing emergency call information |
US10261846B1 (en) | 2018-02-09 | 2019-04-16 | Banjo, Inc. | Storing and verifying the integrity of event related data |
US10970184B2 (en) | 2018-02-09 | 2021-04-06 | Banjo, Inc. | Event detection removing private information |
US10585724B2 (en) | 2018-04-13 | 2020-03-10 | Banjo, Inc. | Notifying entities of relevant events |
US10674197B2 (en) * | 2018-02-28 | 2020-06-02 | At&T Intellectual Property I, L.P. | Media content distribution system and methods for use therewith |
US10966001B2 (en) * | 2018-04-05 | 2021-03-30 | Tvu Networks Corporation | Remote cloud-based video production system in an environment where there is network delay |
US11463747B2 (en) | 2018-04-05 | 2022-10-04 | Tvu Networks Corporation | Systems and methods for real time control of a remote video production with multiple streams |
US11212431B2 (en) | 2018-04-06 | 2021-12-28 | Tvu Networks Corporation | Methods and apparatus for remotely controlling a camera in an environment with communication latency |
US10327116B1 (en) | 2018-04-27 | 2019-06-18 | Banjo, Inc. | Deriving signal location from signal content |
US10353934B1 (en) | 2018-04-27 | 2019-07-16 | Banjo, Inc. | Detecting an event from signals in a listening area |
US10904720B2 (en) | 2018-04-27 | 2021-01-26 | safeXai, Inc. | Deriving signal location information and removing private information from it |
US10630990B1 (en) | 2018-05-01 | 2020-04-21 | Amazon Technologies, Inc. | Encoder output responsive to quality metric information |
US10958987B1 (en) * | 2018-05-01 | 2021-03-23 | Amazon Technologies, Inc. | Matching based on video data |
WO2019239396A1 (en) * | 2018-06-12 | 2019-12-19 | Kliots Shapira Ela | Method and system for automatic real-time frame segmentation of high resolution video streams into constituent features and modifications of features in each frame to simultaneously create multiple different linear views from same video source |
US10929404B2 (en) * | 2018-07-27 | 2021-02-23 | Facebook, Inc. | Streaming joins with synchronization via stream time estimations |
US10999292B2 (en) | 2018-08-24 | 2021-05-04 | Disney Enterprises, Inc. | Location-based restriction of content transmission |
US10972777B2 (en) * | 2018-10-24 | 2021-04-06 | At&T Intellectual Property I, L.P. | Method and apparatus for authenticating media based on tokens |
US11113270B2 (en) | 2019-01-24 | 2021-09-07 | EMC IP Holding Company LLC | Storing a non-ordered associative array of pairs using an append-only storage medium |
US10911827B2 (en) | 2019-03-08 | 2021-02-02 | Kiswe Mobile Inc. | Automatic rating of crowd-stream caller video |
CA3077449A1 (en) | 2019-04-04 | 2020-10-04 | Evertz Microsystems Ltd. | Systems and methods for determining delay of a plurality of media streams |
US11075971B2 (en) * | 2019-04-04 | 2021-07-27 | Evertz Microsystems Ltd. | Systems and methods for operating a media transmission network |
US11170782B2 (en) | 2019-04-08 | 2021-11-09 | Speech Cloud, Inc | Real-time audio transcription, video conferencing, and online collaboration system and methods |
US10674118B1 (en) * | 2019-05-01 | 2020-06-02 | CYBERTOKA Ltd. | Method and system for discreetly accessing security camera systems |
US11024190B1 (en) | 2019-06-04 | 2021-06-01 | Freedom Trail Realty School, Inc. | Online classes and learning compliance systems and methods |
US10582343B1 (en) | 2019-07-29 | 2020-03-03 | Banjo, Inc. | Validating and supplementing emergency call information |
US11438545B2 (en) | 2019-12-23 | 2022-09-06 | Carrier Corporation | Video image-based media stream bandwidth reduction |
US11463651B2 (en) | 2019-12-23 | 2022-10-04 | Carrier Corporation | Video frame-based media stream bandwidth reduction |
US11076197B1 (en) * | 2020-03-11 | 2021-07-27 | ViuCom Corp. | Synchronization of multiple video-on-demand streams and methods of broadcasting and displaying multiple concurrent live streams |
TWI822420B (en) * | 2020-04-17 | 2023-11-11 | 華南商業銀行股份有限公司 | Interactive system with handheld donation device and operating method thereof |
TWI779279B (en) * | 2020-04-17 | 2022-10-01 | 華南商業銀行股份有限公司 | Interactive system and method thereof |
TWI822419B (en) * | 2020-04-17 | 2023-11-11 | 華南商業銀行股份有限公司 | Portable and interactive system and operating method thereof |
US11604759B2 (en) | 2020-05-01 | 2023-03-14 | EMC IP Holding Company LLC | Retention management for data streams |
US11599546B2 (en) | 2020-05-01 | 2023-03-07 | EMC IP Holding Company LLC | Stream browser for data streams |
US11340834B2 (en) | 2020-05-22 | 2022-05-24 | EMC IP Holding Company LLC | Scaling of an ordered event stream |
US11360992B2 (en) * | 2020-06-29 | 2022-06-14 | EMC IP Holding Company LLC | Watermarking of events of an ordered event stream |
US11599420B2 (en) | 2020-07-30 | 2023-03-07 | EMC IP Holding Company LLC | Ordered event stream event retention |
US11340792B2 (en) | 2020-07-30 | 2022-05-24 | EMC IP Holding Company LLC | Ordered event stream merging |
US11513871B2 (en) | 2020-09-30 | 2022-11-29 | EMC IP Holding Company LLC | Employing triggered retention in an ordered event stream storage system |
US11354444B2 (en) | 2020-09-30 | 2022-06-07 | EMC IP Holding Company LLC | Access control for an ordered event stream storage system |
US11755555B2 (en) | 2020-10-06 | 2023-09-12 | EMC IP Holding Company LLC | Storing an ordered associative array of pairs using an append-only storage medium |
US11323497B2 (en) | 2020-10-07 | 2022-05-03 | EMC IP Holding Company LLC | Expiration of data streams for application programs in a streaming data storage platform |
US11599293B2 (en) | 2020-10-14 | 2023-03-07 | EMC IP Holding Company LLC | Consistent data stream replication and reconstruction in a streaming data storage platform |
US11354054B2 (en) | 2020-10-28 | 2022-06-07 | EMC IP Holding Company LLC | Compaction via an event reference in an ordered event stream storage system |
US11347568B1 (en) | 2020-12-18 | 2022-05-31 | EMC IP Holding Company LLC | Conditional appends in an ordered event stream storage system |
US11816065B2 (en) | 2021-01-11 | 2023-11-14 | EMC IP Holding Company LLC | Event level retention management for data streams |
US12099513B2 (en) | 2021-01-19 | 2024-09-24 | EMC IP Holding Company LLC | Ordered event stream event annulment in an ordered event stream storage system |
US11526297B2 (en) | 2021-01-19 | 2022-12-13 | EMC IP Holding Company LLC | Framed event access in an ordered event stream storage system |
US20220253892A1 (en) * | 2021-02-11 | 2022-08-11 | Allen Berube | Live content sharing within a social or non-social networking environment with rating and compensation system |
US11818408B2 (en) * | 2021-03-03 | 2023-11-14 | James R. Jeffries | Mechanism to automate the aggregation of independent videos for integration |
US11194638B1 (en) | 2021-03-12 | 2021-12-07 | EMC IP Holding Company LLC | Deferred scaling of an ordered event stream |
US11740828B2 (en) | 2021-04-06 | 2023-08-29 | EMC IP Holding Company LLC | Data expiration for stream storages |
US12001881B2 (en) | 2021-04-12 | 2024-06-04 | EMC IP Holding Company LLC | Event prioritization for an ordered event stream |
US11954537B2 (en) | 2021-04-22 | 2024-04-09 | EMC IP Holding Company LLC | Information-unit based scaling of an ordered event stream |
US11513714B2 (en) | 2021-04-22 | 2022-11-29 | EMC IP Holding Company LLC | Migration of legacy data into an ordered event stream |
US11681460B2 (en) | 2021-06-03 | 2023-06-20 | EMC IP Holding Company LLC | Scaling of an ordered event stream based on a writer group characteristic |
US11418557B1 (en) * | 2021-06-16 | 2022-08-16 | Meta Platforms, Inc. | Systems and methods for automatically switching between media streams |
US11735282B2 (en) | 2021-07-22 | 2023-08-22 | EMC IP Holding Company LLC | Test data verification for an ordered event stream storage system |
US11971850B2 (en) | 2021-10-15 | 2024-04-30 | EMC IP Holding Company LLC | Demoted data retention via a tiered ordered event stream data storage system |
US12013880B2 (en) * | 2021-10-18 | 2024-06-18 | Splunk Inc. | Dynamic resolution estimation for a detector |
EP4300978A1 (en) * | 2022-07-01 | 2024-01-03 | Genius Sports SS, LLC | Automatic alignment of video streams |
US20240048807A1 (en) * | 2022-08-08 | 2024-02-08 | Phenix Real Time Solutions, Inc. | Leveraging insights from real-time media stream in delayed versions |
Family Cites Families (145)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5852435A (en) * | 1996-04-12 | 1998-12-22 | Avid Technology, Inc. | Digital multimedia editing and data management system |
US5991799A (en) * | 1996-12-20 | 1999-11-23 | Liberate Technologies | Information retrieval system using an internet multiplexer to focus user selection |
US7110025B1 (en) * | 1997-05-28 | 2006-09-19 | Eastman Kodak Company | Digital camera for capturing a sequence of full and reduced resolution digital images and storing motion and still digital image data |
US6631522B1 (en) * | 1998-01-20 | 2003-10-07 | David Erdelyi | Method and system for indexing, sorting, and displaying a video database |
AR020608A1 (en) * | 1998-07-17 | 2002-05-22 | United Video Properties Inc | A METHOD AND A PROVISION TO SUPPLY A USER REMOTE ACCESS TO AN INTERACTIVE PROGRAMMING GUIDE BY A REMOTE ACCESS LINK |
TW463503B (en) * | 1998-08-26 | 2001-11-11 | United Video Properties Inc | Television chat system |
TW447221B (en) * | 1998-08-26 | 2001-07-21 | United Video Properties Inc | Television message system |
US8521546B2 (en) | 1998-09-25 | 2013-08-27 | Health Hero Network | Dynamic modeling and scoring risk assessment |
JP4218185B2 (en) * | 2000-05-23 | 2009-02-04 | ソニー株式会社 | Program recording / reproducing system, program recording / reproducing method, and program recording / reproducing apparatus |
US7782363B2 (en) * | 2000-06-27 | 2010-08-24 | Front Row Technologies, Llc | Providing multiple video perspectives of activities through a data network to a remote multimedia server for selective display by remote viewing audiences |
US6788333B1 (en) * | 2000-07-07 | 2004-09-07 | Microsoft Corporation | Panoramic video |
BRPI0113763B8 (en) * | 2000-09-08 | 2016-09-13 | Ack Ventures Holdings Llc | video interaction method |
US8205237B2 (en) * | 2000-09-14 | 2012-06-19 | Cox Ingemar J | Identifying works, using a sub-linear time search, such as an approximate nearest neighbor search, for initiating a work-based action, such as an action on the internet |
JP4534333B2 (en) * | 2000-10-10 | 2010-09-01 | ソニー株式会社 | How to collect server operating costs |
US6985669B1 (en) * | 2000-11-13 | 2006-01-10 | Sony Corporation | Method and system for electronic capture of user-selected segments of a broadcast data signal |
US20080059532A1 (en) * | 2001-01-18 | 2008-03-06 | Kazmi Syed N | Method and system for managing digital content, including streaming media |
US7849207B2 (en) * | 2001-01-18 | 2010-12-07 | Yahoo! Inc. | Method and system for managing digital content, including streaming media |
JP4765182B2 (en) * | 2001-01-19 | 2011-09-07 | ソニー株式会社 | Interactive television communication method and interactive television communication client device |
US7725918B2 (en) * | 2001-08-03 | 2010-05-25 | Ericsson Television Inc. | Interactive television with embedded universal time codes |
US20030061206A1 (en) * | 2001-09-27 | 2003-03-27 | Richard Qian | Personalized content delivery and media consumption |
ATE312381T1 (en) * | 2002-02-06 | 2005-12-15 | Koninkl Philips Electronics Nv | FAST HASH-BASED METADATA RETRIEVAL FOR MULTIMEDIA OBJECTS |
US7703116B1 (en) * | 2003-07-11 | 2010-04-20 | Tvworks, Llc | System and method for construction, delivery and display of iTV applications that blend programming information of on-demand and broadcast service offerings |
AU2003239385A1 (en) * | 2002-05-10 | 2003-11-11 | Richard R. Reisman | Method and apparatus for browsing using multiple coordinated device |
JP2004005309A (en) * | 2002-06-03 | 2004-01-08 | Matsushita Electric Ind Co Ltd | Content delivery system, and method, or recording medium or program for the same |
WO2004004351A1 (en) * | 2002-07-01 | 2004-01-08 | Microsoft Corporation | A system and method for providing user control over repeating objects embedded in a stream |
US7908625B2 (en) * | 2002-10-02 | 2011-03-15 | Robertson Neil C | Networked multimedia system |
US7506355B2 (en) * | 2002-11-22 | 2009-03-17 | Microsoft Corporation | Tracking end-user content viewing and navigation |
US9756349B2 (en) * | 2002-12-10 | 2017-09-05 | Sony Interactive Entertainment America Llc | User interface, system and method for controlling a video stream |
US20060031889A1 (en) * | 2002-12-11 | 2006-02-09 | Bennett James D | Video processing system with simultaneous multiple outputs each with unique formats |
WO2004068320A2 (en) * | 2003-01-27 | 2004-08-12 | Vincent Wen-Jeng Lue | Method and apparatus for adapting web contents to different display area dimensions |
JP3941700B2 (en) * | 2003-01-28 | 2007-07-04 | ソニー株式会社 | Information processing apparatus, information processing method, and computer program |
US20050081138A1 (en) * | 2003-09-25 | 2005-04-14 | Voss James S. | Systems and methods for associating an image with a video file in which the image is embedded |
JP2005173787A (en) * | 2003-12-09 | 2005-06-30 | Fujitsu Ltd | Image processor detecting/recognizing moving body |
US7379791B2 (en) * | 2004-08-03 | 2008-05-27 | Uscl Corporation | Integrated metrology systems and information and control apparatus for interaction with integrated metrology systems |
WO2006107350A1 (en) * | 2005-04-05 | 2006-10-12 | Thomson Licensing | Multimedia content distribution system and method for multiple dwelling unit |
US20070035612A1 (en) * | 2005-08-09 | 2007-02-15 | Korneluk Jose E | Method and apparatus to capture and compile information perceivable by multiple handsets regarding a single event |
WO2007064641A2 (en) * | 2005-11-29 | 2007-06-07 | Google Inc. | Social and interactive applications for mass media |
US20070130597A1 (en) * | 2005-12-02 | 2007-06-07 | Alcatel | Network based instant replay and time shifted playback |
US20070157281A1 (en) * | 2005-12-23 | 2007-07-05 | United Video Properties, Inc. | Interactive media guidance system having multiple devices |
US8228372B2 (en) * | 2006-01-06 | 2012-07-24 | Agile Sports Technologies, Inc. | Digital video editing system |
US7734579B2 (en) * | 2006-02-08 | 2010-06-08 | At&T Intellectual Property I, L.P. | Processing program content material |
US20080046915A1 (en) * | 2006-08-01 | 2008-02-21 | Sbc Knowledge Ventures, L.P. | System and method of providing community content |
US8712563B2 (en) * | 2006-10-24 | 2014-04-29 | Slacker, Inc. | Method and apparatus for interactive distribution of digital content |
JP4345805B2 (en) * | 2006-12-07 | 2009-10-14 | ソニー株式会社 | Imaging apparatus, display control method, and program |
WO2008109172A1 (en) * | 2007-03-07 | 2008-09-12 | Wiklof Christopher A | Recorder with retrospective capture |
US8955030B2 (en) * | 2007-03-23 | 2015-02-10 | Wi-Lan, Inc. | System and method for personal content access |
GB0709026D0 (en) * | 2007-05-10 | 2007-06-20 | Isis Innovation | High speed imaging with slow scan cameras using pixel level dynamic shuttering |
US20080313541A1 (en) * | 2007-06-14 | 2008-12-18 | Yahoo! Inc. | Method and system for personalized segmentation and indexing of media |
US8496177B2 (en) * | 2007-06-28 | 2013-07-30 | Hand Held Products, Inc. | Bar code reading terminal with video capturing mode |
JP2009017259A (en) * | 2007-07-05 | 2009-01-22 | Sony Corp | Electronic apparatus, content reproduction method, and program |
US8199196B2 (en) * | 2007-09-27 | 2012-06-12 | Alcatel Lucent | Method and apparatus for controlling video streams |
WO2009042858A1 (en) * | 2007-09-28 | 2009-04-02 | Gracenote, Inc. | Synthesizing a presentation of a multimedia event |
US20090119708A1 (en) * | 2007-11-07 | 2009-05-07 | Comcast Cable Holdings, Llc | User interface display without output device rendering |
US8136133B2 (en) * | 2007-11-13 | 2012-03-13 | Walker Digital, Llc | Methods and systems for broadcasting modified live media |
US20090132583A1 (en) * | 2007-11-16 | 2009-05-21 | Fuji Xerox Co., Ltd. | System and method for capturing, annotating, and linking media |
US8307395B2 (en) * | 2008-04-22 | 2012-11-06 | Porto Technology, Llc | Publishing key frames of a video content item being viewed by a first user to one or more second users |
US8239893B2 (en) * | 2008-05-12 | 2012-08-07 | Microsoft Corporation | Custom channels |
US20090323802A1 (en) * | 2008-06-27 | 2009-12-31 | Walters Clifford A | Compact camera-mountable video encoder, studio rack-mountable video encoder, configuration device, and broadcasting network utilizing the same |
JP5210063B2 (en) * | 2008-07-10 | 2013-06-12 | パナソニック株式会社 | Audio image recording device |
US20100077289A1 (en) * | 2008-09-08 | 2010-03-25 | Eastman Kodak Company | Method and Interface for Indexing Related Media From Multiple Sources |
WO2010029450A1 (en) * | 2008-09-15 | 2010-03-18 | Nxp B.V. | Systems and methods for providing fast video channel switching |
US9049477B2 (en) * | 2008-11-13 | 2015-06-02 | At&T Intellectual Property I, Lp | Apparatus and method for managing media content |
US20100131385A1 (en) * | 2008-11-25 | 2010-05-27 | Opanga Networks, Llc | Systems and methods for distribution of digital media content utilizing viral marketing over social networks |
US20100131998A1 (en) * | 2008-11-26 | 2010-05-27 | At&T Intellectual Property I, L.P. | Multimedia Frame Capture |
WO2010080639A2 (en) * | 2008-12-18 | 2010-07-15 | Band Crashers, Llc | Media systems and methods for providing synchronized multiple streaming camera signals of an event |
US8547482B2 (en) * | 2009-01-26 | 2013-10-01 | Sony Corporation | Display system and method for a freeze frame feature for streaming video |
US8767081B2 (en) * | 2009-02-23 | 2014-07-01 | Microsoft Corporation | Sharing video data associated with the same event |
US20100225811A1 (en) * | 2009-03-05 | 2010-09-09 | Nokia Corporation | Synchronization of Content from Multiple Content Sources |
US20100251292A1 (en) * | 2009-03-27 | 2010-09-30 | Sudharshan Srinivasan | Smartphone for interactive television |
KR20100123075A (en) * | 2009-05-14 | 2010-11-24 | 삼성전자주식회사 | Appratus and method for supporting scalability scheme in a video telephony system |
US8929331B2 (en) * | 2009-05-22 | 2015-01-06 | Broadcom Corporation | Traffic management in a hybrid femtocell/WLAN wireless enterprise network |
US8872910B1 (en) * | 2009-06-04 | 2014-10-28 | Masoud Vaziri | Method and apparatus for a compact and high resolution eye-view recorder |
US8214862B1 (en) * | 2009-07-13 | 2012-07-03 | Sprint Communications Company L.P. | Conserving bandwidth by restricting videos communicated in a wireless telecommunications network |
US20110052136A1 (en) * | 2009-09-01 | 2011-03-03 | Video Clarity, Inc. | Pattern-based monitoring of media synchronization |
US20110066744A1 (en) * | 2009-09-17 | 2011-03-17 | General Instrument Corporation | Transitioning between Multiple Services in an MPEG Stream |
US20110096828A1 (en) * | 2009-09-22 | 2011-04-28 | Qualcomm Incorporated | Enhanced block-request streaming using scalable encoding |
US20110068899A1 (en) * | 2009-09-23 | 2011-03-24 | Maksim Ioffe | Method and System for Controlling Electronic Devices |
US20110069179A1 (en) * | 2009-09-24 | 2011-03-24 | Microsoft Corporation | Network coordinated event capture and image storage |
US8644354B2 (en) * | 2009-10-14 | 2014-02-04 | Verizon Patent And Licensing Inc. | Methods and systems for automatically registering a mobile phone device with one or more media content access devices |
US9519728B2 (en) * | 2009-12-04 | 2016-12-13 | Time Warner Cable Enterprises Llc | Apparatus and methods for monitoring and optimizing delivery of content in a network |
US8806341B2 (en) * | 2009-12-10 | 2014-08-12 | Hulu, LLC | Method and apparatus for navigating a media program via a histogram of popular segments |
US20110191446A1 (en) * | 2010-01-29 | 2011-08-04 | Clarendon Foundation, Inc. | Storing and streaming media content |
US20110191439A1 (en) * | 2010-01-29 | 2011-08-04 | Clarendon Foundation, Inc. | Media content ingestion |
US20110202967A1 (en) * | 2010-02-12 | 2011-08-18 | Voice The Game, Llc | Apparatus and Method to Broadcast Layered Audio and Video Over Live Streaming Activities |
US20110206351A1 (en) * | 2010-02-25 | 2011-08-25 | Tal Givoli | Video processing system and a method for editing a video asset |
EP2403236B1 (en) * | 2010-06-29 | 2013-12-11 | Stockholms Universitet Holding AB | Mobile video mixing system |
US9633656B2 (en) * | 2010-07-27 | 2017-04-25 | Sony Corporation | Device registration process from second display |
US9401178B2 (en) * | 2010-08-26 | 2016-07-26 | Blast Motion Inc. | Event analysis system |
EP2434751A3 (en) * | 2010-09-28 | 2014-06-18 | Nokia Corporation | Method and apparatus for determining roles for media generation and compilation |
US20130212507A1 (en) * | 2010-10-11 | 2013-08-15 | Teachscape, Inc. | Methods and systems for aligning items of evidence to an evaluation framework |
US8805165B2 (en) * | 2010-11-09 | 2014-08-12 | Kodak Alaris Inc. | Aligning and summarizing different photo streams |
US8380039B2 (en) * | 2010-11-09 | 2013-02-19 | Eastman Kodak Company | Method for aligning different photo streams |
US20120114307A1 (en) * | 2010-11-09 | 2012-05-10 | Jianchao Yang | Aligning and annotating different photo streams |
GB2486002A (en) * | 2010-11-30 | 2012-06-06 | Youview Tv Ltd | Media Content Provision |
WO2012100114A2 (en) * | 2011-01-20 | 2012-07-26 | Kogeto Inc. | Multiple viewpoint electronic media system |
EP2479684B1 (en) * | 2011-01-21 | 2013-09-25 | NTT DoCoMo, Inc. | Method and evaluation server for evaluating a plurality of videos |
US8621355B2 (en) * | 2011-02-02 | 2013-12-31 | Apple Inc. | Automatic synchronization of media clips |
AU2011202182B1 (en) * | 2011-05-11 | 2011-10-13 | Frequency Ip Holdings, Llc | Creation and presentation of selective digital content feeds |
WO2012150602A1 (en) * | 2011-05-03 | 2012-11-08 | Yogesh Chunilal Rathod | A system and method for dynamically monitoring, recording, processing, attaching dynamic, contextual & accessible active links & presenting of physical or digital activities, actions, locations, logs, life stream, behavior & status |
US8970704B2 (en) * | 2011-06-07 | 2015-03-03 | Verizon Patent And Licensing Inc. | Network synchronized camera settings |
US9026596B2 (en) * | 2011-06-16 | 2015-05-05 | Microsoft Technology Licensing, Llc | Sharing of event media streams |
KR102044015B1 (en) * | 2011-08-02 | 2019-11-12 | (주)휴맥스 | Method of providing contents management list with associated media contents and apparatus for performing the same |
US8776145B2 (en) * | 2011-09-16 | 2014-07-08 | Elwha Llc | In-transit electronic media with location-based content |
US9280545B2 (en) * | 2011-11-09 | 2016-03-08 | Microsoft Technology Licensing, Llc | Generating and updating event-based playback experiences |
US20150222815A1 (en) * | 2011-12-23 | 2015-08-06 | Nokia Corporation | Aligning videos representing different viewpoints |
US8625027B2 (en) * | 2011-12-27 | 2014-01-07 | Home Box Office, Inc. | System and method for verification of media content synchronization |
US8966530B2 (en) * | 2011-12-29 | 2015-02-24 | Rovi Guides, Inc. | Systems and methods for presenting multiple assets in an interactive media guidance application |
US8768142B1 (en) * | 2012-01-26 | 2014-07-01 | Ambarella, Inc. | Video editing with connected high-resolution video camera and video cloud server |
US8645485B1 (en) * | 2012-01-30 | 2014-02-04 | Google Inc. | Social based aggregation of related media content |
JP2013214346A (en) * | 2012-03-09 | 2013-10-17 | Panasonic Corp | Imaging device and program |
US8719884B2 (en) * | 2012-06-05 | 2014-05-06 | Microsoft Corporation | Video identification and search |
US8938089B1 (en) * | 2012-06-26 | 2015-01-20 | Google Inc. | Detection of inactive broadcasts during live stream ingestion |
US20140007154A1 (en) | 2012-06-29 | 2014-01-02 | United Video Properties, Inc. | Systems and methods for providing individualized control of media assets |
US20140013342A1 (en) * | 2012-07-05 | 2014-01-09 | Comcast Cable Communications, Llc | Media Content Redirection |
CN102802021B (en) | 2012-08-08 | 2016-01-20 | 无锡天脉聚源传媒科技有限公司 | Method and device for editing multimedia data |
US8682144B1 (en) * | 2012-09-17 | 2014-03-25 | Google Inc. | Method for synchronizing multiple audio signals |
US10291725B2 (en) * | 2012-11-21 | 2019-05-14 | H4 Engineering, Inc. | Automatic cameraman, automatic recording system and automatic recording network |
US9532095B2 (en) * | 2012-11-29 | 2016-12-27 | Fanvision Entertainment Llc | Mobile device with smart gestures |
US9129640B2 (en) * | 2012-12-12 | 2015-09-08 | Crowdflik, Inc. | Collaborative digital video platform that enables synchronized capture, curation and editing of multiple user-generated videos |
US9135956B2 (en) * | 2012-12-18 | 2015-09-15 | Realtek Semiconductor Corp. | Method and computer program product for establishing playback timing correlation between different contents to be playbacked |
EP2775731A1 (en) * | 2013-03-05 | 2014-09-10 | British Telecommunications public limited company | Provision of video data |
US20140328578A1 (en) | 2013-04-08 | 2014-11-06 | Thomas Shafron | Camera assembly, system, and method for intelligent video capture and streaming |
US9454789B2 (en) * | 2013-05-03 | 2016-09-27 | Digimarc Corporation | Watermarking and signal recognition for managing and sharing captured content, metadata discovery and related arrangements |
GB2515563A (en) * | 2013-06-28 | 2014-12-31 | F Secure Corp | Media sharing |
EP3014888A4 (en) * | 2013-06-28 | 2017-02-22 | INTEL Corporation | Live crowdsourced media streaming |
US10141022B2 (en) * | 2013-07-10 | 2018-11-27 | Htc Corporation | Method and electronic device for generating multiple point of view video |
US20150128174A1 (en) * | 2013-11-04 | 2015-05-07 | Broadcom Corporation | Selecting audio-video (av) streams associated with an event |
CN104904232A (en) * | 2013-12-26 | 2015-09-09 | 松下电器产业株式会社 | Video editing device |
US10521671B2 (en) * | 2014-02-28 | 2019-12-31 | Second Spectrum, Inc. | Methods and systems of spatiotemporal pattern recognition for video content development |
US9368151B2 (en) * | 2014-04-29 | 2016-06-14 | Evergig Music S.A.S.U. | Systems and methods for chronologically ordering digital media and approximating a timeline of an event |
US9294786B2 (en) * | 2014-07-07 | 2016-03-22 | International Business Machines Corporation | Coordination of video and/or audio recording |
CN105721763A (en) * | 2014-12-05 | 2016-06-29 | 深圳富泰宏精密工业有限公司 | System and method for composition of photos |
JP6741975B2 (en) * | 2014-12-09 | 2020-08-19 | パナソニックIpマネジメント株式会社 | Transmission method and transmission device |
WO2016105322A1 (en) * | 2014-12-25 | 2016-06-30 | Echostar Ukraine, L.L.C. | Simultaneously viewing multiple camera angles |
US20160292511A1 (en) * | 2015-03-31 | 2016-10-06 | Gopro, Inc. | Scene and Activity Identification in Video Summary Generation |
WO2016166764A1 (en) * | 2015-04-16 | 2016-10-20 | W.S.C. Sports Technologies Ltd. | System and method for creating and distributing multimedia content |
US9344751B1 (en) * | 2015-05-08 | 2016-05-17 | Istreamplanet Co. | Coordination of fault-tolerant video stream processing in cloud-based video streaming system |
US9554160B2 (en) * | 2015-05-18 | 2017-01-24 | Zepp Labs, Inc. | Multi-angle video editing based on cloud video sharing |
EP3308548A1 (en) * | 2015-06-15 | 2018-04-18 | Piksel, Inc. | Processing content streaming |
US10623801B2 (en) * | 2015-12-17 | 2020-04-14 | James R. Jeffries | Multiple independent video recording integration |
GB2546247A (en) * | 2016-01-05 | 2017-07-19 | Oclu Ltd | Video recording system and method |
US9992539B2 (en) * | 2016-04-05 | 2018-06-05 | Google Llc | Identifying viewing characteristics of an audience of a content channel |
US10388324B2 (en) * | 2016-05-31 | 2019-08-20 | Dropbox, Inc. | Synchronizing edits to low- and high-resolution versions of digital videos |
US10674187B2 (en) * | 2016-07-26 | 2020-06-02 | Facebook, Inc. | Systems and methods for shared broadcasting |
WO2019046905A1 (en) * | 2017-09-08 | 2019-03-14 | Jump Corporation Pty Ltd | Distributed camera network |
- 2016
- 2016-06-15 EP EP16732550.5A patent/EP3308548A1/en not_active Ceased
- 2016-06-15 US US15/736,891 patent/US11330316B2/en active Active
- 2016-06-15 WO PCT/EP2016/063800 patent/WO2016202886A1/en active Application Filing
- 2016-06-15 US US15/736,483 patent/US10856029B2/en active Active
- 2016-06-15 WO PCT/EP2016/063802 patent/WO2016202887A1/en active Application Filing
- 2016-06-15 WO PCT/EP2016/063804 patent/WO2016202889A1/en active Application Filing
- 2016-06-15 US US15/736,654 patent/US11425439B2/en active Active
- 2016-06-15 WO PCT/EP2016/063806 patent/WO2016202890A1/en active Application Filing
- 2016-06-15 US US15/736,934 patent/US10791356B2/en active Active
- 2016-06-15 EP EP16729571.6A patent/EP3298790A1/en active Pending
- 2016-06-15 WO PCT/EP2016/063797 patent/WO2016202884A1/en active Application Filing
- 2016-06-15 EP EP16729569.0A patent/EP3298788A1/en active Pending
- 2016-06-15 EP EP16729570.8A patent/EP3298789B1/en active Active
- 2016-06-15 US US15/736,564 patent/US20180139472A1/en not_active Abandoned
- 2016-06-15 EP EP16729572.4A patent/EP3298791A1/en active Pending
- 2016-06-15 EP EP16731557.1A patent/EP3298793A1/en not_active Ceased
- 2016-06-15 US US15/736,943 patent/US10567822B2/en active Active
- 2016-06-15 WO PCT/EP2016/063799 patent/WO2016202885A1/en active Application Filing
- 2016-06-15 US US15/736,966 patent/US10674196B2/en active Active
- 2016-06-15 EP EP16730351.0A patent/EP3298792A1/en active Pending
- 2016-06-15 WO PCT/EP2016/063803 patent/WO2016202888A1/en active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007117613A2 (en) * | 2006-04-06 | 2007-10-18 | Ferguson Kenneth H | Media content programming control method and apparatus |
US20130339539A1 (en) * | 2006-12-06 | 2013-12-19 | Carnegie Mellon University, Center for Technology Transfer | System and Method for Capturing, Editing, Searching, and Delivering Multi-Media Content |
US20090148124A1 (en) * | 2007-09-28 | 2009-06-11 | Yahoo!, Inc. | Distributed Automatic Recording of Live Event |
US20120265621A1 (en) * | 2011-04-14 | 2012-10-18 | Koozoo, Inc. | Method and system for an advanced player in a network of multiple live video sources |
US20140085485A1 (en) * | 2012-09-27 | 2014-03-27 | Edoardo Gavita | Machine-to-machine enabled image capture and processing |
US20140281011A1 (en) * | 2013-03-15 | 2014-09-18 | Watchitoo, Inc. | System and method for replicating a media stream |
US20150043892A1 (en) * | 2013-08-08 | 2015-02-12 | Nbcuniversal Media, Llc | Method and system for sourcing and editing live video |
US20150120839A1 (en) * | 2013-10-28 | 2015-04-30 | Verizon Patent And Licensing Inc. | Providing contextual messages relating to currently accessed content |
Non-Patent Citations (5)
Title |
---|
BENJAMIN TAG ET AL: "Collaborative storyboarding through democratization of content production", ADVANCES IN COMPUTER ENTERTAINMENT TECHNOLOGY, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 11 November 2014 (2014-11-11), pages 1 - 4, XP058064259, ISBN: 978-1-4503-2945-3, DOI: 10.1145/2663806.2663875 * |
CRISTIAN HESSELMAN ET AL: "Sharing enriched multimedia experiences across heterogeneous network infrastructures", IEEE COMMUNICATIONS MAGAZINE, IEEE SERVICE CENTER, PISCATAWAY, US, vol. 48, no. 6, 1 June 2010 (2010-06-01), pages 54 - 65, XP011469851, ISSN: 0163-6804, DOI: 10.1109/MCOM.2010.5473865 * |
KEVIN MURRAY ET AL: "TM-CSS0017r7: TM-SM-CSS-0017 Companion Screens and Supplementary Streams Report", DVB, DIGITAL VIDEO BROADCASTING, C/O EBU - 17A ANCIENNE ROUTE - CH-1218 GRAND SACONNEX, GENEVA - SWITZERLAND, 18 February 2013 (2013-02-18), XP017840082 * |
PEREIRA F ET AL: "Multimedia Retrieval and Delivery: Essential Metadata Challenges and Standards", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 96, no. 4, 1 April 2008 (2008-04-01), pages 721 - 744, XP011206030, ISSN: 0018-9219 * |
WEN GAO ET AL: "Vlogging", ACM COMPUTING SURVEYS, ACM, NEW YORK, NY, US, vol. 42, no. 4, 23 June 2010 (2010-06-23), pages 1 - 57, XP058090589, ISSN: 0360-0300, DOI: 10.1145/1749603.1749606 * |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10990697B2 (en) | 2014-05-28 | 2021-04-27 | Snap Inc. | Apparatus and method for automated privacy protection in distributed images |
US11972014B2 (en) | 2014-05-28 | 2024-04-30 | Snap Inc. | Apparatus and method for automated privacy protection in distributed images |
US11166121B2 (en) | 2014-06-13 | 2021-11-02 | Snap Inc. | Prioritization of messages within a message collection |
US10448201B1 (en) | 2014-06-13 | 2019-10-15 | Snap Inc. | Prioritization of messages within a message collection |
US10623891B2 (en) | 2014-06-13 | 2020-04-14 | Snap Inc. | Prioritization of messages within a message collection |
US10659914B1 (en) | 2014-06-13 | 2020-05-19 | Snap Inc. | Geo-location based event gallery |
US11317240B2 (en) | 2014-06-13 | 2022-04-26 | Snap Inc. | Geo-location based event gallery |
US10779113B2 (en) | 2014-06-13 | 2020-09-15 | Snap Inc. | Prioritization of messages within a message collection |
US11741136B2 (en) | 2014-09-18 | 2023-08-29 | Snap Inc. | Geolocation-based pictographs |
US12113764B2 (en) | 2014-10-02 | 2024-10-08 | Snap Inc. | Automated management of ephemeral message collections |
US10311916B2 (en) | 2014-12-19 | 2019-06-04 | Snap Inc. | Gallery of videos set to an audio time line |
US11372608B2 (en) | 2014-12-19 | 2022-06-28 | Snap Inc. | Gallery of messages from individuals with a shared interest |
US10811053B2 (en) | 2014-12-19 | 2020-10-20 | Snap Inc. | Routing messages by message parameter |
US11250887B2 (en) | 2014-12-19 | 2022-02-15 | Snap Inc. | Routing messages by message parameter |
US11803345B2 (en) | 2014-12-19 | 2023-10-31 | Snap Inc. | Gallery of messages from individuals with a shared interest |
US11783862B2 (en) | 2014-12-19 | 2023-10-10 | Snap Inc. | Routing messages by message parameter |
US10580458B2 (en) | 2014-12-19 | 2020-03-03 | Snap Inc. | Gallery of videos set to an audio time line |
US11249617B1 (en) | 2015-01-19 | 2022-02-15 | Snap Inc. | Multichannel system |
US10893055B2 (en) | 2015-03-18 | 2021-01-12 | Snap Inc. | Geo-fence authorization provisioning |
US11902287B2 (en) | 2015-03-18 | 2024-02-13 | Snap Inc. | Geo-fence authorization provisioning |
US11496544B2 (en) | 2015-05-05 | 2022-11-08 | Snap Inc. | Story and sub-story navigation |
US11468615B2 (en) | 2015-12-18 | 2022-10-11 | Snap Inc. | Media overlay publication system |
US11830117B2 (en) | 2015-12-18 | 2023-11-28 | Snap Inc | Media overlay publication system |
US11349796B2 (en) | 2017-03-27 | 2022-05-31 | Snap Inc. | Generating a stitched data stream |
WO2018183119A1 (en) * | 2017-03-27 | 2018-10-04 | Snap Inc. | Generating a stitched data stream |
US11297399B1 (en) | 2017-03-27 | 2022-04-05 | Snap Inc. | Generating a stitched data stream |
US11558678B2 (en) | 2017-03-27 | 2023-01-17 | Snap Inc. | Generating a stitched data stream |
US10581782B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US10582277B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US10701462B2 (en) | 2018-04-12 | 2020-06-30 | International Business Machines Corporation | Generating video montage of an event |
US11716513B2 (en) | 2021-07-08 | 2023-08-01 | Meta Platforms, Inc. | Prioritizing encoding of video data received by an online system to maximize visual quality while accounting for fixed computing capacity |
US11445252B1 (en) * | 2021-07-08 | 2022-09-13 | Meta Platforms, Inc. | Prioritizing encoding of video data received by an online system to maximize visual quality while accounting for fixed computing capacity |
Also Published As
Publication number | Publication date |
---|---|
US20190110096A1 (en) | 2019-04-11 |
WO2016202887A1 (en) | 2016-12-22 |
WO2016202884A1 (en) | 2016-12-22 |
US10791356B2 (en) | 2020-09-29 |
EP3308548A1 (en) | 2018-04-18 |
WO2016202886A1 (en) | 2016-12-22 |
WO2016202889A1 (en) | 2016-12-22 |
US11330316B2 (en) | 2022-05-10 |
US20180220165A1 (en) | 2018-08-02 |
US20180139472A1 (en) | 2018-05-17 |
US20180176607A1 (en) | 2018-06-21 |
US10567822B2 (en) | 2020-02-18 |
US10856029B2 (en) | 2020-12-01 |
EP3298793A1 (en) | 2018-03-28 |
US20180184138A1 (en) | 2018-06-28 |
US11425439B2 (en) | 2022-08-23 |
EP3298789A1 (en) | 2018-03-28 |
EP3298788A1 (en) | 2018-03-28 |
EP3298791A1 (en) | 2018-03-28 |
WO2016202885A1 (en) | 2016-12-22 |
EP3298789B1 (en) | 2024-08-21 |
WO2016202888A1 (en) | 2016-12-22 |
US20180184133A1 (en) | 2018-06-28 |
US20180199082A1 (en) | 2018-07-12 |
US10674196B2 (en) | 2020-06-02 |
EP3298790A1 (en) | 2018-03-28 |
EP3298792A1 (en) | 2018-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11330316B2 (en) | | Media streaming |
US20210142375A1 (en) | | Method and apparatus for managing advertisement content and personal content |
US20140086562A1 (en) | | Method And Apparatus For Creating A Composite Video From Multiple Sources |
US10826949B2 (en) | | Distributed control of media content item during webcast |
CN105120304B (en) | | Information display method, apparatus and system |
US20170134783A1 (en) | | High quality video sharing systems |
US10368134B2 (en) | | Live content streaming system and method |
US11825142B2 (en) | | Systems and methods for multimedia swarms |
US8065709B2 (en) | | Methods, systems, and computer program products for providing multi-viewpoint media content services |
EP3384678B1 (en) | | Network-based event recording |
JP5092000B2 (en) | | Video processing apparatus, method, and video processing system |
US9129641B2 (en) | | Method and system for media selection and sharing |
CN111279709B (en) | | Providing video recommendations |
KR20180020203A (en) | | Streaming media presentation system |
US20140129570A1 (en) | | Crowdsourcing Supplemental Content |
CN102216945B (en) | | Networking with media fingerprints |
US11245947B1 (en) | | Device and method for capturing, processing, linking and monetizing a plurality of video and audio recordings from different points of view (POV) |
KR20100069139A (en) | | System and method for personalized broadcast based on dynamic view selection of multiple video cameras, storage medium storing the same |
JP2022000955A (en) | | Scene sharing system |
Mate | | Automatic Mobile Video Remixing and Collaborative Watching Systems |
Andrews | | ‘This is FilmFour–Not Some Cheesy Pseudo-Hollywood Thing!’: The Opening Night Simulcast of FilmFour on Channel 4 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16729572; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 2016729572; Country of ref document: EP |