US20190082202A1 - Content delivery via private wireless network - Google Patents
- Publication number
- US20190082202A1 (U.S. application Ser. No. 16/188,891)
- Authority
- US
- United States
- Prior art keywords
- data
- video imagery
- imagery data
- meta
- recited
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2665—Gathering content from different sources, e.g. Internet and satellite
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2668—Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
- H04N21/8133—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
Definitions
- a sporting venue may provide camera equipment that broadcasts video imagery to one or more video screens, such as for video replay for referees or audience members.
- individuals may utilize a cellular wireless network to access video imagery. Such access is typically facilitated for professional sporting events in which the stadiums, or other venues, are configured with expensive broadcasting equipment.
- FIG. 1 is a block diagram illustrative of one or more components for facilitating the collecting and processing of video imagery data
- FIG. 2 is a block diagram illustrating the interaction of the components of FIG. 1 in a venue
- FIG. 3 is a flow diagram illustrative of a video imagery collection routine implemented by a control component
- FIG. 4 is a block diagram illustrative of a screen display generated by an output device for associating meta-data to collected video imagery data
- FIG. 5 is a flow diagram illustrative of a video requesting processing routine implemented by a control component.
- the present application corresponds to systems and methods for obtaining, processing and delivering video imagery. More specifically, aspects of the present invention correspond to the collection, processing and publication of video imagery corresponding to local sporting events.
- the present application includes one or more components for establishing a private wireless network between video input devices and a video imagery processing device. The one or more components may be embodied in a portable housing that facilitates the configuration of video collection devices, the private wireless network and the processing of the video imagery data independent of any infrastructure provided by the sporting venue.
- the present application includes the generation or identification, by a control component, of meta-data associated with an assessment of collected video imagery data.
- the video imagery data can be segmented into individual plays and the meta-data corresponds to an assessment of the segmented plays (e.g., the segmented video imagery).
- the present application can correspond to the publication or transmission of processed video imagery data and any associated meta-data to one or more output devices based on an assessed role (e.g., a coach) or other permissions.
- the output devices may include additional functionality that facilitates the control and management of the video imagery data based on the meta-data.
- one aspect of the present application corresponds to the provisioning and configuration of components utilized to collect and process video imagery data in a venue independent of any infrastructure available or provided by the venue.
- the components may be embodied in a portable housing to facilitate mobility and transportability.
- FIG. 1 is a block diagram illustrative of one or more components for facilitating the collecting and processing of video imagery data.
- a housing component 100 can correspond to a portable container for facilitating the storage and transportation of other components.
- the housing 100 can include various attributes to facilitate transportation including handles, wheels, etc.
- the housing 100 can also include attributes to facilitate storage including mounts, brackets, sub-storage components, and the like.
- the housing 100 may include one or more openings, latches, doors, etc. that facilitate connection of the components with other components or equipment such as power sources (e.g., generators).
- the housing 100 may be constructed of a variety of materials, or combinations of materials, based on the intended use or desired level of portability/durability.
- the one or more components can include a power component 102 .
- the power component 102 is illustratively configured to obtain power from a local power source (e.g., generator or locally provided power socket) and provide power to other components.
- the power component 102 can include extension cords, fuses, safety devices, and the like. Additionally, in some embodiments, the power component can include power generating equipment such as solar cells, fuel cells, gasoline engine, induction devices and the like.
- the one or more components can further include a control component 104 .
- the control component corresponds to one or more computing devices operative to obtain collected video imagery data and process video imagery data.
- the control component 104 can obtain power from the power component 102 .
- the control component 104 can be connected to other devices via a private network facilitated by a networking component 106 .
- control component 104 can correspond to a variety of computing devices including, but not limited to, personal computing devices, server computing devices, laptop or tablet computing devices, mobile devices, and gaming devices.
- although the control component 104 is depicted in FIG. 1 as a single computing device or component, this is illustrative only.
- the control component 104 may be embodied in a plurality of computing devices.
- any computing device implementing at least some aspects of the functionality associated with the control component 104 may include memory, processing unit(s) and computer readable medium drive(s), all of which may communicate with one another by way of a communication bus.
- the network interface may provide connectivity over the network or other networks or computer systems.
- the processing unit(s) may communicate to and from memory containing program instructions that the processing unit(s) executes in order to operate.
- the memory generally includes RAM, ROM or other persistent and auxiliary memory.
- the control component 104 can include an external data source interface component for obtaining external information from network data sources.
- the control component 104 may include any one of a number of additional hardware and software components that would be utilized in the illustrative computerized network environment to carry out the illustrative functions of the control component 104 or any of the individually identified components.
- the network component 106 can include one or more hardware networking devices that can establish a private network between the control component 104 and one or more networked input and output devices 108 .
- the private network corresponds to a private, wireless network for facilitating the transmission of video imagery data from an input device to the control component 104 (directly or indirectly).
- the wireless networks can utilize one or more communication protocols including, but not limited to, Bluetooth, the family of IEEE 802.11 technical standards (“WiFi”), the IEEE 802.16 standards (“WiMax”), short message service (“SMS”), voice over IP (“VoIP”), as well as various generation cellular air interface communication protocols (including, but not limited to, air interface protocols based on CDMA, TDMA, GSM, WCDMA, CDMA2000, TD-SCDMA, WTDMA, LTE, LTE-A, OFDMA and similar technologies).
- the networked input and output devices 108 can correspond to a wide variety of devices.
- input devices can correspond to video cameras or audio input devices that are configured to capture data associated with a sporting event.
- the input devices may be specifically configured to transmit captured data directly via the private wireless network.
- the input devices may be configured to utilize one or more additional hardware devices that function to transmit data via the private wireless network.
- the input devices may be associated with an additional wireless transmitter component that can be configured with additional security or authentication credentials.
- the output devices can also correspond to a wide variety of devices.
- the output devices can correspond to mobile devices, portable computing devices, tablet computing devices, and the like that are configured to receive processed video imagery data and display the data.
- the output devices may be individually addressable on the private, wireless network.
- the output devices may be associated with particular users or subscribers. For example, one or more coaches of a team, such as football team, may be associated with particular devices such that processed video imagery data may be directly transmitted to particular coaches.
- the output devices may include additional software functionality, such as a browser software application or viewer application, that facilitates searching for video imagery segments, management of stored video imagery segments or controls for playing video imagery segments on one or more displays, such as a display associated with the output device.
- with reference to FIG. 2, illustrative interactions between the components of the present application will be described.
- various input devices 108 have been placed in a venue 200 and are configured to capture video imagery data.
- the control component 104 and input/output devices 108 are capable of exchanging data via a private wireless network established via the networking component 106 .
- the input devices 108 capture data, such as plays of a game (e.g., a football game) at (1).
- the input devices may be controlled to capture only a sequence of images that best approximates a single play, such as by including user input or other cues.
- the input devices may also be configured to continuously capture data that may encompass any number of plays.
- the captured data is then transmitted to the control component 104 via the wireless network at (2).
- the transmission of the video imagery data can include the utilization of various communication and networking protocols, such as encryption, compression, etc.
- the captured data is transmitted to the control component 104 for further processing at (3).
- the control component 104 obtains the transmitted data at (4).
- the control component 104 obtains categorization data.
- the categorization data can include an assessment of one or more attributes of a play, such as a type of play (e.g., a running play), an outcome of the play (a sack), progress (e.g., number of yards gained), etc.
- the categorization data may be obtained by a user interfacing with the control component 104 and watching the video imagery.
- the categorization data, or a portion thereof may be automatically generated by the control component 104 via software processes.
- the control component 104 can also process the incoming video imagery data so that it most closely resembles increments of single plays (e.g., video editing).
- the control component 104 associates meta-data with the obtained video imagery data.
- the meta-data can directly correspond to the categorization data mentioned above.
- the meta-data may be based on the categorization data and can include additional data such as attributes of how the categorization information was obtained, external reference information (e.g., location, timestamp, etc.).
- the control component 104 can store the video imagery data and associated meta-data, such as in a data store.
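The association and storage of meta-data described above can be sketched as follows. This is a minimal illustrative sketch, not the application's implementation: the `VideoSegment` shape, field names, and the in-memory dict standing in for the data store are all assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical shape for a segmented play and its associated meta-data,
# assuming categorization arrives as a simple dict of attribute -> value.
@dataclass
class VideoSegment:
    segment_id: str
    raw_bytes: bytes
    meta: dict = field(default_factory=dict)

def associate_metadata(segment, categorization, timestamp, location):
    """Merge categorization data with external reference information."""
    segment.meta.update(categorization)
    segment.meta["timestamp"] = timestamp  # externally provided reference data
    segment.meta["location"] = location
    return segment

store = {}  # stands in for the data store

seg = VideoSegment("play-017", b"\x00")
associate_metadata(seg, {"play_type": "run", "yards": 4}, 1409900000, "Field 2")
store[seg.segment_id] = seg  # persist segment with its meta-data
```

A later query against the data store can then match on any of the merged meta-data keys.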
- the control component 104 can publish or transmit the video imagery data to one or more devices.
- the control component 104 transmits data to an output device, which displays the data at (9).
- the published video imagery data may be transmitted to single devices or sets of devices as specified by logical rules or other processing instructions.
- the output devices may include viewer or browser functionality that facilitates the sorting of a set of video imagery data and control of the playback of selected video imagery segments.
- the output devices may facilitate the specification of a set of searching criteria that identifies specific video imagery segments matching, or most closely matching, the input criteria.
- the video imagery segments may be transmitted to at least one output device on a real-time or substantially real-time basis.
- the control component 104 may be configured to transmit at least a portion of a collected video imagery segment and its associated meta-data within a specified time window or upon completion of the processing of the meta-data.
- captured video imagery data is then transmitted to the control component 104 via the network, such as a wireless network or a wired network.
- the captured video imagery data is transmitted from one or more video cameras configured to utilize wireless communication protocols.
- the transmission of the video imagery data can include the utilization of various communication and networking protocols, such as encryption, compression, enhancement, conversion, etc.
- one or more video input devices may utilize a hardwired connection.
- the control component 104 can be configured to accept video imagery data from individuals or devices that are otherwise not dedicated to collecting video imagery data in a peer-to-peer model. For example, fans/attendees at a sporting event can volunteer to provide video imagery data via a mobile device that can establish communications, at least temporarily, with the control component 104 .
- the control component 104 obtains categorization data.
- the categorization data can include an assessment of one or more attributes of a play, such as a type of play (e.g., a running play), an outcome of the play (e.g., a touchdown, a sack, a gain of yards, a loss of yards, a fumble, an interception, etc.), progress (e.g., a number of yards gained/lost), and other information relating to, or describing, aspects of the play or set of plays.
- the categorization data can include a reference to a designated play, identified players/athletes, designated formations and the like. Still further, the categorization data can include reference to externally provided information including timestamp information, location information, weather information (e.g., temperature, wind speed, humidity) and the like.
- the categorization data may be obtained by a user interfacing with the control component 104 and watching the video imagery.
- An illustrative screen display for collecting categorization data will be described below with regard to FIG. 4 .
- the assessment of the one or more attributes may be defined such that a user may select from a set of available attributes.
- a user may be presented with a set of categories, or buckets, in which the selection of the categorization data corresponds to a selection of one or more applicable categories.
- the user may be presented with the opportunity to designate categorization information in a more free-form manner, such as text inputs, audio commentary and the like.
- the assessment of the one or more attributes may be more rigidly defined such that a user must provide an answer, or otherwise select a default value, for a set of defined attributes.
- the categorization data, or a portion thereof may be automatically generated by the control component 104 via software processes. To the extent required, the control component 104 can also process the incoming video imagery data so that it most closely resembles increments of single plays (e.g., video editing).
- the control component 104 associates meta-data with the obtained video imagery data.
- the meta-data can directly correspond to the categorization data mentioned above.
- the categorization data may directly correspond to meta-data or be mapped to defined meta-data.
- the meta-data may be based on the categorization data and can include additional data such as attributes of how the categorization information was obtained, external reference information (e.g., location, timestamp, etc.).
- the control component 104 may utilize filtering rules to determine whether one or more categorization data may be omitted or otherwise not identified as meta-data.
- control component 104 may filter weather-related data if weather data collected for a particular video imagery segment is not substantially different from previously collected weather information. Additionally, the control component 104 can also apply rules or other business logic to the collected categorization data based on an identity/reputation of a user providing the categorization data. In this capacity, some of the categorization data may be given a weight or otherwise filtered based on previous experiences, permissions, etc. In this example, the control component 104 may be able to accept video imagery data or categorization data on a peer-to-peer basis.
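The filtering and weighting rules described above can be sketched as follows. This is an illustrative sketch only: the weather threshold, the reputation table, and the `weight` key are invented for illustration and are not values from the application.

```python
# Assumed threshold for "not substantially different" weather readings.
WEATHER_DELTA_THRESHOLD = 2.0  # degrees
# Assumed reputation table; unknown contributors get a low default weight.
REPUTATION = {"staff": 1.0, "volunteer": 0.4}

def select_metadata(categorization, previous_temperature, contributor):
    """Apply filtering rules before promoting categorization data to meta-data."""
    meta = dict(categorization)
    # Omit weather data that barely differs from the previous reading.
    temp = meta.get("temperature")
    if temp is not None and previous_temperature is not None:
        if abs(temp - previous_temperature) < WEATHER_DELTA_THRESHOLD:
            meta.pop("temperature")
    # Weight the contribution by the contributor's identity/reputation.
    meta["weight"] = REPUTATION.get(contributor, 0.1)
    return meta
```

For example, a volunteer-supplied temperature reading within the threshold of the last reading would be dropped, while the remaining categorization survives with a reduced weight.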
- the control component 104 can store the video imagery data and associated meta-data, such as in a data store.
- the control component 104 can publish or transmit the video imagery data to one or more devices, such as an addressable device corresponding to a particular coach or set of coaches.
- the output devices may receive the video imagery data via a wireless network connection and utilize one or more software applications for managing the processed video imagery data, such as a viewer or browser application.
- the publication of the video imagery data may be accomplished on a real-time or substantially real-time basis.
- the control component 104 may transmit video imagery data in accordance with a maximum time window such that all transmissions are intended to complete prior to the expiration of the maximum time window.
- the timing associated with the transmission of the video imagery data may correspond to the intended recipient.
- an output device associated with a coach may be associated with a smaller time window for receiving transmission compared to an output device associated with a user in the general public.
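The recipient-dependent time windows described above can be sketched as follows. This is an illustrative sketch under stated assumptions: the role names and window values are invented, not taken from the application.

```python
# Assumed maximum transmission windows per recipient role, in seconds.
# A coach's device gets a smaller window than a general-public device.
MAX_WINDOW_SECONDS = {"coach": 5.0, "public": 60.0}

def transmission_deadline(processed_at, role):
    """Latest time by which a processed segment should reach the given role."""
    window = MAX_WINDOW_SECONDS.get(role, MAX_WINDOW_SECONDS["public"])
    return processed_at + window
```

A scheduler on the control component could then order pending transmissions by deadline, so coach-bound segments are sent first.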
- the transmission of the video imagery segments to an output device can include the transmission of notifications or links that facilitate the access of video imagery segments directly from a storage component or service.
- the screen display 400 includes a first portion 402 for rendering collected video imagery data.
- the first portion 402 would illustratively include any one of a variety of video controls, such as play, pause, progress bars, etc. Additionally, the first portion could include further controls to modify the size of the portion relative to the screen display 400 .
- the screen display 400 can also include a second portion 404 for managing a set of collected video imagery data.
- the second portion can include controls 406 for selecting which video imagery clip to play, identification information 408 describing the video imagery segment (e.g., the play called) and additional controls 410 for deleting, reordering or otherwise managing the set of video imagery data.
- the screen display 400 can also incorporate or adopt other file management techniques or paradigms to assist in the management of video imagery data.
- the screen display 400 can also include a third portion 412 for collecting categorization information/data, namely, an assessment of a type of play.
- the screen display 400 includes a portion 412 A for assessing a type of offensive play 414 (e.g., a run or a pass), a portion 412 B for assessing defensive formations 414 (e.g., man or zone), and a portion 412 C for assessing special teams plays 414 (e.g., punt, field goal, kickoff, etc.).
- the portions 412 / 414 can correspond to display objects that are selectable by a user via input devices such as touch screens, mice, keyboards, audio controls, etc.
- a user may interface with the control component 104 via a separate device such that the user inputs are provided to the control component 104 .
- a user may access a portion of the screen display 400 via a mobile device in communication with the control component 104 via short range wireless connections, such as the Bluetooth wireless communication protocol.
- the screen display 400 can also include a fourth portion 416 for obtaining categorization information regarding a result of the particular video imagery segment.
- illustrative results corresponding to a football event can include, but are not limited to, touchdown, incomplete pass, fumble, sack, interception, penalty, safety, blocked kick, broken play, etc.
- the result can be one of an interpreted positive result, negative result, or neutral result (e.g., “positive yards,” “negative yards,” or “no gain”).
- the results discussed are illustrative and are not intended to limit the types of result assessments possible.
- the screen display 400 can also include a fifth portion 418 for obtaining categorization information regarding an assessed result of the particular video imagery segment.
- the result corresponds to a categorization of the specific amount of yards gained or lost on a particular play.
- the fifth portion 418 includes controls 418 A- 418 D that facilitate the selection of a number of yards gained or lost in a play.
- the spacing of the controls 418 can also correspond to an approximate measure of the yards, in which the spacing on the screen display 400 of controls 418 A is closer than the spacing of controls 418 B, which is closer than the spacing for 418 C and 418 D.
- the user is provided an appearance of a net result, which may facilitate collection of the categorization data in real-time.
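The nonlinear spacing of the yardage controls described above can be sketched as a layout function. This is a hypothetical sketch: the base gap, the scaling factor, and the idea of widening gaps with yardage magnitude are assumptions made only to illustrate spacing that gives "an appearance of a net result".

```python
def control_positions(yard_values):
    """Return (yards, x_offset) pairs with gaps that widen for bigger plays."""
    positions = []
    x = 0.0
    prev = None
    for y in sorted(yard_values):
        if prev is not None:
            # Assumed rule: a base gap plus a term that grows with magnitude,
            # so short-yardage controls sit closer together than long-yardage ones.
            x += 10.0 + 2.0 * max(abs(prev), abs(y))
        positions.append((y, x))
        prev = y
    return positions
```

With inputs like 0, 1, 2, 10, 20 yards, the gap between the 10- and 20-yard controls comes out wider than the gap between the 1- and 2-yard controls.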
- the screen display 400 can further include a sixth portion 420 for controlling the publication or storage of the collected video imagery segment and the categorization information.
- the portion 420 can include a control to publish the video segment such that a video segment is automatically sent to one or more designated individuals, such as a coach or a set of coaches. In other embodiments, a user may be prompted to identify to whom the video imagery segment may be sent. Still further, the portion 420 can also include controls for facilitating the storage or archiving of the video imagery segment.
- routine 500 can be implemented to process requests for specific video imagery data.
- the video imagery data may be automatically transmitted without requiring the processing of a request.
- at least some portion of routine 500 may be implemented in an output device 108 to facilitate searching for video imagery segments provided by the control component 104 or otherwise made accessible to the output device by the control component. Accordingly, while routine 500 will be described with regard to implementation by the control component 104 , aspects of the routine 500 can be implemented by other components as appropriate.
- the control component 104 obtains a request for video imagery data including selection criteria.
- the selection criteria can correspond to one or more of the categorization data.
- the categorization data can include an assessment of one or more attributes of a play, such as a type of play (e.g., a running play), an outcome of the play (e.g., a touchdown, a sack, a gain of yards, a loss of yards, a fumble, an interception, etc.), progress (e.g., a number of yards gained/lost), and other information relating to, or describing, aspects of the play or set of plays.
- the categorization data can include a reference to a designated play, identified players/athletes, designated formations and the like. Still further, the categorization data can include reference to externally provided information including timestamp information, location information, weather information (e.g., temperature, wind speed, humidity) and the like.
- the control component 104 identifies categorization data corresponding to the data request by filtering the selection criteria, utilizing profile information or otherwise obtaining additional information.
- the control component 104 identifies meta-data that will be used to search the obtained video imagery data.
- the meta-data can directly correspond to the categorization data mentioned above.
- the categorization data may directly correspond to meta-data or be mapped to defined meta-data.
- the meta-data may be based on the categorization data and can include additional data such as attributes of how the categorization information was obtained, external reference information (e.g., location, timestamp, etc.).
- the control component 104 identifies the previously stored video imagery data and associated meta-data, such as by transmitting a query to a data store.
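The query step above can be sketched as a meta-data match against the data store. This is a minimal sketch under stated assumptions: the store is modeled as an in-memory list of (segment id, meta-data) pairs, and selection criteria are treated as exact matches; the play names are invented examples.

```python
def query_segments(data_store, criteria):
    """Return ids of segments whose meta-data matches every selection criterion."""
    return [
        seg_id
        for seg_id, meta in data_store
        if all(meta.get(key) == value for key, value in criteria.items())
    ]

# Invented example store: segmented plays with previously associated meta-data.
plays = [
    ("play-001", {"play_type": "run", "outcome": "touchdown"}),
    ("play-002", {"play_type": "pass", "outcome": "sack"}),
    ("play-003", {"play_type": "run", "outcome": "fumble"}),
]
```

A request for all running plays would match `play-001` and `play-003`; adding an outcome criterion narrows the result further.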
- the control component 104 transmits or otherwise publishes the identified video imagery data.
- the control component 104 can identify potentially relevant video imagery data via a traditional screen interface corresponding to a text search, Web search and the like.
- the control component 104 can identify potentially relevant video imagery data in a graphical form.
- the results of the search can be encompassed in graphical objects in which the more relevant video imagery segments are represented in larger dimension graphical object.
- the control component 104 can automatically begin broadcasting or transmitting the identified video imagery data.
- the routine ends.
- All of the processes described herein may be embodied in, and fully automated via, software code modules executed by one or more general purpose computers or processors.
- the code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all the methods may alternatively be embodied in specialized computer hardware.
- the components referred to herein may be implemented in hardware, software, firmware or a combination thereof.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Astronomy & Astrophysics (AREA)
- General Physics & Mathematics (AREA)
- Closed-Circuit Television Systems (AREA)
- Library & Information Science (AREA)
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 15/348,896, filed Nov. 10, 2016, which is a continuation of U.S. patent application Ser. No. 14/479,033, filed Sep. 5, 2014, now U.S. Pat. No. 9,497,490, issued Nov. 15, 2016, the entirety of each of which is incorporated by reference herein.
- Generally described, the use of video imagery in sporting events has existed in many forms. For example, a sporting venue may provide camera equipment that broadcasts video imagery to one or more video screens, such as for video replay for referees or audience members. In other examples, individuals may utilize a cellular wireless network to access broadcast video imagery. Such access is typically facilitated for professional sporting events in which the stadiums, or other venues, are configured with expensive broadcasting equipment.
- In addition to financial limitations associated with providing video imagery to different sporting events, some athletic associations or governing bodies had prevented the utilization of video imagery by the teams participating in a local sporting event. As policies or rules are modified to allow for the use of video imagery by teams, many venues, such as parks, high schools, etc., do not have the same infrastructure to facilitate the collection and processing of video imagery.
- Aspects and advantages of the embodiments provided herein are described with reference to the following detailed description in conjunction with the accompanying drawings. Throughout the drawings, reference numbers may be re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate example embodiments described herein and are not intended to limit the scope of the disclosure.
- FIG. 1 is a block diagram illustrative of one or more components for facilitating the collecting and processing of video imagery data;
- FIG. 2 is a block diagram illustrating the interaction of the components of FIG. 1 in a venue;
- FIG. 3 is a flow diagram illustrative of a video imagery collection routine implemented by a control component;
- FIG. 4 is a block diagram illustrative of a screen display generated by an output device for associating meta-data to collected video imagery data; and
-
FIG. 5 is a flow diagram illustrative of a video request processing routine implemented by a control component. - Generally described, the present application corresponds to systems and methods for obtaining, processing and delivering video imagery. More specifically, aspects of the present invention correspond to the collection, processing and publication of video imagery corresponding to local sporting events. In one aspect, the present application includes one or more components for establishing a private wireless network between video input devices and a video imagery processing device. The one or more components may be embodied in a portable housing that facilitates the configuration of the video collection devices, the private wireless network and the processing of the video imagery data independent of any infrastructure provided by the sporting venue. In another aspect, the present application includes the generation or identification, by a control component, of meta-data associated with an assessment of collected video imagery data. The video imagery data can be segmented into individual plays, and the meta-data corresponds to an assessment of the segmented plays (e.g., the segmented video imagery). In still a further aspect, the present application can correspond to the publication or transmission of processed video imagery data and any associated meta-data to one or more output devices based on an assessed role (e.g., a coach) or other permissions. The output devices may include additional functionality that facilitates the control and management of the video imagery data based on the meta-data.
- The present application will be discussed with regard to illustrative examples and embodiments, illustrative screen displays and types of sporting events. Such examples and embodiments are solely meant to be illustrative and should not necessarily be construed as limiting. For example, while many illustrative examples will be described with regard to football-based examples, the scope of the present application should not be limited specifically to attributes of football sporting events or specific categorizations or meta-data associated with football sporting events.
- As mentioned above, one aspect of the present application corresponds to the provisioning and configuration of components utilized to collect and process video imagery data in a venue independent of any infrastructure available or provided by the venue. Illustratively, the components may be embodied in a portable housing to facilitate mobility and transportability.
- FIG. 1 is a block diagram illustrative of one or more components for facilitating the collecting and processing of video imagery data. With reference to FIG. 1, a housing component 100 can correspond to a portable container for facilitating the storage and transportation of other components. The housing 100 can include various attributes to facilitate transportation, including handles, wheels, etc. The housing 100 can also include attributes to facilitate storage, including mounts, brackets, sub-storage components, and the like. Still further, the housing 100 may include one or more openings, latches, doors, etc. that facilitate connection of the components with other components or equipment such as power sources (e.g., generators). Illustratively, the housing 100 may be constructed of a variety of materials, or combinations of materials, based on the intended use or desired level of portability/durability. - With continued reference to
FIG. 1, the one or more components can include a power component 102. The power component 102 is illustratively configured to obtain power from a local power source (e.g., a generator or a locally provided power socket) and provide power to the other components. The power component 102 can include extension cords, fuses, safety devices, and the like. Additionally, in some embodiments, the power component can include power-generating equipment such as solar cells, fuel cells, gasoline engines, induction devices and the like. - The one or more components can further include a
control component 104. Illustratively, the control component corresponds to one or more computing devices operative to obtain collected video imagery data and to process the video imagery data. The control component 104 can obtain power from the power component 102. Additionally, the control component 104 can be connected to other devices via a private network facilitated by a networking component 106. One skilled in the relevant art will appreciate that the control component 104 can correspond to a variety of computing devices including, but not limited to, personal computing devices, server computing devices, laptop or tablet computing devices, mobile devices and gaming devices. - While the
control component 104 is depicted in FIG. 1 as a single computing device or component, this is illustrative only. The control component 104 may be embodied in a plurality of computing devices. Generally described, any computing device implementing at least some aspects of the functionality associated with the control component 104 may include memory, processing unit(s) and computer readable medium drive(s), all of which may communicate with one another by way of a communication bus. The network interface may provide connectivity over the network or other networks or computer systems. The processing unit(s) may communicate to and from memory containing program instructions that the processing unit(s) executes in order to operate. The memory generally includes RAM, ROM or other persistent and auxiliary memory. The control component 104 can include an external data source interface component for obtaining external information from network data sources. One skilled in the relevant art will also appreciate that the control component 104 may include any one of a number of additional hardware and software components that would be utilized in the illustrative computerized network environment to carry out the illustrative functions of the control component 104 or any of the individually identified components. - Illustratively, the
network component 106 can include one or more hardware networking devices that can establish a private network between the control component 104 and one or more networked input and output devices 108. Illustratively, at least some portion of the private network corresponds to a private, wireless network for facilitating the transmission of video imagery data from an input device to the control component 104 (directly or indirectly). The wireless networks can utilize one or more communication protocols including, but not limited to, Bluetooth, the family of IEEE 802.11 technical standards (“WiFi”), the IEEE 802.16 standards (“WiMax”), short message service (“SMS”), voice over IP (“VoIP”), as well as various generations of cellular air interface communication protocols (including, but not limited to, air interface protocols based on CDMA, TDMA, GSM, WCDMA, CDMA2000, TD-SCDMA, WTDMA, LTE, LTE-A, OFDMA and similar technologies). - The networked input and
output devices 108 can correspond to a wide variety of devices. For example, input devices can correspond to video cameras or audio input devices that are configured to capture data associated with a sporting event. The input devices may be specifically configured to transmit captured data directly via the private wireless network. Alternatively, the input devices may be configured to utilize one or more additional hardware devices that function to transmit data via the private wireless network. For example, the input devices may be associated with an additional wireless transmitter component that can be configured with additional security or authentication credentials. - The output devices can also correspond to a wide variety of devices. For example, the output devices can correspond to mobile devices, portable computing devices, tablet computing devices, and the like that are configured to receive processed video imagery data and display the data. The output devices may be individually addressable on the private, wireless network. Additionally, the output devices may be associated with particular users or subscribers. For example, one or more coaches of a team, such as a football team, may be associated with particular devices such that processed video imagery data may be directly transmitted to particular coaches. Still further, in some embodiments, the output devices may include additional software functionality, such as a browser software application or viewer application, that facilitates searching for video imagery segments, management of stored video imagery segments or controls for playing video imagery segments on one or more displays, such as a display associated with the output device.
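The role-based addressing of output devices described above can be reduced to a simple lookup. The following Python sketch is not taken from the patent; the registry shape, network addresses and role names are assumptions made purely for illustration:

```python
# Hypothetical registry mapping output-device addresses on the private
# wireless network to the roles of their associated users/subscribers.
DEVICE_REGISTRY = {
    "10.0.0.11": "head_coach",
    "10.0.0.12": "assistant_coach",
    "10.0.0.20": "spectator",
}

def devices_for_roles(registry, roles):
    """Return the addresses of output devices whose subscriber holds one
    of the given roles, so processed imagery can be sent to them directly."""
    return [addr for addr, role in registry.items() if role in roles]

# e.g., direct a processed segment only to coaching staff
coach_devices = devices_for_roles(DEVICE_REGISTRY, {"head_coach", "assistant_coach"})
```

In this sketch, transmitting a segment to "a coach or a set of coaches" would amount to iterating over `coach_devices` and pushing the data over the private wireless network; any real deployment would of course pair this with the authentication credentials mentioned above.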
- Turning now to
FIG. 2, illustrative interactions between the components of the present application will be described. For purposes of the illustration of FIG. 2, it is assumed that various input devices 108 have been placed in a venue 200 and are configured to capture video imagery data. Additionally, the control component 104 and the input/output devices 108 are capable of exchanging data via a private wireless network established via the networking component 106. With reference to FIG. 2, the input devices 108 capture data, such as plays of a game (e.g., a football game), at (1). The input devices may be controlled to only capture a sequence of images best approximated to a single play, such as by including user input or other cues. The input devices may also be configured to continuously capture data that may encompass any number of plays. The captured data is then transmitted to the control component 104 via the wireless network at (2). The transmission of the video imagery data can include the utilization of various communication and networking protocols, such as encryption, compression, etc. - The captured data is transmitted to the
control component 104 for further processing at (3). Illustratively, the control component 104 obtains the transmitted data at (4). Additionally, at (5), the control component 104 obtains categorization data. As will be described in greater detail below, the categorization data can include an assessment of one or more attributes of a play, such as a type of play (e.g., a running play), an outcome of the play (a sack), progress (e.g., number of yards gained), etc. Illustratively, the categorization data may be obtained by a user interfacing with the control component 104 and watching the video imagery. In another embodiment, the categorization data, or a portion thereof, may be automatically generated by the control component 104 via software processes. To the extent required, the control component 104 can also process the incoming video imagery data so that it most closely resembles increments of single plays (e.g., video editing). - At (6), the
control component 104 associates meta-data with the obtained video imagery data. The meta-data can directly correspond to the categorization data mentioned above. Alternatively, the meta-data may be based on the categorization data and can include additional data, such as attributes of how the categorization information was obtained and external reference information (e.g., location, timestamp, etc.). At (7), the control component 104 can store the video imagery data and associated meta-data, such as in a data store. In another example, the control component 104 can publish or transmit the video imagery data to one or more devices. - As illustrated in
FIG. 2, at (8), the control component 104 transmits data to an output device, which displays the data at (9). Illustratively, the published video imagery data may be transmitted to single devices or to sets of devices as specified by logical rules or other processing instructions. In some embodiments, the output devices may include viewer or browser functionality that facilitates the sorting of a set of video imagery data and control of the playback of selected video imagery segments. For example, the output devices may facilitate the specification of a set of search criteria that identifies specific video imagery segments matching, or most closely matching, the input criteria. Additionally, in some embodiments of the present application, the video imagery segments may be transmitted to at least one output device on a real-time or substantially real-time basis. For example, the control component 104 may be configured to transmit at least a portion of a collected video imagery segment and its associated meta-data within a specified time window or upon completion of the processing of the meta-data. - Turning now to
FIG. 3, a flow diagram of a data processing routine 300 implemented by the control component 104 will be described. At block 302, captured video imagery data is transmitted to the control component 104 via the network, such as a wireless network or a wired network. As described above, the captured video imagery data is transmitted from one or more video cameras configured to utilize wireless communication protocols. The transmission of the video imagery data can include the utilization of various communication and networking protocols, such as encryption, compression, enhancement, conversion, etc. In other embodiments, one or more video input devices may utilize a hardwired connection. Additionally, in some embodiments, the control component 104 can be configured to accept video imagery data from individuals or devices that are otherwise not dedicated to collecting video imagery data in a peer-to-peer model. For example, fans/attendees at a sporting event can volunteer to provide video imagery data via a mobile device that can establish communications, at least temporarily, with the control component 104. - At
block 304, the control component 104 obtains categorization data. As previously discussed, in an illustrative embodiment related to video imagery data corresponding to an athletic event, the categorization data can include an assessment of one or more attributes of a play, such as a type of play (e.g., a running play), an outcome of the play (e.g., a touchdown, a sack, a gain of yards, a loss of yards, a fumble, an interception, etc.), progress (e.g., a number of yards gained/lost), and other information relating to, or describing, aspects of the play or set of plays. In another example, the categorization data can include a reference to a designated play, identified players/athletes, designated formations and the like. Still further, the categorization data can include reference to externally provided information including timestamp information, location information, weather information (e.g., temperature, wind speed, humidity) and the like. - Illustratively, the categorization data may be obtained by a user interfacing with the
control component 104 and watching the video imagery. An illustrative screen display for collecting categorization data will be described below with regard to FIG. 4. In some embodiments, the assessment of the one or more attributes may be defined such that a user may select from a set of available attributes. In one example, a user may be presented with a set of categories, or buckets, in which the selection of the categorization data corresponds to a selection of one or more applicable categories. In another example, the user may be presented with the opportunity to designate categorization information in a more free-form manner, such as via text inputs, audio commentary and the like. In other embodiments, the assessment of the one or more attributes may be more rigidly defined such that a user must provide an answer, or otherwise select a default value, for a set of defined attributes. In another embodiment, the categorization data, or a portion thereof, may be automatically generated by the control component 104 via software processes. To the extent required, the control component 104 can also process the incoming video imagery data so that it most closely resembles increments of single plays (e.g., video editing). - At
block 306, the control component 104 associates meta-data with the obtained video imagery data. The meta-data can directly correspond to the categorization data mentioned above. For example, the categorization data may directly correspond to meta-data or be mapped to defined meta-data. Alternatively, the meta-data may be based on the categorization data and can include additional data, such as attributes of how the categorization information was obtained and external reference information (e.g., location, timestamp, etc.). Additionally, in this embodiment, the control component 104 may utilize filtering rules to determine whether one or more items of categorization data may be omitted or otherwise not identified as meta-data. For example, the control component 104 may filter weather-related data if the weather data collected for a particular video imagery segment is not substantially different from previously collected weather information. Additionally, the control component 104 can also apply rules or other business logic to the collected categorization data based on an identity/reputation of the user providing the categorization data. In this capacity, some of the categorization data may be given a weight or otherwise filtered based on previous experiences, permissions, etc. In this example, the control component 104 may be able to accept video imagery data or categorization data on a peer-to-peer basis. - At
block 308, the control component 104 can store the video imagery data and associated meta-data, such as in a data store. In another embodiment, the control component 104 can publish or transmit the video imagery data to one or more devices, such as an addressable device corresponding to a particular coach or set of coaches. As previously specified, in this embodiment, the output devices may receive the video imagery data via a wireless network connection and utilize one or more software applications for managing the processed video imagery data, such as a viewer or browser application. Additionally, in some embodiments, the publication of the video imagery data may be accomplished on a real-time or substantially real-time basis. For example, the control component 104 may transmit video imagery data in accordance with a maximum time window such that all transmissions are intended to be completed prior to the expiration of the maximum time window. In another example, the timing associated with the transmission of the video imagery data may correspond to the intended recipient. For example, an output device associated with a coach may be associated with a smaller time window for receiving transmissions compared to an output device associated with a user in the general public. Still further, in other embodiments, the transmission of the video imagery segments to an output device can include the transmission of notifications or links that facilitate access to the video imagery segments directly from a storage component or service. At block 310, the routine ends. - Turning now to
FIG. 4, an illustrative screen display 400 generated by the control component 104 will be described. One skilled in the relevant art will appreciate that the screen display 400 is illustrative in nature and that one or more screen displays could be implemented in accordance with the present application incorporating additional features or components. With reference to FIG. 4, the screen display 400 includes a first portion 402 for rendering collected video imagery data. The first portion 402 would illustratively include any one of a variety of video controls, such as play, pause, progress bars, etc. Additionally, the first portion could include further controls to modify the size of the portion relative to the screen display 400. - The
screen display 400 can also include a second portion 404 for managing a set of collected video imagery data. The second portion can include controls 406 for selecting which video imagery clip to play, identification information 408 describing the video imagery segment (e.g., the play called) and additional controls 410 for deleting, reordering or otherwise managing the set of video imagery data. Illustratively, the screen display 400 can also incorporate or adopt other file management techniques or paradigms to assist in the management of video imagery data. - With continued reference to
FIG. 4, the screen display 400 can also include a third portion 412 for collecting categorization information/data, namely, an assessment of a type of play. For example, the screen display 400 includes a portion 412A for assessing a type of offensive play 414 (e.g., a run or a pass), a portion 412B for assessing defensive formations 414 (e.g., man or zone) and a portion 412C for assessing special teams plays 414 (e.g., punt, field goal, kickoff, etc.). Illustratively, the portions 412/414 can correspond to display objects that are selectable by a user via an input device such as a touch screen, mouse, keyboard, audio controls, etc. In other embodiments, a user may interface with the control component 104 via a separate device such that the user inputs are provided to the control component 104. For example, a user may access a portion of the screen display 400 via a mobile device in communication with the control component 104 via short range wireless connections, such as the Bluetooth wireless communication protocol. Additionally, the plays 414 illustrated in FIG. 4 are illustrative and are not the only types of assessment possible. - The
screen display 400 can also include a fourth portion 416 for obtaining categorization information regarding a result of the particular video imagery segment. Examples of results, corresponding to a football event, can include, but are not limited to, touchdown, incomplete pass, fumble, sack, interception, penalty, safety, blocked kick, broken play, etc. In other embodiments, the result can be one of an interpreted positive result, negative result or neutral result (e.g., “positive yards,” “negative yards,” or “no gain”). Similarly, the results discussed are illustrative and are not the only types of results assessments possible. - The
screen display 400 can also include a fifth portion 418 for obtaining categorization information regarding an assessed result of the particular video imagery segment. With reference to FIG. 4, in an illustrative example corresponding to a football event, the result corresponds to a categorization of the specific number of yards gained or lost on a particular play. As illustrated in screen display 400, the fifth portion 418 includes controls 418A-418D that facilitate the selection of a number of yards gained or lost in a play. In some embodiments, as illustrated in FIG. 4, the spacing of the controls 418 can also correspond to an approximate measure of the yards, in which the spacing on the screen display 400 of controls 418A is closer than the spacing of controls 418B, which in turn is closer than the spacing for controls 418C and 418D. In this case, the user is provided an appearance of a net result, which may facilitate collection of the categorization data in real-time. - The
screen display 400 can further include a sixth portion 420 for controlling the publication or storage of the collected video imagery segment and the categorization information. For example, the portion 420 can include a control to publish the video segment such that the video segment is automatically sent to one or more designated individuals, such as a coach or a set of coaches. In other embodiments, a user may be prompted to identify to whom the video imagery segment may be sent. Still further, the portion 420 can also include controls for facilitating the storage or archiving of the video imagery segment. - Turning now to
FIG. 5, a flow diagram of a data request routine 500 implemented by the control component 104 will be described. Illustratively, routine 500 can be implemented to process requests for specific video imagery data. In some embodiments, however, the video imagery data may be automatically transmitted without requiring the processing of a request. Still, in other embodiments, at least some portion of routine 500 may be implemented in an output device 108 to facilitate searching for video imagery segments provided by the control component 104 or otherwise made accessible to the output device by the control component. Accordingly, while routine 500 will be described with regard to implementation by the control component 104, aspects of the routine 500 can be implemented by other components as appropriate. - At
block 502, the control component 104 obtains a request for video imagery data including selection criteria. The selection criteria can correspond to one or more items of the categorization data. As previously discussed, in an illustrative embodiment related to video imagery data corresponding to an athletic event, the categorization data can include an assessment of one or more attributes of a play, such as a type of play (e.g., a running play), an outcome of the play (e.g., a touchdown, a sack, a gain of yards, a loss of yards, a fumble, an interception, etc.), progress (e.g., a number of yards gained/lost), and other information relating to, or describing, aspects of the play or set of plays. In another example, the categorization data can include a reference to a designated play, identified players/athletes, designated formations and the like. Still further, the categorization data can include reference to externally provided information including timestamp information, location information, weather information (e.g., temperature, wind speed, humidity) and the like. At block 504, the control component 104 identifies categorization data corresponding to the data request by filtering the selection criteria, utilizing profile information or otherwise obtaining additional information. - At
block 506, the control component 104 identifies meta-data that will be used to search the obtained video imagery data. As previously discussed, the meta-data can directly correspond to the categorization data mentioned above. For example, the categorization data may directly correspond to meta-data or be mapped to defined meta-data. Alternatively, the meta-data may be based on the categorization data and can include additional data, such as attributes of how the categorization information was obtained and external reference information (e.g., location, timestamp, etc.). At block 508, the control component 104 identifies the previously stored video imagery data and associated meta-data, such as by transmitting a query to a data store. - At
block 510, the control component 104 transmits or otherwise publishes the identified video imagery data. In one example, the control component 104 can identify potentially relevant video imagery data via a traditional screen interface corresponding to a text search, Web search and the like. In another example, the control component 104 can identify potentially relevant video imagery data in a graphical form. For example, the results of the search can be encompassed in graphical objects in which the more relevant video imagery segments are represented by larger-dimension graphical objects. In another embodiment, the control component 104 can automatically begin broadcasting or transmitting the identified video imagery data. At block 512, the routine ends. - All of the processes described herein may be embodied in, and fully automated via, software code modules executed by one or more general purpose computers or processors. The code modules may be stored in any type of computer-readable medium or other computer storage device. Some or all of the methods may alternatively be embodied in specialized computer hardware. In addition, the components referred to herein may be implemented in hardware, software, firmware or a combination thereof.
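The selection-criteria matching at the heart of routine 500 (blocks 502-510) can be sketched as a filter over stored meta-data. The Python fragment below is illustrative only: the meta-data keys, the exact-match semantics and the in-memory dict standing in for the data store are all assumptions, not details from the patent.

```python
def matches(meta, criteria):
    """True when every selection criterion is satisfied by the meta-data."""
    return all(meta.get(key) == value for key, value in criteria.items())

def search_segments(store, criteria):
    """Return identifiers of stored video imagery segments whose associated
    meta-data satisfies all of the request's selection criteria."""
    return [seg_id for seg_id, meta in store.items() if matches(meta, criteria)]

# Hypothetical stored segments with meta-data derived from categorization data
store = {
    "clip-001": {"play_type": "run", "outcome": "touchdown", "yards": 12},
    "clip-002": {"play_type": "pass", "outcome": "interception", "yards": 0},
    "clip-003": {"play_type": "run", "outcome": "fumble", "yards": 3},
}

run_clips = search_segments(store, {"play_type": "run"})  # ["clip-001", "clip-003"]
```

A relevance-ranked presentation, such as the larger-dimension graphical objects mentioned above, could then be layered on top by scoring partial matches instead of requiring every criterion to hold.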
- Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, are otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
- Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein, in which elements or functions may be deleted or executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
- It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/188,891 US20190082202A1 (en) | 2014-09-05 | 2018-11-13 | Content delivery via private wireless network |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/479,033 US9497490B1 (en) | 2014-09-05 | 2014-09-05 | Content delivery via private wireless network |
US15/348,896 US20170064346A1 (en) | 2014-09-05 | 2016-11-10 | Content delivery via private wireless network |
US16/188,891 US20190082202A1 (en) | 2014-09-05 | 2018-11-13 | Content delivery via private wireless network |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/348,896 Continuation US20170064346A1 (en) | 2014-09-05 | 2016-11-10 | Content delivery via private wireless network |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190082202A1 true US20190082202A1 (en) | 2019-03-14 |
Family
ID=57235174
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/479,033 Active US9497490B1 (en) | 2014-09-05 | 2014-09-05 | Content delivery via private wireless network |
US15/348,896 Abandoned US20170064346A1 (en) | 2014-09-05 | 2016-11-10 | Content delivery via private wireless network |
US16/188,891 Abandoned US20190082202A1 (en) | 2014-09-05 | 2018-11-13 | Content delivery via private wireless network |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/479,033 Active US9497490B1 (en) | 2014-09-05 | 2014-09-05 | Content delivery via private wireless network |
US15/348,896 Abandoned US20170064346A1 (en) | 2014-09-05 | 2016-11-10 | Content delivery via private wireless network |
Country Status (1)
Country | Link |
---|---|
US (3) | US9497490B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10534812B2 (en) * | 2014-12-16 | 2020-01-14 | The Board Of Trustees Of The University Of Alabama | Systems and methods for digital asset organization |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120072845A1 (en) * | 2010-09-21 | 2012-03-22 | Avaya Inc. | System and method for classifying live media tags into types |
US20130254816A1 (en) * | 2012-03-21 | 2013-09-26 | Sony Corporation | Temporal video tagging and distribution |
US20130316837A1 (en) * | 2012-02-03 | 2013-11-28 | Charles Edward Coiner, JR. | Football play selection applications |
US20140294361A1 (en) * | 2013-04-02 | 2014-10-02 | International Business Machines Corporation | Clustering Crowdsourced Videos by Line-of-Sight |
US20150262617A1 (en) * | 2014-03-17 | 2015-09-17 | Clipcast Technologies LLC | Media clip creation and distribution systems, apparatus, and methods |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090182647A1 (en) * | 2008-01-15 | 2009-07-16 | Ebay Inc. | Systems and methods for enhancing product value metadata |
US9266017B1 (en) * | 2008-12-03 | 2016-02-23 | Electronic Arts Inc. | Virtual playbook with user controls |
US9009596B2 (en) * | 2011-11-21 | 2015-04-14 | Verizon Patent And Licensing Inc. | Methods and systems for presenting media content generated by attendees of a live event |
US9648357B2 (en) * | 2012-11-05 | 2017-05-09 | Ati Technologies Ulc | Method and device for providing a video stream for an object of interest |
US8782140B1 (en) | 2013-03-13 | 2014-07-15 | Greenfly Digital, LLC | Methods and system for distributing information via multiple forms of delivery services |
WO2014183034A1 (en) * | 2013-05-10 | 2014-11-13 | Uberfan, Llc | Event-related media management system |
US10933209B2 (en) * | 2013-11-01 | 2021-03-02 | Georama, Inc. | System to process data related to user interactions with and user feedback of a product while user finds, perceives, or uses the product |
- 2014
  - 2014-09-05 US US14/479,033 patent/US9497490B1/en active Active
- 2016
  - 2016-11-10 US US15/348,896 patent/US20170064346A1/en not_active Abandoned
- 2018
  - 2018-11-13 US US16/188,891 patent/US20190082202A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US9497490B1 (en) | 2016-11-15 |
US20170064346A1 (en) | 2017-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11882231B1 (en) | Methods and systems for processing an ephemeral content message | |
KR102045978B1 (en) | Facial authentication method, device and computer storage | |
US11349801B2 (en) | Systems and methods for providing content | |
US20160261669A1 (en) | Generating a Website to Share Aggregated Content | |
US9858484B2 (en) | Systems and methods for determining video feature descriptors based on convolutional neural networks | |
US20150356121A1 (en) | Position location-enabled, event-based, photo sharing software and service | |
US20170024094A1 (en) | Interactive audience communication for events | |
CN107465886B (en) | Mediation method, mediation apparatus and system, and recording medium | |
TWI822762B (en) | Device presentation with real-time feedback | |
CN104980339B (en) | Sharing files method and device | |
US20170147883A1 (en) | Systems and methods for defining and analyzing video clusters based on video image frames | |
US20190190970A1 (en) | Systems and methods for providing device-based feedback | |
CN115053269A (en) | Method and apparatus for facial recognition on a user device | |
US10614116B2 (en) | Systems and methods for determining and providing event media and integrated content in connection with events associated with a social networking system | |
US20190082202A1 (en) | Content delivery via private wireless network | |
KR20120015811A (en) | System and method for producing digital storytelling-content | |
US10805367B2 (en) | Systems and methods for sharing content | |
TWI522939B (en) | Events integrating method andsystem | |
CN104462996A (en) | Method and system for achieving synergic forensic analysis on remote forensic target terminal | |
US20210294846A1 (en) | Systems and methods for automatically generating stitched media content | |
US20190208002A1 (en) | Systems and methods for sharing content | |
US20190313142A1 (en) | System and Method for Video Data Manipulation | |
Alhassan et al. | A forensic evidence recovery from mobile device applications | |
US20130144959A1 (en) | Using Text Summaries of Images to Conduct Bandwidth Sensitive Status Updates | |
CN109005210A (en) | The method and apparatus of information exchange |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |