WO2015134724A1 - Method system and apparatus for team video capture - Google Patents

Method system and apparatus for team video capture

Info

Publication number
WO2015134724A1
Authority
WO
WIPO (PCT)
Prior art keywords
metadata
event
contextualized
video
game
Application number
PCT/US2015/018924
Other languages
French (fr)
Inventor
Dani Michael KERLUKE
Timothy D. Baker
Original Assignee
Double Blue Sports Analytics, Inc.
Priority claimed from PCT/US2014/067779 (WO2015081303A1)
Application filed by Double Blue Sports Analytics, Inc.
Publication of WO2015134724A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/743 Displaying an image simultaneously with additional graphical information, e.g. symbols, charts, function plots
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234318 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/454 Content or additional data filtering, e.g. blocking advertisements
    • H04N21/4545 Input to filtering algorithms, e.g. filtering a region of the image
    • H04N21/45452 Input to filtering algorithms, e.g. filtering a region of the image applied to an object-based stream, e.g. MPEG-4 streams

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides improved methods, systems, and apparatus for automated video tagging of one or more video streams from one or more video capture devices during team and individual player participation in a sporting event, training or testing activity. Gesture-based touch user input devices and contextualized metadata displays expedite real-time capture, analysis, storage, and display of player and/or team performance data. Affordable and efficient capture of tagged video using contextualized metadata significantly aids coaching staffs with actionable data for informed decisions on in-game strategy and post-game team and individual athlete development. Aggregated tagged video with event metadata and analytics, stored in and accessed from cloud-based event performance data storage systems, serves remote and portable devices used by coaches, scouts, agents, spectators, and the media to analyze, assess, and report on the performance of both current and prospective athletes.

Description

METHOD SYSTEM AND APPARATUS FOR TEAM VIDEO CAPTURE
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims benefit of priority of U.S. Provisional Patent Application 61/949,137 filed March 6, 2014, which is incorporated herein by reference.
COPYRIGHT NOTICE
[0002] A portion of the disclosure of this patent document contains material subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent document or the patent disclosure, as it appears in the United States Patent and Trademark Office patent files or records, but otherwise reserves all copyrights whatsoever, including any displays of data, arrangements and/or graphic representations of data, which may be disclosed as static or interactive user interface displays herein.
FIELD OF THE INVENTION
[0003] The present invention relates to automated video tagging during sporting events, sports training and sports evaluation and testing activities, including sporting contests, camps, clinics and practices.
[0004] More particularly, the present invention relates to improved methods, systems, and apparatus for automated video tagging of one or more video streams from one or more video capture devices during the performance of single or multiple participants in a sporting event, training or testing activity, utilizing touch-input devices to expedite the capture, analysis, storage, and display, in real time, of player and/or team performance metrics and analytics through the use of user input gestures and interactive data displays containing contextualized metadata information.
BACKGROUND OF THE INVENTION
[0005] Every sport showcases distinct movements, sequences and human behavior related to the rules, regulations and objectives of the sport. However, there are no comprehensive systems that automatically tag and capture video of an athlete or multiple athletes in a team performance while in play, in a game or during practice, drilling and skills testing, and then combine the tagged video with their performance metrics and statistics.
[0006] Athletes, whether professional, amateur or youth, desire more performance and video data about their sports performance. Assemblage of metrics and analytics on athletic performance is vital to the development of the athlete so that he or she, along with coaches and parents, can review, evaluate, and improve performance during a game, over a season, and throughout his or her sports career. These metrics are also useful to scouts, agents and the media to assess prospective athletes or analyze and report sports performance.
[0007] The speed of some sports renders accurate and comprehensive recording of athlete and team activity by manual methods nearly impossible, and manual methods provide none of the advantages of real-time in-game or immediate post-game review. Effective use of manually recorded game data typically involves hours of tedious additional post-game data input. Outside of professional and college sports, most teams and individual athletes do not have access to expensive video capture systems or the personnel to live-tag video sequences with performance data. This puts the team or athlete at the disadvantage of having to spend hours post-game gathering and editing video clips for evaluation and analysis.
[0008] It is, therefore, desirable to provide a solution that addresses the lack of automated video tagging and related performance metrics and statistics, thereby providing expedited real-time data acquisition, recording, transmitting, and processing of video data and performance metrics during sporting events, testing activities, camps, clinics and practices.
[0009] Recent advances in the technology of wearable, compact and self-contained wireless movement and position sensors, heretofore unapplied in the manner of the present invention, enable further advantages in the capture, collection, analysis, and display of athletic performance video and data. Advances in human kinetics measurement using wireless inertial measurement unit (IMU) sensors enable automatic video tagging, which, as newly provided by the present invention, allows efficient tagging of video for automated real-time data collection and immediate use in game-time coaching analysis and decision-making. Combining the movement sensor data with position sensor data further enables performance metrics and analytics by comparison to previously captured performance data or similarly acquired expert performance data, to further develop the skills of the aspiring athlete.
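By way of illustration only, and not as part of the disclosed invention, the following sketch shows one way an IMU stream could trigger an automatic video tag: a spike in acceleration magnitude above a threshold marks a candidate event on the video timeline. The sample format, field names, and the 4 g threshold are assumptions for the example, not details from this disclosure.

```python
import math

def detect_event_timestamps(imu_samples, threshold_g=4.0):
    """Scan (timestamp, ax, ay, az) IMU samples and return the timestamps
    where the acceleration magnitude exceeds the threshold -- candidate
    moments to tag automatically on the video timeline. The 4 g threshold
    is an illustrative assumption."""
    tags = []
    for t, ax, ay, az in imu_samples:
        if math.sqrt(ax * ax + ay * ay + az * az) >= threshold_g:
            tags.append(t)
    return tags
```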
[0010] In the areas of sport science and athlete analysis, there are no comprehensive systems that provide simplified and efficient capture of video sequences of sports participants' movements, shifts, skill performance, playing time, practice drills or testing activities in synchronization with the participants' accompanying real-time performance metrics, data and statistics. Current technology is limited to capturing and organizing video sequences. Additional personnel and equipment are required to input and synchronize the accompanying player and team performance metrics, statistics and data associated with video events.
[0011] Present systems are cumbersome, expensive, and difficult to use, and provide no means for real-time in-game and immediate post-game feedback to coaches, players, parents, and spectators on the participants' individual or team performance analysis, metrics, and statistics. Expensive and cumbersome systems further discourage teams and individual athletes from compiling team and personal statistics throughout the season.
[0012] It is, therefore, also desirable to provide affordable, automated methods, systems, and apparatus to tag, capture and organize video with accompanying event/metadata through gesture acquisition on a touch-screen device to simplify and expedite real-time data acquisition during games, practices, testing and evaluation, significantly aiding coaching staffs with actionable data to make informed decisions on game strategy and team and individual athlete development.
[0013] Other objectives of the present invention will be readily apparent from the summary and detailed description to follow.
BRIEF SUMMARY OF THE INVENTION
[0014] The present invention provides improved methods, systems, and apparatus for automated tagging of one or more video streams from one or more video capture devices during the performance of single or multiple participants in sporting events, training or testing activities, including sporting contests, camps, clinics and practices.
[0015] Automated video tagging as disclosed herein simplifies and expedites real-time data acquisition during games, practices, testing and evaluation, significantly aiding coaching staffs with actionable data to make informed decisions for team and individual athlete development. Improved methods also provide coaches, scouts, agents and the media means to analyze, assess, and report on the performance of current athletes. The tagged video and event/metadata and analytics may be aggregated, stored, and transmitted to a cloud-based event performance data storage system for display on personal display devices to provide in-game, post-game and season analysis to coaches, scouts, agents, spectators, and the media, to analyze, assess, and report on the performance of both current and prospective athletes.
[0016] The methods, systems, and apparatus described herein utilize touch-input devices to expedite the capture, analysis, and display, in real time, of player and/or team performance data through the use of user input gestures and interactive data displays containing contextualized metadata information. However, the present invention may be used with, or may further improve and extend, automated video tagging methods and systems that employ fully or partially automated tagging, where some or all of team and individual sports events and/or metadata are acquired automatically, with or without user input or intervention.
[0017] The present methods, systems, and apparatus capture and organize video sequences with accompanying event/metadata into contextualized containers. Video sequences and event/metadata in contextualized containers are then aggregated to display important performance metrics and analytics of the team and individual athlete for in-game, post-game and season analysis. The automatically tagged video and event/metadata and analytics are relayed to an event performance data capture system and displayed on a personal display device for production and use in coaching, training analysis, media and/or spectator applications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
[0019] FIG. 1 is a perspective view of two hockey teams taking a face-off and the apparatus to capture the event video sequence and accompanying metadata. In this embodiment the contextualized face-off information is pushed in real time to the coach on the bench for analysis.
[0020] FIG. 2 shows a block diagram of the face-off event applicable to a variety of applications with various embodiments.
[0021] FIG. 3 shows a flow chart exemplifying the process for the application to capture an event along the video timeline, embedded with metadata from the input of multiple gestures, applicable to a variety of applications with various embodiments.
[0022] FIG. 4 illustrates the method of capturing a face-off event along the video timeline and the embedded metadata acquired by multiple gesture input, applicable to a variety of applications with various embodiments.
[0023] FIG. 5 further exemplifies the method within the application to collect, organize and distribute aggregated events and metadata, filtered into contextualized data containers for in-game and post-game analysis, applicable to a variety of applications with various embodiments.
[0024] FIG. 6 further delineates the contextualized face-off events and metadata from a game, season and athlete perspective, while illustrating the functionality of interactive summarized data charts; applicable to a variety of applications with various embodiments.
[0025] FIG. 7 illustrates the method of capturing a goal event along the video timeline and the embedded metadata acquired by multiple gesture input, applicable to a variety of applications with various embodiments.
[0026] FIG. 8 further exemplifies the method within the application to collect, organize and distribute aggregated events and metadata for goals, filtered into contextualized data containers for in-game and post game analysis, applicable to a variety of applications with various embodiments.
[0027] FIG. 9 further delineates the contextualized goal events and metadata from a game, season and athlete perspective, while illustrating the functionality of interactive summarized data charts; applicable to a variety of applications with various embodiments.
[0028] FIG. 10 continues to delineate the contextualized goal events and metadata from a game, season and athlete perspective, while illustrating the functionality of interactive summarized data charts; applicable to a variety of applications with various embodiments.
DETAILED DESCRIPTION
[0029] Generally, the present invention provides methods, systems, and apparatus for tagging video sequences during a sporting event, utilizing touch input devices to capture the real-time related player and team statistics through the use of gestures and interactive data charts containing contextualized metadata and aggregated display mechanisms. With regard to the accompanying Figures, it is readily apparent that the present invention provides a method in which a mobile device can acquire, collect, store, organize, process, analyze, and export a team's and/or athlete's tagged video sequences, events, performance metrics and statistics during a game, practice session, and/or testing or evaluation setting.
[0030] The methods, systems, and apparatus described herein utilize touch-input devices to expedite the capture, analysis, and display, in real time, of player and/or team performance data through the use of user input gestures and interactive data displays containing contextualized metadata information. It should be readily apparent to one skilled in the computing art that the mobile device may be any sort of processing apparatus including, but not limited to, a touch-screen tablet, smartphone, notebook or mini-computing device, or the like, and the computer may be any sort of processing apparatus including, but not limited to, a laptop computer, desktop computer, server, or the like. It should also be readily apparent that any means for transmitting and receiving data, wired or wireless, in any suitable data transmission protocol, may be used to transmit and receive data among and between any of the devices and systems herein described, for the purposes described in the disclosed methods.
[0031] By example, FIG. 1 demonstrates a method, system and apparatus to capture, process, organize and export a tagged multiple-angle 14, 22 video sequence of one typical ice hockey event, the face-off 10, with its characteristic events and metadata, to the mobile device 12, to be displayed 22 and stored 26 on the local device. The video and data are also wirelessly sent to the cloud 16 and may be pushed remotely to a coach during a game 18, 20.
[0032] FIG. 1, FIG. 2 and FIG. 3 show the wireless video cameras 14, 22 transmitting, simultaneously, wireless video streams to the touch input device 20 via WiFi 28, Bluetooth 30, or other suitable wireless protocol. In this example of an ice hockey face-off 10, the user captures on the touch device 20 several metadata identifiers embedded in the single face-off event through multiple gesture input on the display 22.
[0033] FIG. 4 exemplifies the process of capturing metadata identifiers in the face-off event described in FIG. 1, FIG. 2 and FIG. 3 along the multiple-angle video timeline of the game. By example, a contextualized gesture-based user interface for capturing an ice hockey game 32 is displayed. A face-off event is executed 10 and captured 34. The face-off button 34 is swiped to the left, identifying that the face-off was won 36. This metadata is captured and embedded into an initial face-off event. Subsequently, the rink diagram is tapped to identify where the face-off was taken 38, and metadata is captured and embedded into the initial face-off event. After identifying the face-off location, a player headshot is tapped to identify which player took part in the face-off 40, and metadata is captured and embedded into the initial face-off event. Through this process the user has captured, organized, stored and displayed vital player and team metrics associated with a face-off event, which will provide actionable data for real-time in-game and post-game analysis.
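The gesture sequence described above maps naturally onto an event record that accumulates one metadata field per gesture. The following minimal sketch is illustrative only and not part of the disclosure; all field names and values are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FaceoffEvent:
    """Illustrative event record: each gesture embeds one more metadata field."""
    video_timestamp: float                                # position on the game video timeline (s)
    won: Optional[bool] = None                            # swipe left = won, swipe right = lost
    rink_location: Optional[Tuple[float, float]] = None   # normalized tap on the rink diagram
    player_id: Optional[str] = None                       # tap on a player headshot

# Gestures arrive in sequence; each one enriches the same initial event:
event = FaceoffEvent(video_timestamp=812.4)   # face-off button tapped
event.won = True                              # swipe left: face-off won
event.rink_location = (0.22, 0.55)            # tap identifying the face-off circle
event.player_id = "player-17"                 # tap identifying the player
```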
[0034] In the present invention, and in the particular example implementation of the invention within an ice hockey game, FIG. 5 further shows a method to capture multiple-layer, rich events utilizing touch and gesture technology. Events with embedded metadata 34, 36, 38, 40 create a mechanism to display and distribute meaningful information to the player, team and coach in real time. The method filters and organizes captured events/metadata into containers 42, 46, 50, 54 to contextualize the different layers of a hockey game and to provide essential and actionable information to the team, player and coach.
[0035] To further illustrate, FIG. 5, in the context of a face-off event 10, depicts the sequence and data 34, 36, 38, 40 as captured by the user 34 and distributed to each contextualized data container. Game container 42 and game perspective 44 provide important data relatable from the perspective of the current game, within the context of face-offs. Team container 46 filters the data to provide a perspective on the team's face-off percentages 48, which provides important actionable information for the team and coach. Overall team face-off performance and individual zone face-off performance within the current game and/or season can be displayed on the team perspective 48.
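By way of illustration only, the per-zone aggregation such a team container might perform can be sketched as follows; the zone labels and the assumed `.zone` and `.won` event attributes are examples, not details from this disclosure.

```python
from collections import defaultdict

def faceoff_percentages_by_zone(events):
    """Aggregate face-off events into win percentages per rink zone
    (e.g. 'defensive', 'neutral', 'offensive'). Each event is assumed
    to carry .zone and .won attributes; naming is illustrative."""
    won, total = defaultdict(int), defaultdict(int)
    for e in events:
        total[e.zone] += 1
        if e.won:
            won[e.zone] += 1
    return {zone: 100.0 * won[zone] / total[zone] for zone in total}
```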
[0036] Further demonstrated by FIG. 5, athlete container 50 and athlete perspective 52 provide important data relatable from the perspective of each individual athlete for the current game and season, within the context of face-offs. Athlete perspective 52 displays the current game face-off percentages of the athlete, providing the coach instant actionable information on the success of this athlete, overall and by each zone. Direct action can be taken by the coach during the game and post-game, using the athlete perspective to identify the instructional direction and/or training development necessary to improve future performance during the next practice session for this athlete.
[0037] Further shown in FIG. 5, coach container 54 and coach perspective 56 provide data relatable from the perspective of a coach for the current game and season. Coach perspective 56 provides vital actionable data and video to adjust in-game strategy and direction for development of the team and athletes during practices. The video sequences and the accompanying events/metadata are pushed to the cloud 58 for distribution to multiple devices 60 for use by coaches and athletes, local and remote, for viewing on devices including but not limited to smartphones, tablets or desktop computers.
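A minimal sketch of the push-to-cloud step follows, assuming a JSON payload and a placeholder endpoint; a production system would add authentication, retries, and handling for the video uploads themselves. Nothing here reflects an actual API of the disclosed system.

```python
import json
import urllib.request

def push_events_to_cloud(events, clip_urls, endpoint="https://example.invalid/events"):
    """POST serialized event metadata plus references to the associated video
    clips to a cloud store. The endpoint is a placeholder, not a real service."""
    payload = json.dumps({
        "events": [e.__dict__ for e in events],  # dataclass records serialize via __dict__
        "clips": clip_urls,
    }).encode("utf-8")
    request = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status                   # e.g. 200 on success
```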
[0038] FIG. 6 continues to delineate FIG. 4 and FIG. 5 within the user interface contextualizing face-off events. From the display of the game perspective 44, the user touches the face-off tag to display the corresponding video sequence 64 for review. From the team perspective 48, the display shows filtered and analyzed face-off statistics for the season in each zone 66, as well as summarized percentages in the defensive zone 68, neutral zone 70 and offensive zone 72. Touching one of the face-off circles 76 retrieves all of the corresponding video sequences 78 for review 80. From the perspective of the athlete 52, the user can select between game 82 and season 84 views to provide face-off percentages for the game 86 and season 88. Interactive metadata charts 90 and 92 display the specific percentages from each zone for a game or season.
[0039] FIG. 7 exemplifies the process described in FIG. 1, FIG. 2 and FIG. 3 for a goal event along the multiple-angle video timeline of the game. A contextualized gesture-based user interface for capturing an ice hockey game 96 is displayed. A goal event is executed and captured 98. The goal button is then swiped to the right, identifying the event as a goal against 100, and the metadata is captured and embedded into an initial goal event. Subsequently, the rink diagram is tapped to identify where the goal was scored 102, and the metadata is again captured and embedded into the initial goal event. In addition, dropping a pin automatically records whether the goal was shot from a grade A or grade B area 104. This metadata is captured and embedded into the goal event. Subsequently, a player headshot is tapped to identify which player scored the goal 106, with the metadata captured and embedded into the goal event. Through this process the user has captured, organized, stored and displayed vital player and team metrics embedded into the goal event, which will provide actionable data for in-game and post-game analysis.
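For illustration only, the pin-drop classification into a grade A or grade B shooting area might reduce to a point-in-region test; the slot rectangle below is an assumed geometry, not one specified in this disclosure.

```python
def shot_grade(x, y):
    """Classify a normalized shot origin (offensive half, 0..1 on each axis)
    as grade 'A' (high-danger slot) or grade 'B'. The slot rectangle is an
    illustrative assumption."""
    in_slot = 0.35 <= x <= 0.65 and 0.70 <= y <= 0.95  # central slot near the net
    return "A" if in_slot else "B"

# Example: a pin dropped directly in front of the net grades as 'A'.
assert shot_grade(0.5, 0.85) == "A"
```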
[0040] To further illustrate, FIG. 7 and FIG. 8, in the context of a goal event, show the sequence and data 98, 100, 102, 104, 106 captured by the user and distributed to each contextualized data container. Game container 42 and goal event input 98 provide important data relatable from the perspective of the current game, within the context of goals. Team container 46 and team perspective 108 filter the data and provide a perspective on the team's goals (for and against), which provides important actionable information to be displayed for the team and coach. Athlete container 50 and athlete perspective 110 provide important data relatable from the perspective of each individual athlete for the current game and season. Athlete perspective 110 displays the goals scored by this athlete during the current game and the season. This provides the coach instant actionable information on the production of this athlete, within the context of goals. Direct action can be taken by the coach during the game and helps to identify direction and/or specific development necessary during the next practice session for this athlete. Coach container 54 and coach perspective 112 provide important data relatable from the perspective of a coach for the current game and season. Coach perspective 112 displays vital actionable data and video to adjust in-game strategy and direction for development of the team and its athletes during practices. The video sequences and the accompanying events/metadata can be pushed to the cloud 58 for distribution to multiple devices 60 for use by coaches and athletes, local and remote, for viewing on devices including but not limited to smartphones, tablets or desktop computers.
[0041] FIG. 9 continues to delineate FIG. 7 and FIG. 8 within the interface contextualizing goal events. From the perspective of the team 108, the user is presented with summarized, interactive graphical elements to further explore the corresponding video sequences embedded with the metadata associated with goal charts 114 and 116. Goal user control 118 allows the user to toggle between the team's goals-for and goals-against statistics. Goal value display 120 shows the season total for goals for or against, while goal area total values display 122 categorizes goals from the grade A or grade B areas. Period user control 124 allows the user to add the goal locations from each period or toggle between periods of the game. Interactive element 126 allows the user, by touching the shot chart, to enable a magnifying circle that captures and displays the corresponding video sequences within the circle. When the finger is removed from the touch device, the captured video sequences are displayed 128 and 130 for review. The interactive function of goal chart 114 allows the user 132, by touching the zone goal percentages, to display the corresponding video sequences within each zone 134 and 136 for review.
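The magnifying-circle interaction described above amounts to a spatial query: retrieve every goal event whose recorded location falls inside a circle centred on the touch point, then queue its video clips for review. A minimal sketch, with assumed field names and normalized coordinates:

```python
import math

def events_within_circle(events, touch_x, touch_y, radius=0.1):
    """Return events whose .rink_location lies inside the magnifying circle
    centred on the user's touch point; their video sequences can then be
    displayed for review. Coordinates are assumed normalized to the rink."""
    selected = []
    for e in events:
        x, y = e.rink_location
        if math.hypot(x - touch_x, y - touch_y) <= radius:
            selected.append(e)
    return selected
```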
[0042] FIG. 10 continues to delineate FIG. 7 and FIG. 8 within the interface contextualizing goal events. By selection of an athlete indicator 110, the user is presented with summarized, interactive graphical elements to further explore the corresponding video sequences embedded with the contextualized metadata for the selected athlete 110. Game performance perspective 138 and season performance perspective 140 allow the user to toggle between game and season views of the summarized data. Game perspective 142 displays the athlete's goal statistics for the current game. Season perspective 144 displays the athlete's goal statistics for the season. In the game perspective 142, the athlete's captured events are displayed for review. Touching goal event 146 displays the corresponding video sequence and the location of the shot 148 on the summarized, interactive statistic chart 150. It also allows the user to toggle between ice locations for goals 152, shots 154, face-offs 156 and turnovers 158.
[0043] The present invention may be practiced for the benefit of any sporting activity where automated video tagging simplifies and expedites real-time data acquisition during games, practices, testing and evaluation, significantly aiding coaching staffs with actionable data to make informed decisions for team and individual athlete development. Improved methods, systems, and apparatus disclosed herein provide coaches, scouts, agents and sports media of all sports the means to analyze, assess, and report on the performance of current athletes. The tagged video and event/metadata and analytics may be aggregated, stored, and transmitted to a cloud-based event performance data storage system for display on personal display devices to provide in-game, post-game and season analysis to coaches, scouts, agents, spectators, and the media of all sports to analyze, assess, and report on the performance of both current and prospective athletes of that sport.
[0044] The foregoing description of automated video tagging with contextualized metadata for face-off events and goal events is by way of example only. The invention encompasses identification and characterization of all events associated with team and/or individual sports activity in the context of games, practices, training, testing or evaluation. The present invention is not limited to application to the sport of hockey; the inventive methods, systems, and apparatus and the techniques embodied therein may similarly be practiced for the benefit of teams and individual participants in the sports of soccer, field hockey, lacrosse, football, baseball, polo, and the like, as well as within individual sports such as tennis, speed and figure skating, golf, swimming, and the like.
[0045] In summary, it should be understood that the present invention is implemented with software, hardware, or a combination thereof, and thereby provides inventive methods, systems, and apparatus for expedited collection of video sequences, inventive methods and apparatus to aggregate video, data and performance metrics, and inventive methods and apparatus for compiling and organizing contextualized events with embedded metadata for analysis.
[0046] While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiments and examples herein. The invention should therefore not be limited by the above described embodiments and examples, but by all embodiments and for all application examples within the scope and spirit of the invention as claimed. The above-described embodiments of the present invention are intended to be examples only. Alterations, modifications and variations may be effected to the particular embodiments by those of skill in the art without departing from the scope of the invention, which is defined solely by the claims appended hereto.

Claims

CLAIMS
I/we claim:
1. A method for automated video tagging of one or more video streams from one or more video capture or storage devices during team and/or individual player participation in a sporting event, training or testing activity, comprising:
presenting to a user, contextualized metadata on a touch-input, gesture-based interactive display;
acquiring from the user, an indication of at least one event occurring in the one or more captured video streams;
acquiring from the user, by gesture on the touch-input, interactive display, one or more event metadata associated with each of the at least one event;
storing the one or more metadata to contextualized metadata containers; and
displaying to users on one or more interactive display devices, contextualized metadata perspective views of the acquired and the stored event metadata from the associated contextualized metadata containers.
2. A system for automated video tagging of one or more video streams acquired during team and individual player participation in a sporting event, training or testing activity, comprising:
one or more, single or multi-angle video capture devices for capturing and transmitting in real-time, or for capturing and storing for later retrieval, the one or more acquired video streams;
at least one touch-input interactive user input device for acquiring an indication of at least one event occurring in the one or more captured video streams and one or more event metadata associated with each of the at least one event;
a memory for storing the one or more metadata to contextualized metadata containers; and
one or more display devices for displaying contextualized metadata perspective views of the acquired and the stored event metadata from the associated contextualized metadata containers.
3. An apparatus for automated video tagging of one or more video streams from one or more video capture devices during team and individual player participation in a sporting event, training or testing activity, comprising:
a processor and a memory configured to:
present to a user, contextualized metadata on a touch-input, gesture-based interactive display;
acquire from the user, an indication of at least one event occurring in the one or more captured video streams;
acquire from the user, by gesture on the touch-input, interactive display, one or more event metadata associated with each of the at least one event;
store the one or more metadata to contextualized metadata containers; and
display to users on one or more interactive display devices, contextualized metadata perspective views of the acquired and the stored event metadata from the associated contextualized metadata containers.
4. A computer-readable medium, comprising:
one or more video streams from one or more video capture devices acquired during team and individual player participation in a sporting contest, training or testing activity;
contextualized metadata associated with a sporting contest, training, or testing activity occurring in the one or more captured video streams;
an indication of at least one event occurring in the one or more video streams; and
one or more event metadata associated with each of the at least one event;
wherein the one or more event metadata are associated in
contextualized metadata containers for retrieval and display in contextualized metadata perspective views.
PCT/US2015/018924 2014-03-06 2015-03-05 Method system and apparatus for team video capture WO2015134724A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201461949137P 2014-03-06 2014-03-06
US61/949,137 2014-03-06
PCT/US2014/067779 WO2015081303A1 (en) 2013-11-26 2014-11-26 Automated video tagging with aggregated performance metrics
PCT/US2014/067779 2014-11-26

Publications (1)

Publication Number Publication Date
WO2015134724A1

Family

ID=54055866

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/018924 WO2015134724A1 (en) 2014-03-06 2015-03-05 Method system and apparatus for team video capture

Country Status (1)

Country Link
WO (1) WO2015134724A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080184121A1 (en) * 2007-01-31 2008-07-31 Kulas Charles J Authoring tool for providing tags associated with items in a video playback
US20120030263A1 (en) * 2010-07-30 2012-02-02 Avaya Inc. System and method for aggregating and presenting tags

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017079241A1 (en) * 2015-11-02 2017-05-11 Vieu Labs, Inc. Improved highlight-based movie navigation, editing and sharing
US10223449B2 (en) 2016-03-15 2019-03-05 Microsoft Technology Licensing, Llc Contextual search for gaming video
CN107360050A (en) * 2016-05-10 2017-11-17 杭州海康威视数字技术股份有限公司 Video cloud memory node automatic performance method of testing and its device
CN107360050B (en) * 2016-05-10 2020-08-18 杭州海康威视数字技术股份有限公司 Automatic testing method and device for performance of video cloud storage node
US10520919B2 (en) 2017-05-01 2019-12-31 General Electric Company Systems and methods for receiving sensor data for an operating additive manufacturing machine and mapping the sensor data with process data which controls the operation of the machine

Similar Documents

Publication Publication Date Title
US11717737B2 (en) Athletic training system and method
US11887368B2 (en) Methods, systems and software programs for enhanced sports analytics and applications
US10372992B2 (en) Classification of activity derived from multiple locations
US10269390B2 (en) Game video processing systems and methods
US10121065B2 (en) Athletic attribute determinations from image data
US20160098941A1 (en) Methods and apparatus for goaltending applications including collecting performance metrics, video and sensor analysis
US9610491B2 (en) Playbook processor
WO2015081303A1 (en) Automated video tagging with aggregated performance metrics
US20150202510A1 (en) System for training sport mechanics
WO2015134724A1 (en) Method system and apparatus for team video capture
US20200188754A1 (en) System for training lacrosse mechanics using sensors
JP7300668B2 (en) Play analysis device and play analysis method
Verlin et al. PoloTrac: A Water Polo Tracking and Advanced Statistics Application.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15757859

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15757859

Country of ref document: EP

Kind code of ref document: A1