WO2016053522A1 - Systems and methods for creating and enhancing videos - Google Patents

Systems and methods for creating and enhancing videos

Info

Publication number
WO2016053522A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
source
actor
time period
frame
Application number
PCT/US2015/047251
Other languages
English (en)
Inventor
Anatole Lokshin
David J. LOKSHIN
Original Assignee
Alpinereplay, Inc
Priority claimed from U.S. application 14/583,012 (US10008237B2)
Application filed by Alpinereplay, Inc filed Critical Alpinereplay, Inc
Publication of WO2016053522A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/21805: Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034: Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85: Assembly of content; Generation of multimedia applications
    • H04N21/854: Content authoring

Definitions

  • extreme sports (also referred to as "action sports" or "adventure sports") such as snowboarding, skateboarding, free skiing, surfboarding, skydiving, wingsuit flying, bicycle motocross (BMX), and others are becoming (or are currently) mainstream sports.
  • Such sports are increasingly being covered by various media organizations and some competitions (such as the X-Games) are devoted solely to extreme sports.
  • fans of traditional and extreme sporting events often record videos of such events using video cameras, their smart phones, or other video-capturing devices. This provides a large number of possible video sources for various events, which can be further supplemented by video drones and other systems.
  • Figures 1 and 1A are flow diagrams showing exemplary processes according to various embodiments.
  • Figure 2 is a block diagram of an exemplary system according to various aspects of the present disclosure.
  • Figures 3, 4, 5, 6A, 6B, and 7 are exemplary graphs according to various aspects of the present disclosure.
  • Figures 8-12 are additional exemplary graphs according to various aspects of the present disclosure.
  • Figure 13 illustrates an example of multiple video sources capturing video of an actor according to various aspects of the present disclosure.
  • Figure 14 is a flow diagram showing an exemplary process according to various embodiments.
  • Figure 15 is a flow diagram showing another exemplary process according to various embodiments.
  • Embodiments of the present disclosure help to automatically generate video selected from multiple video sources using intelligent sensor processing, thereby providing viewers with a unique and rich viewing experience quickly and inexpensively.
  • a computer-implemented method comprises receiving, by a computer system, first video of an actor from a first source, wherein the first video is captured during a first time period; receiving, by the computer system, second video of the actor from a second source, wherein the second video is captured during a second time period, and wherein the first time period and second time period at least partially overlap; receiving, by the computer system, sensor data related to motion by the actor over one or more of the first time period and the second time period; determining, by the computer system and based on the sensor data, a plurality of motion characteristics; identifying, based on the plurality of motion characteristics, an athletic maneuver associated with the motion; and creating, by the computer system, a combined video that displays the actor performing the athletic maneuver and includes at least one frame of the first video from the first source and at least one frame of the second video from the second source.
  • the present disclosure includes various methods, apparatuses (including computer systems) that perform such methods, and computer readable media containing instructions that, when executed by computing systems, cause the computing systems to perform such methods.
  • references to "one embodiment" or "an embodiment" in the present disclosure are not necessarily references to the same embodiment; such references mean at least one.
  • FIG. 1 depicts an exemplary process according to various embodiments of the present disclosure.
  • method 100 includes receiving sensor data (105) related to motion by an actor over a period of time and determining motion characteristics based on the sensor data (110).
  • Method 100 further includes receiving input related to a definition of an athletic maneuver (115), associating one or more motion characteristics with the definition (120), and storing the athletic maneuver definition and associated motion characteristics in a database (125).
  • Method 100 further includes identifying one or more athletic maneuvers based on the determined motion characteristics (130), determining a level of similarity between the determined motion characteristics and motion characteristics associated with the identified athletic maneuver(s) (135), and generating an alert (140) in response to the similarity level being below a predetermined threshold.
  • Method 100 also includes combining or overlaying information identified and/or measured for the athletic maneuver with other media (145).
  • method 100 may be implemented (in whole or in part, and in any desired order) by software operating on a computer system, such as the exemplary computer system 200 depicted in Figure 2.
  • Embodiments of the present disclosure may receive sensor data (105) directly or indirectly from any number and type of sensors, such as an accelerometer, a gyroscope, a magnetometer, a Hall effect sensor, a global positioning system, an ultrasonic sensor, an optical sensor, a barometric sensor, and combinations thereof.
  • Information from different sensors may be used together or separately to determine various motion characteristics for the actor and/or the actor's equipment.
  • an "actor" performing an athletic maneuver may refer to any human (e.g., skier, snowboarder, cyclist, diver, athlete, etc.) as well as to sporting equipment associated with, or controlled by, a human.
  • Such sporting equipment may include, for example, a vehicle (such as a bicycle, boat, automobile, motorcycle, etc.), a skateboard, skis, a snowboard, a parachute, and other equipment or devices.
  • Hall effect sensors may be used to monitor the speed of a stunt bicyclist's wheels before, during, and after a jump, while data from one set of accelerometers and gyroscopes can monitor flips and rotations performed by the bicyclist and a second set of accelerometers and gyroscopes can monitor flips and rotations of the bicycle itself.
  • Other sensors, such as an optical sensor (e.g., a proximity sensor) or a global positioning system, can be used to help determine an actor's position with regard to the ground, a ramp, other actors, and other objects.
  • Various embodiments may utilize data from sensors embedded in or attached to an actor's clothing, skin, equipment, or surroundings (e.g., in a ramp used by an actor to perform jumps).
  • the sensor data may be received in any suitable manner, such as wirelessly from a data collection system coupled to an actor and/or an actor's equipment.
  • the time interval when a maneuver or "trick" is performed can be determined by identifying and measuring a jump, i.e., by monitoring and analyzing the characteristic signature of the gyro sensors using fuzzy logic, as described in U.S. patent application number 13/612,470, the contents of which are incorporated by reference herein.
  • Figure 7 compares signals from accelerometers and gyro sensors recorded by a device attached to a skateboard during maneuvers performed by a skateboarder. To aid in separating the signals, the accelerometer norm was shifted up by 1000 milli-g.
  • Figure 7 shows that while the accelerometer signal is very noisy due to board vibration, the gyro signal is much better correlated with the trick time period.
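  • The fuzzy-logic detector itself is specified in U.S. application 13/612,470 and is not reproduced here; as a rough illustration of why the gyro magnitude is the more usable cue, the following Python sketch (with invented threshold and duration values) finds a candidate trick window as the first sustained burst of rotation:

```python
import numpy as np

def find_trick_window(gyro, fs=100.0, threshold=2.0, min_duration=0.2):
    """Return (start_s, end_s) of the first sustained burst in gyro magnitude.

    gyro: (N, 3) array of angular rates in rad/s; fs: sample rate in Hz.
    threshold and min_duration are illustrative values, not tuned constants.
    """
    magnitude = np.linalg.norm(gyro, axis=1)   # rotation speed, robust to board vibration
    active = magnitude > threshold             # samples with strong rotation
    min_samples = int(min_duration * fs)

    start = None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                return start / fs, i / fs      # first sufficiently long burst
            start = None
    if start is not None and len(active) - start >= min_samples:
        return start / fs, len(active) / fs    # burst runs to the end of the record
    return None

# Example: a synthetic spin between 1.0 s and 1.8 s
fs = 100.0
t = np.arange(0.0, 3.0, 1.0 / fs)
gyro = np.zeros((t.size, 3))
gyro[(t > 1.0) & (t < 1.8), 2] = 7.85          # ~450 deg/s around the yaw axis
print(find_trick_window(gyro, fs))             # -> approximately (1.01, 1.8)
```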
  • Embodiments of the present disclosure can be particularly effective in identifying and characterizing athletic maneuvers for extreme sports based on objective data. Additionally, various embodiments may be used in conjunction with various other sports and activities, such as baseball, basketball, hockey, soccer, and football. For example, embodiments of the present disclosure may be configured to receive data from sensors coupled to a baseball player's bat, as well as from sensors attached to the player's uniform and/or embedded in a baseball. In one such embodiment, information regarding a player hitting a baseball, such as the angle of the player's swing, the velocity of the bat, and the force at which the baseball is hit can be provided to various users, such as spectators of a baseball game or trainers seeking to optimize the player's swing.
  • embodiments of the present disclosure can provide information to enhance a spectator's experience (e.g., displaying the force applied to a baseball from a 420-foot home run) as well as to help players, trainers, and coaches improve an athlete's performance based on objective information.
  • the motion characteristics may include any desired information regarding the motion of an actor or equipment used by an actor, such as position, velocity, acceleration, orientation, rotation, translation, deformation, and changes to and combinations thereof.
  • Embodiments of the present disclosure can determine multiple motion characteristics during a period of time that includes an athletic maneuver performed by an actor. For example, the orientation of a snowboarder executing a complex series of flips and rotations during a jump can be monitored at various points or times (e.g., each millisecond) within a time period starting prior to the jump and ending after the jump.
  • Motion characteristics may include any number of different values, descriptions, and other information characterizing the movement (or lack thereof) of an actor and/or the actor's equipment.
  • a motion characteristic associated with an athletic maneuver may include one or more of: an orientation of the actor at a time within the time period (e.g., prior to, during, or after a maneuver), an angle of rotation around a local axis of the actor, a direction of rotation of the actor around a local axis of the actor, an angle of rotation around an absolute axis perpendicular to a plane of motion of the actor, a direction of rotation around an absolute axis perpendicular to a plane of motion of the actor, and combinations thereof.
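  • For concreteness, such characteristics could be carried through a processing pipeline as a plain record; the field names below are illustrative rather than taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class MotionCharacteristics:
    """Illustrative container for the characteristics listed above."""
    orientation_before: tuple   # e.g., quaternion (w, x, y, z) prior to the maneuver
    self_rotation_deg: float    # rotation around the actor's local (body) axis
    self_rotation_dir: int      # +1 / -1, direction around the local axis
    flip_deg: float             # rotation around the absolute axis perpendicular
                                # to the actor's plane of motion
    flip_dir: int               # +1 / -1, direction around that absolute axis

example = MotionCharacteristics((1.0, 0.0, 0.0, 0.0), 363.0, +1, 0.8, +1)
print(example.self_rotation_deg)
```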
  • Motion characteristics may be determined in any suitable manner.
  • motion characteristics can be determined based on two different frames of reference: an absolute frame of reference that is independent of the actor and independent of the actor's motion, and a local frame of reference that is associated with (i.e., connected to, and moves with) the actor.
  • Embodiments of the present disclosure can characterize and measure a variety of different athletic maneuvers based on analyzing a combination of motion characteristics measured relative to the absolute and local frames of reference.
  • Embodiments of the present disclosure may also utilize information regarding the placement or location of various sensors in determining various motion characteristics. For example, determining rotation about an axis along the actor's body may rely on sensors (e.g., accelerometers and gyroscopes) being positioned collinearly with each other, so that the sensors share one of the three-dimensional axes (x, y, and z).
  • Embodiments of the present disclosure may also utilize calibration information from the sensors, such as calibration of various sensors performed during manufacturing of the sensors or installation of the sensors in an actor's clothing, equipment, etc.
  • Various embodiments of the present disclosure may also be configured to communicate with various sensors (or control systems coupled thereto) to calibrate the sensors. Such calibration may be performed in any suitable manner. In one embodiment, for example, calibration of one or more inertial sensors attached to an actor and/or the actor's equipment can be calibrated when the sensors indicate the actor and/or equipment is not moving.
  • Embodiments of the present disclosure translate sensor observations into identified "tricks" expressed in the nomenclature of the relevant sport.
  • Embodiments of the present disclosure can identify key motions for each trick, detect and measure all such motions, and then identify a trick and its quality or level of difficulty by comparing the detected motions with the nominal motions required for each trick.
  • While not exhaustive, the most common characteristics that define different tricks are: the orientation of the actor prior to the trick (forward facing, backward facing, left side facing, right side facing), the leading edge of the trick (e.g., starting from the front or back end of the user's board), rotation around an absolute axis that is horizontal and perpendicular to the direction of motion during the trick (the flip axis), and rotation around an axis associated with the body of the actor (the self-rotation axis).
  • a rotation angle around any axis does not have to be a full rotation but could be a "back and forth" swing at some particular angle (e.g., a "Shifty" trick).
  • determining the orientation of an actor (or the actor's equipment) at various points during a time period includes determining an angle of self-rotation for the actor and determining an angle of flip for the actor. For example, an actor (such as a skateboarder) performing a jump may be monitored using three gyroscopes and three accelerometers.
  • the orientation of the actor can be described by a quaternion $L(t)$ evolving as $\dot{L}(t) = \tfrac{1}{2}\,L(t)\otimes\omega(t)$, where $\dot{L}$ is the time derivative of $L(t)$ and $\omega$ is the angular velocity of the body, measured by (for example) the gyroscopic sensors.
  • the integration requires an initial condition $L(0)$, i.e., the orientation of the actor before the jump.
  • alternatively, the orientation of the actor immediately after the jump can be determined, namely $L(t_{max})$, where $t_{max}$ is a time after the actor lands.
  • the orientation of the actor can be determined in any desired manner, such as based on a direction of a velocity vector calculated from the sensor data (e.g., from a GPS sensor) and the orientation of the actor calculated from other sensor data (e.g., from a magnetometer).
  • determining $L(t_{max})$ may be preferable where a landing shock (i.e., a large acceleration after the jump) directed opposite to gravity is measurable. Given the orientation of the actor after the jump, the angle between gravity and the measured shock becomes: $\varphi = \arccos\bigl(\mathbf{g}\cdot\mathbf{a}_{shock} \,/\, (|\mathbf{g}|\,|\mathbf{a}_{shock}|)\bigr)$.
  • this means that there may be some error associated with the final quaternion $L(t_{max})$; however, this approximation may still be sufficient from a practical point of view, where the absolute orientation of the sportsman is not needed to calculate the turns performed during the jump.
  • the orientation of the sportsman can thus be determined by solving the Cauchy problem $\dot{L}(t) = \tfrac{1}{2}\,L(t)\otimes\omega(t)$ with $L(0)$ (or $L(t_{max})$) as the boundary condition.
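  • A minimal numerical sketch of that Cauchy problem, assuming gyro samples at a fixed interval (the forward-Euler integrator and all names here are illustrative choices, not prescribed by the disclosure):

```python
import numpy as np

def quat_multiply(a, b):
    """Hamilton product a ⊗ b of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_orientation(omega, dt, L0=(1.0, 0.0, 0.0, 0.0)):
    """Integrate dL/dt = 0.5 * L ⊗ (0, omega) with initial condition L(0) = L0.

    omega: (N, 3) body-frame angular rates in rad/s (gyro samples);
    returns the (N+1, 4) orientation history."""
    L = np.empty((len(omega) + 1, 4))
    L[0] = L0
    for k, w in enumerate(omega):
        dL = 0.5 * quat_multiply(L[k], np.concatenate(([0.0], w)))
        L[k + 1] = L[k] + dL * dt
        L[k + 1] /= np.linalg.norm(L[k + 1])   # re-normalize to stay a unit quaternion
    return L

# Example: constant 90 deg/s yaw for 1 s -> a 90-degree rotation about z
omega = np.tile([0.0, 0.0, np.pi / 2], (100, 1))
L = integrate_orientation(omega, dt=0.01)
print(L[-1])   # ≈ (cos 45°, 0, 0, sin 45°) = (0.707, 0, 0, 0.707)
```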
  • a local vertical axis may be determined in order to (for example) calculate an angle of self-rotation of the actor during the jump.
  • a local vertical axis may be determined in a variety of different ways, including based on an average of sensor data from one or more accelerometers prior to the athletic maneuver, an average of sensor data from one or more accelerometers after the athletic maneuver, sensor data from one or more accelerometers associated with a portion of the athletic maneuver (such as a landing performed after the maneuver), sensor data from one or more magnetometers (e.g., in conjunction with the nominal orientation of a magnetic vector relative to the vertical at the location of the event), and combinations thereof.
  • Figure 3 depicts an exemplary graph of sensor data measured from three accelerometers that shows the shock of an impact from an actor landing after a jump.
  • the horizontal graph axis depicts time in milliseconds and the vertical axis depicts acceleration in m/s².
  • the three accelerometers correspond to acceleration along the x, y, and z sensor axes.
  • the angle of self-rotation for the actor may be determined in any suitable manner, including by calculating a path for each of a plurality of unit vectors in a local frame of reference that is associated with the actor, the plurality of unit vectors being orthogonal to a local vertical vector for the actor. The angle of self-rotation for the actor may then be selected as the largest rotation angle among such unit vectors.
  • the angle of flip for the actor may be calculated in any desired manner, including by determining the motion of a vertical vector in a global frame of reference that is associated with the actor, identifying a plane of movement for the unit vertical vector, calculating a projection of the vertical vector on the plane, and selecting the angle of flip for the actor as the angle of the arc traveled by such projection on the plane.
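  • Both angle computations can be sketched compactly once an orientation history is available as unit quaternions; the helpers below (hypothetical names, simplified geometry) follow the projection-and-accumulate recipe described above:

```python
import numpy as np

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def accumulated_angle(points):
    """Sum of angles between successive vectors, in degrees."""
    total = 0.0
    for a, b in zip(points[:-1], points[1:]):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na < 1e-9 or nb < 1e-9:
            continue  # degenerate projection, no measurable arc
        total += np.arccos(np.clip(np.dot(a, b) / (na * nb), -1.0, 1.0))
    return np.degrees(total)

def self_rotation_angle(orientations, vertical=(0.0, 0.0, 1.0)):
    """Largest accumulated rotation among unit vectors orthogonal to the local vertical."""
    vertical = np.asarray(vertical)
    best = 0.0
    for u in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
        path = [rotate(q, u) for q in orientations]
        # project each rotated vector onto the plane orthogonal to the vertical
        path = [p - np.dot(p, vertical) * vertical for p in path]
        best = max(best, accumulated_angle(path))
    return best

def flip_angle(orientations, vertical=(0.0, 0.0, 1.0)):
    """Arc traveled by the vertical vector, projected on its best-fit plane of movement."""
    track = np.array([rotate(q, np.asarray(vertical)) for q in orientations])
    _, _, vt = np.linalg.svd(track - track.mean(axis=0))
    normal = vt[-1]  # plane normal = direction of least variance
    projections = track - np.outer(track @ normal, normal)
    return accumulated_angle(projections)

# Example: a full 360-degree spin about the vertical axis
angles = np.linspace(0.0, 2.0 * np.pi, 50)
spin = [np.array([np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)]) for a in angles]
print(round(self_rotation_angle(spin)))  # ~360
print(round(flip_angle(spin)))           # ~0 (the vertical axis barely moves)
```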
  • Figure 4 is an exemplary graph depicting self-rotation over time.
  • sensor data from three gyroscopic sensors corresponding to the x, y, and z axes, as in Figure 3
  • the vertical axis depicts rotation speed in rad/sec.
  • the angle of self-rotation calculated for this data is about 363 degrees, and the angle of flip is calculated at less than 1 degree.
  • Figure 5 is an exemplary graph depicting an angle of flip over time.
  • sensor data from a gyroscopic sensor that was flipped, i.e., rotated about a nearly horizontal axis
  • Figure 6A depicts the trajectory of the vector connected with the sensor that coincides with the vertical vector at the end of the rotation
  • Figure 6B depicts the projection of the trajectory on the plane of rotation (nearly round in this example).
  • the resulting angle of flip rotation is about 358 degrees and the angle of self-rotation is about 38 degrees in this example.
  • Definitions for any desired athletic maneuvers can be received (115), associated with one or more motion characteristics (120), and stored in a database for future reference and retrieval (125).
  • a definition for an athletic maneuver may apply to any movement or group of movements by an actor, the actor's equipment, or a combination thereof.
  • a back flip combined with a side flip performed by a snowboarder is often referred to as a "wildcat.”
  • a definition for a wildcat maneuver may thus include the name (or aliases) of the maneuver, a textual description of the maneuver, and the motion characteristic(s) associated with the maneuver.
  • the motion characteristics associated with the wildcat may include indicators of the different axes of rotation by the actor's body and the rotation angles around these axes.
  • Other information such as a typical range of forces exerted on the actor's snowboard or parts of the actor's body and a range of time the snowboarder is airborne may be associated with the definition to help identify future wildcat jumps by comparing measured motion characteristics to the wildcat jump definition in the database.
  • the definition for an athletic maneuver may include any number of complex movements by an actor and/or the actor's equipment.
  • sensor data from sensors attached to the actor (such as a skateboarder) can be analyzed in conjunction with sensor data from the actor's equipment (such as the actor's skateboard).
  • rotations, flips, and other movement by the actor can be analyzed together with rotations and flips of the skateboard to identify all movements performed by the actor and provide a complete and accurate characterization of the maneuver.
  • Embodiments of the present disclosure can thus be particularly effective in characterizing and identifying maneuvers that are complex and/or fast, therefore making it challenging for spectators and judges to identify all the movements involved.
  • a definition for an athletic maneuver may include, or be associated with, any other desired information. For example, statistics (including motion characteristics) for particular athletes who perform a maneuver may be linked to the definition of the maneuver in a database, allowing users of the systems and methods of the present disclosure to compare the manner in which various athletes perform the maneuver. Other information, such as video of the maneuver being performed, may likewise be included in, or linked to, the definition.
  • One or more determined motion characteristics can be compared to the motion characteristics associated with known athletic maneuvers in a database to identify the maneuver (130) associated with the determined motion characteristics.
  • the motion characteristics determined for an unknown maneuver may be compared to motion characteristics for known maneuvers in any suitable manner. For example, characteristics describing an actor's velocity, angle of self-rotation, angle of flip, and change in orientation during a time period can be compared to a relational database storing motion characteristics associated with the known maneuvers.
  • Known motion characteristics in the database may be represented in nominal values and/or in ranges, reflecting that different actors may have different physical characteristics (e.g., height, weight), may have different equipment, may perform the same maneuver somewhat differently, and/or may perform a maneuver under various other conditions (e.g., with different types of ramps).
  • the database may also specify the quality of the maneuver that is associated with different parameter values. For example, a full rotation of 360 degrees during a jump may be valued at 10 points, while a partial rotation of only 350 degrees (from jump start to landing) may be valued at only 9 points.
  • the quality of a maneuver may be determined according to any desired factors, such as the difficulty of the maneuver.
  • a level of similarity between a determined set of motion characteristics and a known nominal set of motion characteristics may be determined (135).
  • the level of similarity may be determined for each of a plurality of motion characteristics.
  • an overall level of similarity may be determined for an entire maneuver.
  • the level(s) of similarity may be compared to various threshold values, and an alert generated if one or more similarity levels fail to meet a threshold value.
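  • A toy version of this matching-and-alerting step, with invented trick definitions, nominal ranges, and similarity measure (the disclosure does not prescribe a particular metric):

```python
# Illustrative trick database: nominal ranges for two motion characteristics.
TRICKS = {
    "360 spin": {"self_rotation_deg": (330, 390), "flip_deg": (0, 30)},
    "wildcat":  {"self_rotation_deg": (0, 45),    "flip_deg": (300, 400)},
}

SIMILARITY_THRESHOLD = 0.8   # below this, generate an alert (value is illustrative)

def similarity(value, low, high):
    """1.0 inside the nominal range, decaying linearly outside it."""
    if low <= value <= high:
        return 1.0
    span = max(high - low, 1e-9)
    distance = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - distance / span)

def identify_maneuver(measured):
    """Return the closest known trick and its overall level of similarity."""
    best_name, best_score = None, 0.0
    for name, ranges in TRICKS.items():
        scores = [similarity(measured[key], *ranges[key]) for key in ranges]
        overall = sum(scores) / len(scores)          # overall level of similarity
        if overall > best_score:
            best_name, best_score = name, overall
    if best_score < SIMILARITY_THRESHOLD:
        print(f"ALERT: closest match '{best_name}' only {best_score:.2f} similar "
              "- possibly a new or misdefined maneuver")
    return best_name, best_score

print(identify_maneuver({"self_rotation_deg": 363, "flip_deg": 1}))  # ('360 spin', 1.0)
```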
  • embodiments of the present disclosure can identify new (i.e., undefined) maneuvers and help administrators and users of the systems and methods described herein to identify errors in the values or associations of stored motion characteristics and modify them appropriately.
  • Figure 1 A depicts an exemplary method for populating a database with information regarding various athletic maneuvers.
  • method 150 includes receiving sensor data (154) from a performed trick (152), and determining motion characteristics (156) and motion sequences (158) based on the received data.
  • the motion characteristics and/or motion sequences may be averaged over other nominal cases (160) of the same trick, and a set of trick characteristics stored in the database (162).
  • Embodiments of the present disclosure may combine or overlay information identified and/or measured for an athletic maneuver (145) with any desired form of media.
  • information regarding an athletic maneuver can be combined or overlaid onto video taken of the athletic maneuver, thus providing near-real-time information on a maneuver to spectators or judges.
  • Such an overlay can be synchronized in any desired manner, including by using a time measured from a global positioning system, a common communication network, or other sensor or common clock operating in conjunction with embodiments of the present disclosure.
  • Jumps, and the tricks performed during such jumps, are often important aspects of many action sports. Accordingly, the detection and measurement of such jumps is likewise an important part of any system intended to quantify action sports.
  • The typical approach many conventional systems take to detecting a jump or other athletic maneuver is to use the signal from an accelerometer. This approach assumes that during free fall the total nominal acceleration measured by an acceleration sensor is zero. In practice, however, applying this approach can be problematic.
  • FIG. 8 shows a graph of a signal from an accelerometer recorded from a snowboarder jump.
  • Graph 800 shows the norm of acceleration during a snowboarder jump.
  • the circles on the graph 800 mark the start and end of the jump.
  • Figure 9 shows a graph of the norm of acceleration during a "kickflip" skateboard maneuver, with the circles on graph 900 showing the start and end of the kickflip.
  • the measurements of a gyroscope (also referred to herein as a "gyro") are much less sensitive to vibration.
  • unlike an accelerometer, however, a gyro does not have a strong external reference signal comparable to gravity.
  • the gyro signal is relatively constant.
  • the landing performed in many athletic maneuvers is usually done on the front or back end of the sport equipment (snowboard, skateboard, motorcycle, etc.), which leads to a sharp "shock" response on the gyro sensors.
  • Figure 11 shows a plot 1100 of signals from three gyroscopes coupled to a skateboard during an "Ollie" maneuver, a maneuver where a skateboarder performs a jump with his/her skateboard.
  • the plot clearly shows a typical lift of the board (rotation around the short axis of the board) at the beginning of the jump and a typical landing "gyro shock."
  • Figure 12 shows a graph 1200 of a derivative of a gyro signal during a snowboarding jump with a double cork.
  • the start of the double cork occurs between 0.5 and 1 seconds, and the end between 2.5 and 3 seconds.
  • sensor data from one or more gyroscopes may be used in conjunction with method 100 depicted in Figure 1 and described in more detail above.
  • the receipt of sensor data related to the motion of an actor and/or the actor's equipment (105) may include sensor data from a gyroscope, and determining the plurality of motion characteristics (110) may be based on a rate of change in the data from the gyroscope.
  • the gyroscope sensor data can be used to help identify a variety of different motion characteristics and athletic maneuvers, such as the start of a jump and/or a landing.
  • Data from gyro sensors may be used to identify athletic maneuvers (130) alone or in conjunction with data from other types of sensors, such as an accelerometer.
  • a fuzzy logic analysis may be performed on the data from the gyroscope and the data from the accelerometer.
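  • The disclosure does not spell out the fuzzy rules; one minimal way to fuse the two cues, with invented membership breakpoints and the common fuzzy-AND-as-minimum convention, is:

```python
import numpy as np

def membership(value, low, high):
    """Soft ramp membership: 0 below low, 1 above high, linear between."""
    return float(np.clip((value - low) / (high - low), 0.0, 1.0))

def jump_likelihood(accel_norm_g, gyro_rate_dps):
    """Fuzzy-AND of two cues for 'airborne': a near-zero acceleration norm
    (free fall) and significant rotation. Breakpoints are illustrative."""
    free_fall = 1.0 - membership(accel_norm_g, 0.2, 0.6)   # low |a| suggests free fall
    rotating = membership(gyro_rate_dps, 90.0, 270.0)      # strong rotation suggests a trick
    return min(free_fall, rotating)                         # fuzzy AND = minimum

print(jump_likelihood(accel_norm_g=0.1, gyro_rate_dps=400.0))  # 1.0 -> likely airborne
print(jump_likelihood(accel_norm_g=1.0, gyro_rate_dps=20.0))   # 0.0 -> likely riding
```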
  • the initial conditions for the integration of this equation can be determined by averaging the acceleration and magnetic vectors before or after the jump, during some time period when the gyro signal is small. Additionally, the orientation of an actor and/or the actor's equipment during the jump may be calculated to help filter out false alarms (false positives) in jump detection. In some embodiments, for example, the orientation of an actor and/or the actor's equipment may be determined based at least partially on a direction of a magnetic vector calculated using data from a magnetometer.
  • Embodiments of the disclosure may also be configured to automatically generate and transmit reports, statistics, and/or analyses based on information related to various athletic maneuvers. These may be provided in real-time or near-real-time to judges, spectators, social media outlets, broadcasting entities, websites, and other systems and entities. The computed jump and trick parameters and characteristics can be superimposed on a time-synchronized video to enhance the viewing experience and to provide more detailed information to spectators, coaches, and judges.
  • Figure 2 is a block diagram of a system which may be used in conjunction with various embodiments. While Figure 2 illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components. Other systems that have fewer or more components may also be used.
  • the system 200 includes a computer system 210 comprising a processor 212, memory 214, and user interface 216.
  • Computer system 210 may include any number of different processors, memory components, and user interface components, and may interact with any other desired systems and devices in conjunction with embodiments of the present disclosure.
  • the system 200 may include (or interact with) one or more databases (not shown) to allow the storage and retrieval of information such as the results of a jump/trick analysis, data from sensors attached to an actor or his/her equipment, trick definitions, and other data.
  • the functionality of the computer system 210 may be implemented through the processor 212 executing computer-readable instructions stored in the memory 214 of the system 210.
  • the memory 214 may store any computer-readable instructions and data, including software applications, applets, and embedded operating code.
  • the functionality of the system 210 or other system and devices operating in conjunction with embodiments of the present disclosure may also be implemented through various hardware components storing machine-readable instructions, such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs) and/or complex programmable logic devices (CPLDs).
  • Systems according to aspects of certain embodiments may operate in conjunction with any desired combination of software and/or hardware components.
  • the processor 212 retrieves and executes instructions stored in the memory 214 to control the operation of the system 210. Any type of processor, such as an integrated circuit microprocessor, microcontroller, and/or digital signal processor (DSP), can be used in conjunction with embodiments of the present disclosure.
  • a memory 214 operating in conjunction with embodiments of the disclosure may include any combination of different memory storage devices, such as hard drives, random access memory (RAM), read only memory (ROM), FLASH memory, or any other type of volatile and/or nonvolatile memory.
  • Data (such as athletic maneuver definitions and associated motion characteristics) can be stored in the memory 214 in any desired manner, such as in a relational database.
  • the system 210 includes a user interface 216, which may include any number of input devices (not shown) to receive commands, data, and other suitable input from a user, such as input regarding the definitions of athletic maneuvers.
  • the user interface 216 may also include any number of output devices (not shown) to provide the user with data, notifications, and other information.
  • Typical I/O devices may include mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices.
  • the system 210 may communicate with one or more sensor devices 220, as well as other systems and devices in any desired manner, including via network 230.
  • the sensor devices 220 may include, or be coupled to, one or more control systems (not shown) through which the system 210 communicates with the sensors 220, or the system 210 may communicate with the sensors 220 directly.
  • the system 210 may be, include, or operate in conjunction with, a server, a laptop computer, a desktop computer, a mobile subscriber communication device, a mobile phone, a personal digital assistant (PDA), a tablet computer, an electronic book or book reader, a digital camera, a video camera, a video game console, and/or any other suitable computing device.
  • the network 230 may include any electronic communications system or method. Communication among components operating in conjunction with embodiments of the present disclosure may be performed using any suitable communication method, such as, for example, a telephone network, an extranet, an intranet, the Internet, point of interaction device (point of sale device, personal digital assistant (e.g., iPhone®, Palm Pilot®, Blackberry®), cellular phone, kiosk, etc.), online communications, satellite communications, off-line communications, wireless communications, transponder communications, local area network (LAN), wide area network (WAN), virtual private network (VPN), networked or linked devices, keyboard, mouse and/or any suitable communication or data input modality.
  • Systems and devices of the present disclosure may utilize TCP/IP communications protocols as well as IPX, Appletalk, IP-6, NetBIOS, OSI, any tunneling protocol (e.g. IPsec, SSH), or any number of existing or future protocols.
  • Embodiments of the present disclosure may be used to collect, search, and compile new videos based on video collected from different sources, such as from multiple sources collecting video of an athletic event.
  • the different sources of video (e.g., individual spectators, a stationary camera, a point-of-view (POV) camera, and/or a drone) may be in different locations relative to a sporting event or the athletic maneuvers being performed.
  • one video source may have an exceptional vantage point (e.g., of a skier's run) compared to other sources for part of an athletic maneuver, while another video source may have a better view of the maneuver at a different time.
  • the quality and/or relevance of the video collected from each source may also vary. For example, different sources may be using different equipment, may be located different distances from an athletic event, may utilize different optical zoom values, or may benefit (or suffer) from various environmental conditions (e.g., sunlight, shadows, precipitation, etc.).
  • Figure 13 illustrates an example of two video sources (cameras 1310 and 1320) providing video of an actor 1330 to the computing system 210 via network 230.
  • Computing system 210 and network 230 are described above with reference to Figure 2.
  • cameras 1310 and 1320 are in different locations, and thus have different viewing perspectives of the actor 1330 as he/she performs various athletic maneuvers.
  • Cameras 1310 and 1320 may provide pre-recorded video data to the computing system 210, or such video may be provided in real-time or near-real-time.
  • the cameras 1310 and 1320 may also provide video directly to the computing system 210 (bypassing the network) or via one or more additional computing devices (such as a laptop, personal computer, smartphone, etc.).
  • Cameras 1310 and 1320 may be (or include) a stationary camera, a camera embedded in a smartphone or other mobile device of a spectator, and/or a camera installed on a mobile drone or other mobile vehicle. Embodiments of the present disclosure may also receive video from a camera carried by the actor 1330 (e.g., a body-mounted or helmet-mounted POV camera).
  • sensor data (e.g., from sensors mounted on the actor and/or the actor's clothing or equipment) may likewise be provided to the computing system 210.
  • the actor 1330 performs some athletic maneuver (such as a ski jump or slalom) during an action time period Ta: Ta1 ≤ t ≤ Ta2.
  • the actor 1330 moves along a trajectory Sa, represented in three dimensions as: [Xa(t), Ya(t), Za(t)] and with orientation Wa represented by pitch, roll, and yaw as: [Pa(t),Ra(t),Ya(t)].
  • sensors coupled to the actor may be used to determine various motion characteristics and such characteristics used to identify an athletic maneuver.
  • the video may be captured by professional camera operators (e.g., hired to cover the event), spectators using smartphones or other handheld cameras, automated or semi-automated devices (such as mounted cameras and/or drones), as well as from other sources.
  • video capture may be performed asynchronously (i.e., one or more video sources are not capturing video at the same time). Additionally, such video may be captured without coordination between the sources (e.g., two spectators at opposite ends of a stadium capture video of the same athletic trick).
  • any of the various video sources may be stationary or moving during the video capture of the actor 1330.
  • the camera 1310 may be stationary, while camera 1320 may be moving along a trajectory Sk in three-dimensional space represented as [Xk(t), Yk(t), Zk(t)], with orientation and zoom values WZk.
  • the video captured from cameras 1310 and 1320 (and other video sources), as well as information on the actor's motion (e.g., Sa, Wa, and Ea) can be provided to computing device 210 and used to generate a combined video that automatically selects optimal video from among the different sources, thus quickly, efficiently, and cost-effectively providing a superior visual experience of an event.
  • Trajectory and sensory information (e.g., recorded by sensors attached to the actor or the actor's equipment) may also be provided to the computing device 210.
  • Figure 14 depicts an exemplary process that may be performed in accordance with various systems of the present disclosure, including the system depicted in Figure 13.
  • exemplary method 1400 includes receiving video of an actor from multiple sources (1405), receiving sensor data pertaining to one or more of the video sources (1410), determining video source characteristics (1415), and synchronizing video from multiple sources (1420).
  • the method 1400 further includes receiving sensor data pertaining to the actor being videoed (1425), determining motion characteristics for the actor based on the actor's sensor data (1430), and identifying an athletic maneuver performed by the actor (1435).
  • Video from one or more sources is combined (1440), information is overlaid on the combined video (1445), and the combined video is presented to one or more users (1450).
  • the combined video may also be tagged and stored for later retrieval (1455).
  • Video may be received (1405) from any number and type of sources.
  • video may be captured by a combination of professionally-operated cameras and cameras operated by spectators at an event.
  • Such video may be captured and delivered to a system implementing the functionality of method 1400 (such as computing device 210) in real-time or near-real time.
  • Such video may also be stored (e.g., in a database or other data store) and retrieved by the computing device 210 for processing.
  • Video from different sources may be captured at different times, and such times may overlap. For example, a first video source might capture the first minute of an actor's two-minute downhill ski run, while a second video source might capture the ski run from 30 seconds into the run until the end of the run.
  • Data from sensors associated with a video source may be received (1410) and used to determine characteristics of the video source (1415).
  • a video source may be equipped with a global positioning system (GPS) that provides information on the position of a video source (such as a digital video camera), and inertial and/or magnetic sensors that provide information on the orientation of the video source as it captures video.
  • the video source may also provide information such as its zoom factor and a time indicator, as well as information on a user or author of the video, information on the actor, information on the sporting event the actor is participating in, and the like.
  • sensor data and other information related to a video source may be provided in real-time or near-real-time, and can be stored and retrieved from a database or other data store.
  • Information regarding a video source may be used, for example, to determine: a trajectory of a moving video source, an orientation of a video source, a zoom factor of the video source (e.g., 2x, 3x, 4x, etc.), and/or a viewing direction between the video source and the actor being videoed. As described below with respect to step 1440, these characteristics can be used to select the best frames of video for the combined video.
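  • For example, the viewing direction and distance can be derived from the reported positions and camera orientation; the sketch below assumes both positions have already been converted into a common local frame (function and parameter names are hypothetical):

```python
import numpy as np

def viewing_geometry(camera_pos, camera_dir, actor_pos):
    """Distance to the actor and how far off the optical axis they sit.

    camera_pos, actor_pos: 3-D positions in a common frame (e.g., meters
    derived from GPS); camera_dir: vector along the camera's optical axis
    (e.g., from inertial/magnetic sensors). Returns (distance_m, off_axis_deg).
    """
    to_actor = np.asarray(actor_pos, float) - np.asarray(camera_pos, float)
    distance = np.linalg.norm(to_actor)
    axis = np.asarray(camera_dir, float)
    axis = axis / np.linalg.norm(axis)
    cosang = np.dot(to_actor / distance, axis)
    return distance, np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Camera 50 m east of the actor, pointed due west: on axis, 50 m away.
print(viewing_geometry((50.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
```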
  • Video from multiple sources is time-synchronized (1420).
  • the video from a source may include a set of time tags which can be used to synchronize video from two or more sources based on a time frame common to the sources.
  • the common time frame may be based, for example, on time information from a network, a global positioning satellite, or other system in communication with the two or more video sources.
  • For example, consider video from a first source that records the entire 30 seconds of a 30-second event (i.e., T1, T2, ... T30) and video from a second source that covers the event from ten seconds in until its end (i.e., T10, T11, ... T30). The video from T1 to T9 in the first video may be used as the best (i.e., only) video in a combined video showing the event, while the video frames from each of the two sources between T10 and T30 can be compared to identify the best video to include in the combined video.
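  • In code, this synchronization step reduces to mapping each source's time tags onto the common time frame; a minimal sketch of the example above (names invented):

```python
def synchronize(sources):
    """Map each common-time-frame second to the sources that cover it.

    sources: dict of name -> (start_s, end_s) in a shared clock (e.g., GPS time).
    Returns {t: [names...]} for every second covered by at least one source."""
    timeline = {}
    for name, (start, end) in sources.items():
        for t in range(start, end + 1):
            timeline.setdefault(t, []).append(name)
    return timeline

# The example above: source A covers T1-T30, source B covers T10-T30.
coverage = synchronize({"A": (1, 30), "B": (10, 30)})
print(coverage[5])    # ['A']        -> the only choice for T1-T9
print(coverage[15])   # ['A', 'B']   -> compare frame quality for T10-T30
```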
  • Method 1400 further includes receiving sensor data pertaining to an actor (1425), determining one or more motion characteristics based on the actor's sensor data (1430), and identifying an athletic maneuver based on the one or more motion characteristics (1435). These steps are described in more detail above with regard to steps 105-130 in Figure 1. Any of the sensors and motion characteristics described there may likewise be employed in conjunction with the method 1400 shown in Figure 14.
  • embodiments of the present disclosure can automatically identify an athletic maneuver based on sensor data from the actor (e.g., sensors mounted on the actor, the actor's clothing, and/or the actor's equipment) and identify the time period during which the athletic maneuver takes place. This allows the frames of video from different sources that may have captured the athletic maneuver to be identified automatically, and the best video of the maneuver to be selected from all the possible sources.
  • sensor data pertaining to an actor may include time tags, just as the video of the actor includes time tags.
  • embodiments of the disclosure may synchronize the time tags on sets of sensor data from two or more different sensors based on a common time frame, such as time information from a network or global positioning satellite. The sensor data from different sensors measuring an athletic event may thus be correlated with video of the athletic event by synchronizing the time tags on the data from each of the sensors and each of the video sources.
  • For example, suppose video sources 1310 and 1320 both capture video of actor 1330 performing a sixty-second ski run between times T1 and T60, and the actor's sensor data identifies a maneuver performed between T28 and T30.
  • the video frame sequences from video sources 1310 and 1320 between T28 and T30 can then be analyzed to identify the best video sequence to include in the combined video.
  • video of an athletic maneuver may be automatically modified (e.g., sped up or slowed down) in response to identifying a particular athletic maneuver. For example, when a particular snowboarding trick (such as a jump or a flip) is identified, the video of that trick may be slowed down to provide the viewer with a better viewing experience of an otherwise fast-moving event.
  • Other video modifications, such as overlaying computer-generated graphics or other information on the video, may be performed.
  • such graphics may include, for example, graphics showing a snowboard moving and rotating, as well as the motion of an avatar representing the actor, in order to help viewers understand an athletic maneuver performed by the actor.
  • Method 1400 further includes creating combined video (1440) using video from one or more video sources. Even in cases where a single video source is in the best position to view an athletic event and produces the highest quality video, embodiments of the present disclosure can be used to quickly analyze video from other possible sources to verify the single source is the best choice to provide the video.
  • Characteristics of each video source can be used to identify the best video sequences for a combined video.
  • characteristics of a video source, such as the source's trajectory (if moving), orientation, zoom factor, distance to the actor being videoed, viewing direction between the video source and the actor, the quality of the video source's equipment, characteristics of the video frames captured by the source (e.g., brightness, color, frame rate, signal-to-noise ratio, and/or any other video quality metric), and/or the type of athletic maneuver being performed by the actor, may be used to assign a frame quality score to one or more frames of video from the source.
  • the frame quality score can then be compared to the frame quality score of video from another source over the same time period to determine which frame(s) should be included in the combined video.
  • the frame quality score can be based on a single factor or on multiple factors, including any combination of video source characteristics.
  • the frame quality score may be assigned by a computer (e.g., according to an algorithm or script) as well as by a human operator.
  • a frame quality score can be assigned to a group of frames based on, for example, the average score for each individual frame.
  • a sequence of frames from a first source may be included in the combined video even if they are of lower quality than a sequence of frames from another source based on frames from the first source being previously or subsequently selected for the combined video (thus favoring a seamless display of video over rapid cuts between sources).
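  • One way to realize both ideas, per-frame quality scores plus a preference for seamless video over rapid cuts, is a greedy selection with a switching penalty; the scores and penalty below are invented for illustration:

```python
def select_sources(scores, switch_penalty=0.15):
    """Pick one source per time slot, favoring continuity over rapid cuts.

    scores: list of dicts, one per time slot, mapping source name -> frame
    quality score in [0, 1]. A candidate must beat the current source by
    switch_penalty before we cut away (values are illustrative)."""
    selection = []
    current = None
    for slot in scores:
        best = max(slot, key=slot.get)
        if current is None or current not in slot:
            current = best                      # no current source: take the best
        elif slot[best] > slot[current] + switch_penalty:
            current = best                      # clearly better: cut to it
        selection.append(current)
    return selection

scores = [
    {"A": 0.80, "B": 0.70},
    {"A": 0.78, "B": 0.84},   # B is better, but not by enough to justify a cut
    {"A": 0.50, "B": 0.90},   # B is clearly better: switch
]
print(select_sources(scores))   # ['A', 'A', 'B']
```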
  • embodiments of the present disclosure can use the characteristics determined for each video source to identify the video frames to include in the combined video that will provide the best overall experience for a user.
  • frames from different sources may also be displayed at the same time. For example, the combined video may include one or more frames from the first video source and one or more frames from the second video source, captured at the same synchronized time and displayed simultaneously.
  • a variety of information may be overlaid (1445) on the combined video, including information from the video source characteristics, the determined motion characteristics, sensor data from the actor and/or the video source, and other information.
  • Information may be overlaid and synchronized with the video based on time measured from a common time reference as described above.
  • videos may be captured and stored in a database or other data store for later retrieval, processing, and presentation.
  • the combined video may be presented (1450) to one or more users via a display device or in any other suitable manner.
  • the combined video may be presented in a split-screen format, whereby synchronized video from two or more sources is simultaneously displayed on the same screen.
  • the system need not select only a single set of frames for inclusion in the combined video; rather, multiple sets of frames from multiple sources can be included in the combined video.
  • the combined video may comprise a single view of the actor for a portion of the video, and display multiple views (e.g., the best 2, 3, 4, etc. frame sequences) of a particularly significant athletic event (e.g., a difficult trick performed by the actor) identified by the actor sensor data.
  • the combined video may be tagged and/or stored (1455) for later retrieval and use.
  • Video may be tagged with any number of tags containing any desired information related to the athletic maneuver, such as the name of the athlete, the name of the athletic maneuver, a rating associated with the performance of the athletic maneuver (e.g., given by judges of the maneuver), as well as various statistics of the athletic maneuver (e.g., the height of a jump, the rotation angle of a flip, and/or the maximum speed attained by an actor during a maneuver).
  • a tagged video (with or without overlay) can be stored in a database or other data store so that it can be retrieved by searching on any of the tags, such as "find all surfing video at Huntington Beach Pier with air reverse higher than 3 feet.”
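  • A minimal sketch of such tag-based retrieval, using an invented record schema and query helper (a production system would presumably use a real database as described above):

```python
# Illustrative tag records for stored videos (schema invented for this sketch).
VIDEOS = [
    {"id": 1, "sport": "surfing", "location": "Huntington Beach Pier",
     "maneuver": "air reverse", "jump_height_ft": 3.5, "athlete": "J. Doe"},
    {"id": 2, "sport": "surfing", "location": "Huntington Beach Pier",
     "maneuver": "air reverse", "jump_height_ft": 2.0, "athlete": "A. Roe"},
]

def find_videos(**criteria):
    """Return ids of videos whose tags satisfy every criterion; a key ending
    in '__gt' means 'tag value strictly greater than'."""
    results = []
    for video in VIDEOS:
        ok = True
        for key, wanted in criteria.items():
            if key.endswith("__gt"):
                ok &= video.get(key[:-4], float("-inf")) > wanted
            else:
                ok &= video.get(key) == wanted
        if ok:
            results.append(video["id"])
    return results

# "find all surfing video at Huntington Beach Pier with air reverse higher than 3 feet"
print(find_videos(sport="surfing", location="Huntington Beach Pier",
                  maneuver="air reverse", jump_height_ft__gt=3))   # -> [1]
```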
  • method 1500 depicts an exemplary process for retrieving videos and creating one or more combined videos.
  • a system receives a request to view a particular activity (1505), such as one or more athletic maneuvers performed by an actor.
  • the activity viewing request may include any desired information describing the activity, such as a date, location, time, type of athletic maneuver, name of the actor performing the activity, location where the activity takes place, and other information.
  • the system may prompt the user for additional information, such as to select from among a list of possible activities that match the user's search criteria.
  • the time and position (or ranges of each) of the requested activity are determined (1510).
  • One or more videos capturing the activity are selected (1515) and quality scores assigned (1520) thereto.
  • a combined video depicting the requested activity may be created (1525) based on the selected videos and their respective quality scores.
  • the determination and assignment of quality scores, as well as the creation of the combined video, may be performed in the same manner described above with reference to Figure 14.
  • a machine readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods.
  • the executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices.
  • the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions, or in the same communication session.
  • the data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in their entirety at a particular instance of time.
  • Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, and optical storage media (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others.
  • the computer- readable media may store the instructions.
  • hardwired circuitry may be used in combination with software instructions to implement the techniques.
  • the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
  • the various system components discussed herein may include one or more of the following: a host server or other computing systems including a processor for processing digital data; a memory coupled to the processor for storing digital data; an input digitizer coupled to the processor for inputting digital data; an application program stored in the memory and accessible by the processor for directing processing of digital data by the processor; a display device coupled to the processor and memory for displaying information derived from digital data processed by the processor; and a plurality of databases.
  • Various databases used herein may include: shipping data, package data, and/or any data useful in the operation of the system.
  • Various functionality may be performed via a web browser and/or application interfacing utilizing a web browser.
  • Such browser applications may comprise Internet browsing software installed within a computing unit or a system to perform various functions.
  • These computing units or systems may take the form of a computer or set of computers, and any type of computing device or systems may be used, including laptops, notebooks, tablets, hand-held computers, personal digital assistants, set-top boxes, workstations, computer-servers, mainframe computers, mini-computers, PC servers, network sets of computers, personal computers and tablet computers (such as iPads, iMacs, and MacBooks), kiosks, terminals, point of sale (POS) devices and/or terminals, televisions, or any other device capable of receiving data over a network.
  • Various embodiments may utilize Microsoft Internet Explorer, Mozilla Firefox, Google Chrome, Apple Safari, Opera, or any other of the myriad software packages available for browsing the internet.
  • Various embodiments may operate in conjunction with any suitable operating system (e.g., Windows NT, 95/98/2000/CE/Mobile, Windows 7/8, OS2, UNIX, Linux, Solaris, MacOS, PalmOS, etc.) as well as various conventional support software and drivers typically associated with computers.
  • Various embodiments may include any suitable personal computer, network computer, workstation, personal digital assistant, cellular phone, smart phone, minicomputer, mainframe or the like.
  • Embodiments may implement security protocols, such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Secure Shell (SSH).
  • Embodiments may implement any desired application layer protocol, including HTTP, HTTPS, FTP, and SFTP (a minimal HTTPS client sketch appears after this list).
  • the various system components may be independently, separately or collectively suitably coupled to a network via data links which include, for example, a connection to an Internet Service Provider (ISP) over the local loop as is typically used in connection with standard modem communication, cable modem, satellite networks, ISDN, Digital Subscriber Line (DSL), or various wireless communication methods. It is noted that embodiments of the present disclosure may operate in conjunction with any suitable type of network, such as an interactive television (ITV) network.
  • “Cloud” or “cloud computing” includes a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
  • Cloud computing may include location-independent computing, whereby shared servers provide resources, software, and data to computers and other devices on demand.
  • Various embodiments may be used in conjunction with web services, utility computing, pervasive and individualized computing, security and identity solutions, autonomic computing, cloud computing, commodity computing, mobility and wireless solutions, open source, biometrics, grid computing and/or mesh computing.
  • Any databases discussed herein may include relational, hierarchical, graphical, or object-oriented structure and/or any other database configurations. Moreover, the databases may be organized in any suitable manner, for example, as data tables or lookup tables. Each record may be a single file, a series of files, a linked series of data fields, or any other data structure. Association of certain data may be accomplished through any desired data association technique, such as those known or practiced in the art. For example, the association may be accomplished either manually or automatically (an illustrative table-and-join sketch appears after this list).
  • Any databases, systems, devices, servers or other components of the system may be located at a single location or at multiple locations, wherein each database or system includes any of various suitable security features, such as firewalls, access codes, encryption, decryption, compression, decompression, and/or the like.
  • Encryption may be performed by way of any of the techniques now available in the art or which may become available, e.g., Twofish, Blowfish, RSA, El Gamal, Schnorr signature, DSA, PGP, PKI, and symmetric and asymmetric cryptosystems (a symmetric-encryption sketch appears after this list).
  • Embodiments may connect to the Internet or an intranet using standard dial-up, cable, DSL or any other Internet protocol known in the art. Transactions may pass through a firewall in order to prevent unauthorized access from users of other networks.
  • the computers discussed herein may provide a suitable website or other Internet-based graphical user interface which is accessible by users.
  • the Microsoft Internet Information Server (IIS), Microsoft Transaction Server (MTS), and Microsoft SQL Server may be used in conjunction with the Microsoft operating system, Microsoft NT web server software, a Microsoft SQL Server database system, and a Microsoft Commerce Server.
  • components such as Access or Microsoft SQL Server, Oracle, Sybase, Informix, MySQL, Interbase, etc., may be used to provide an Active Data Object (ADO) compliant database management system.
  • an Apache web server can be used in conjunction with a Linux operating system, a MySQL database, and the Perl, PHP, and/or Python programming languages.
  • Any of the communications, inputs, storage, databases or displays discussed herein may be facilitated through a website having web pages.
  • the term "web page" as it is used herein is not meant to limit the type of documents and applications that might be used to interact with the user.
  • a typical website might include, in addition to standard HTML documents, various forms, Java applets, JavaScript, active server pages (ASP), Common Gateway Interface (CGI) scripts, extensible markup language (XML), dynamic HTML, cascading style sheets (CSS), AJAX (Asynchronous JavaScript and XML), helper applications, plug-ins, and the like.
  • a server may include a web service that receives a request from a web server, the request including a URL and an IP address. The web server retrieves the appropriate web pages and sends the data or applications for the web pages to the IP address (a minimal request-handler sketch appears after this list).
  • Web services are applications that are capable of interacting with other applications over a communications means, such as the Internet.
  • Various embodiments may employ any desired number of methods for displaying data within a browser-based document.
  • data may be represented as standard text or within a fixed list, scrollable list, drop-down list, editable text field, fixed text field, pop-up window, and the like.
  • embodiments may utilize any desired number of methods for modifying data in a web page such as, for example, free text entry using a keyboard, selection of menu items, check boxes, option boxes, and the like.
  • the exemplary systems and methods illustrated herein may be described in terms of functional block components, screen shots, optional selections and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions.
  • the system may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices.
  • the software elements of the system may be implemented with any programming or scripting language such as C, C++, C#, Java, JavaScript, VBScript, Macromedia Cold Fusion, COBOL, Microsoft Active Server Pages, assembly, Perl, PHP, AWK, Python, Visual Basic, SQL stored procedures, PL/SQL, any UNIX shell script, and extensible markup language (XML), with the various algorithms being implemented with any combination of data structures, objects, processes, routines, or other programming elements.
  • the system may employ any number of conventional techniques for data transmission, signaling, data processing, network control, and the like. Still further, the system could be used to detect or prevent security issues with a client-side scripting language, such as JavaScript, VBScript, or the like.
  • any portion of the system or a module may take the form of a processing apparatus executing code, an internet based embodiment, an entirely hardware embodiment, or an embodiment combining aspects of the internet, software and hardware.
  • the system may take the form of a computer program product on a computer-readable storage medium having computer-readable program code means embodied in the storage medium. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROM, optical storage devices, magnetic storage devices, and/or the like.
  • These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks.
  • the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
  • “non-transitory” is to be understood to remove only propagating transitory signals per se from the claim scope and does not relinquish rights to all standard computer-readable media that are not only propagating transitory signals per se. Stated another way, the meaning of the term “non-transitory computer-readable medium” should be construed to exclude only those types of transitory computer-readable media which were found in In re Nuijten to fall outside the scope of patentable subject matter under 35 U.S.C. § 101.
  • while the disclosure includes a method, it is contemplated that it may be embodied as computer program instructions on a tangible computer-readable carrier, such as a magnetic or optical memory or a magnetic or optical disk.
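The following is a minimal sketch of the just-in-time loading described in the list above: a module is imported only when first needed rather than being present in its entirety up front. The helper name get_codec and the use of the standard json module as a stand-in for a lazily fetched component are illustrative assumptions, not part of the disclosure.

    import importlib

    _codec = None

    def get_codec():
        """Load the module on first use rather than requiring it up front."""
        global _codec
        if _codec is None:  # fetched just in time, on the first call only
            _codec = importlib.import_module("json")  # stand-in module
        return _codec

    print(get_codec().dumps({"loaded": "just in time"}))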
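For the security-protocol and application-layer-protocol items above, this sketch issues a single HTTPS (HTTP-over-TLS) request using only the Python standard library; example.com is a placeholder host, not one named by the disclosure.

    import http.client
    import ssl

    # create_default_context() enables certificate verification and
    # reasonable TLS settings by default.
    context = ssl.create_default_context()
    conn = http.client.HTTPSConnection("example.com", 443, context=context)
    conn.request("GET", "/")
    response = conn.getresponse()
    print(response.status, response.reason)
    conn.close()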
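For the database item above, a minimal illustration of organizing records as data tables and associating them automatically, using Python's built-in sqlite3 module; the schema (actors, videos, actor_id) is hypothetical and chosen only to echo the application's subject matter.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE actors (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("""CREATE TABLE videos (
        id INTEGER PRIMARY KEY,
        actor_id INTEGER REFERENCES actors(id),  -- the data association
        path TEXT)""")
    db.execute("INSERT INTO actors VALUES (1, 'rider_01')")
    db.execute("INSERT INTO videos VALUES (1, 1, 'run1.mp4')")

    # Automatic association of actor records with their videos via a join.
    for name, path in db.execute(
            "SELECT a.name, v.path FROM actors a "
            "JOIN videos v ON v.actor_id = a.id"):
        print(name, path)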
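For the encryption item above, one symmetric-cryptosystem sketch using the third-party cryptography package (install with pip install cryptography); Fernet is one possible choice among the listed techniques, not a method mandated by the disclosure.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # secret key; stored securely in practice
    cipher = Fernet(key)
    token = cipher.encrypt(b"sensor and video metadata")
    assert cipher.decrypt(token) == b"sensor and video metadata"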
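For the web-service item above, which describes receiving a request containing a URL and an IP address and returning the corresponding page, a minimal request-handler sketch built on Python's standard http.server module; the PageHandler name, response body, and port are arbitrary illustrations.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # self.path carries the requested URL path; the requester's IP
            # address is available as self.client_address[0].
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>Requested: %s</body></html>"
                             % self.path.encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), PageHandler).serve_forever()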

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present invention help automatically generate a selected video from a plurality of video sources through intelligent sensor processing, thereby providing users with a rich and unique viewing experience quickly and economically.
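The pipeline the abstract summarizes can be sketched as follows: sensor processing yields time periods of interest, and for each period the best source video covering it is selected. This is a hypothetical Python illustration only; the names SensorEvent, SourceVideo, and select_clips are assumptions and do not come from the application.

    from dataclasses import dataclass

    @dataclass
    class SensorEvent:
        start: float  # seconds, on a clock shared with the cameras
        end: float

    @dataclass
    class SourceVideo:
        path: str
        start: float
        end: float
        quality: float  # e.g., a resolution/stability score

    def select_clips(events, sources):
        """For each sensor-detected time period, cut from the best covering source."""
        clips = []
        for ev in events:
            covering = [s for s in sources
                        if s.start <= ev.start and s.end >= ev.end]
            if not covering:
                continue  # no camera captured this maneuver
            best = max(covering, key=lambda s: s.quality)
            clips.append((best.path, ev.start - best.start, ev.end - best.start))
        return clips  # (file, in-point, out-point) triples for the cutter

    clips = select_clips([SensorEvent(12.0, 18.5)],
                         [SourceVideo("cam_a.mp4", 0.0, 60.0, 0.8),
                          SourceVideo("cam_b.mp4", 10.0, 40.0, 0.9)])
    print(clips)  # [('cam_b.mp4', 2.0, 8.5)]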
PCT/US2015/047251 2014-09-29 2015-08-27 Systems and methods for creating and enhancing videos WO2016053522A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201462056868P 2014-09-29 2014-09-29
US62/056,868 2014-09-29
US14/583,012 2014-12-24
US14/583,012 US10008237B2 (en) 2012-09-12 2014-12-24 Systems and methods for creating and enhancing videos

Publications (1)

Publication Number Publication Date
WO2016053522A1 true WO2016053522A1 (fr) 2016-04-07

Family

ID=55631236

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/047251 WO2016053522A1 (fr) 2014-09-29 2015-08-27 Systems and methods for creating and enhancing videos

Country Status (1)

Country Link
WO (1) WO2016053522A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090063097A1 (en) * 1994-11-21 2009-03-05 Vock Curtis A Pressure sensing systems for sports, and associated methods
US20140028855A1 (en) * 1999-05-11 2014-01-30 Timothy R. Pryor Camera based interaction and instruction
US20050223799A1 (en) * 2004-03-31 2005-10-13 Brian Murphy System and method for motion capture and analysis
JP2012523900A * 2009-04-16 2012-10-11 Nike International Ltd. Athletic ability evaluation system
US20140257744A1 (en) * 2012-09-12 2014-09-11 Alpinereplay, Inc. Systems and methods for synchronized display of athletic maneuvers

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11113515B2 (en) 2016-05-17 2021-09-07 Sony Corporation Information processing device and information processing method
US11457140B2 (en) 2019-03-27 2022-09-27 On Time Staffing Inc. Automatic camera angle switching in response to low noise audio to create combined audiovisual file
US10963841B2 (en) 2019-03-27 2021-03-30 On Time Staffing Inc. Employment candidate empathy scoring system
US11961044B2 (en) 2019-03-27 2024-04-16 On Time Staffing, Inc. Behavioral data analysis and scoring system
US10728443B1 (en) 2019-03-27 2020-07-28 On Time Staffing Inc. Automatic camera angle switching to create combined audiovisual file
US11863858B2 (en) 2019-03-27 2024-01-02 On Time Staffing Inc. Automatic camera angle switching in response to low noise audio to create combined audiovisual file
US11127232B2 (en) 2019-11-26 2021-09-21 On Time Staffing Inc. Multi-camera, multi-sensor panel data extraction system and method
US11783645B2 (en) 2019-11-26 2023-10-10 On Time Staffing Inc. Multi-camera, multi-sensor panel data extraction system and method
US11184578B2 (en) 2020-04-02 2021-11-23 On Time Staffing, Inc. Audio and video recording and streaming in a three-computer booth
US11636678B2 (en) 2020-04-02 2023-04-25 On Time Staffing Inc. Audio and video recording and streaming in a three-computer booth
US11861904B2 (en) 2020-04-02 2024-01-02 On Time Staffing, Inc. Automatic versioning of video presentations
US11023735B1 (en) 2020-04-02 2021-06-01 On Time Staffing, Inc. Automatic versioning of video presentations
US11720859B2 (en) 2020-09-18 2023-08-08 On Time Staffing Inc. Systems and methods for evaluating actions over a computer network and establishing live network connections
US11144882B1 (en) 2020-09-18 2021-10-12 On Time Staffing Inc. Systems and methods for evaluating actions over a computer network and establishing live network connections
US11727040B2 (en) 2021-08-06 2023-08-15 On Time Staffing, Inc. Monitoring third-party forum contributions to improve searching through time-to-live data assignments
US11966429B2 (en) 2021-08-06 2024-04-23 On Time Staffing Inc. Monitoring third-party forum contributions to improve searching through time-to-live data assignments
US11423071B1 (en) 2021-08-31 2022-08-23 On Time Staffing, Inc. Candidate data ranking method using previously selected candidate data
US11907652B2 (en) 2022-06-02 2024-02-20 On Time Staffing, Inc. User interface and systems for document creation

Similar Documents

Publication Publication Date Title
US10008237B2 (en) Systems and methods for creating and enhancing videos
US10213137B2 (en) Systems and methods for synchronized display of athletic maneuvers
US10548514B2 (en) Systems and methods for identifying and characterizing athletic maneuvers
US10408857B2 (en) Use of gyro sensors for identifying athletic maneuvers
WO2016053522A1 (fr) Systems and methods for creating and enhancing videos
JP7432499B2 (ja) Systems and methods for measuring and displaying athletic activity on a time basis
US20160225410A1 (en) Action camera content management system
US10412467B2 (en) Personalized live media content
CN112753228A (zh) Techniques for generating media content
US20170312574A1 (en) Information processing device, information processing method, and program
JP2018504802A5 (fr)
US20200164247A1 (en) Observation-based break prediction for sporting events
US20130293783A1 (en) Motion vector based comparison of moving objects
KR20210022279A (ko) 사진 및 비디오 자동 촬영 및 편집을 위한 모션 센서 기반 접근 방법 및 장치
Le Sage et al. A wireless sensor system for monitoring the performance of a swimmer’s tumble turn
Perego et al. Wearable device for swim assessment: a new ecologic approach for communication and analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 15847526; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 15847526; Country of ref document: EP; Kind code of ref document: A1