CN113599788B - System and method for monitoring athlete performance during a sporting event - Google Patents

System and method for monitoring athlete performance during a sporting event

Info

Publication number
CN113599788B
Authority
CN
China
Prior art keywords
drone
processing system
athlete
player
image
Prior art date
Legal status
Active
Application number
CN202110901640.XA
Other languages
Chinese (zh)
Other versions
CN113599788A
Inventor
J. Carter
A. W. Marty
Current Assignee
Pillar Vision Inc
Original Assignee
Pillar Vision Inc
Priority date
Filing date
Publication date
Application filed by Pillar Vision Inc filed Critical Pillar Vision Inc
Publication of CN113599788A publication Critical patent/CN113599788A/en
Application granted granted Critical
Publication of CN113599788B publication Critical patent/CN113599788B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • B64C39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0062 Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U80/00 Transport or storage specially adapted for UAVs
    • B64U80/60 Transport or storage specially adapted for UAVs by wearable objects, e.g. garments or helmets
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0094 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U10/00 Type of UAV
    • B64U10/10 Rotorcrafts
    • B64U10/13 Flying platforms
    • B64U10/14 Flying platforms with four distinct rotor axes, e.g. quadcopters
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 UAVs specially adapted for particular uses or applications
    • B64U2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30221 Sports video; Sports image
    • G06T2207/30224 Ball; Puck
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30221 Sports video; Sports image
    • G06T2207/30228 Playing field
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Abstract

A system for monitoring athlete performance during a sporting event uses a wearable drone having at least one camera or other sensor that captures or otherwise senses data. When the drone is used to monitor, for example, an object at a sporting event, the wearable drone may be detached from its user, and it may hover or otherwise fly within a particular proximity of the object to be monitored. While in flight, the sensors of the drone may be used to capture information about the object, such as performance data or images, during the sporting event.

Description

System and method for monitoring athlete performance during a sporting event
This application is a divisional application of invention patent application No. 201780024691.0, filed February 21, 2017, and entitled "System and Method for Monitoring Objects in a Sporting Event."
Cross Reference to Related Applications
This application claims priority to U.S. Provisional Application No. 62/297,528, entitled "Systems and Methods for Monitoring Objects at Sporting Events" and filed on February 19, 2016, which is incorporated herein by reference.
Technical Field
The present application relates to systems and methods for monitoring objects during sporting events or other types of events.
Background
In general, it may be desirable to monitor athletes or other objects at a sporting event in order to provide an assessment of an athlete's performance or other information about the sporting event. As an example, systems have been developed that help train an athlete to perform better or more consistently by measuring parameters indicative of the athlete's performance and providing feedback indicative of the measured parameters so that the athlete can be informed of how he or she performed during the sporting event. In addition, some systems are used to monitor sporting events in order to provide statistical or other data about the sporting event for entertainment or training purposes. As an example, a system may monitor and report the distance of a field goal kick in football, the speed of a baseball thrown by a pitcher, the speed of a sprinting player, or information indicating the trajectory of an object such as a football, baseball, basketball, golf ball, hockey puck, soccer ball, or volleyball.
Systems and methods for monitoring athletes or other objects at sporting events can be complex and expensive, requiring various types of sensors. In addition, sensors are typically mounted or positioned at predetermined locations, limiting the amount and/or type of data that can be captured by the sensors. There is a general need for efficient and inexpensive techniques for monitoring objects at sporting events and other types of events.
Disclosure of Invention
According to some embodiments, the present invention discloses a system for monitoring athlete performance during a sporting event, comprising: a drone configured to identify and track an athlete in a sporting event, the drone being configured to fly to a predetermined location to observe an athletic activity performed by the athlete in the sporting event, wherein the athlete launches or carries an object while performing the athletic activity, and wherein the drone has at least one sensor for sensing motion of the object or the athlete while the athlete launches or carries the object, and wherein the at least one sensor is configured to provide sensor data indicative of the motion; at least one processor configured to determine, based on the sensor data, a first parameter indicative of the athlete's performance in launching or carrying the object; and an output device configured to provide, based on the first parameter, feedback indicative of the athlete's performance in launching or carrying the object.
According to some embodiments, wherein the drone is a wearable drone worn by an athlete.
According to some embodiments, wherein the at least one processor resides on the drone.
According to some embodiments, wherein the object is one of the group comprising a football, a soccer ball, a baseball, a volleyball, a hockey puck, and a basketball.
According to some embodiments, wherein the predetermined position is relative to the athlete.
According to some embodiments, wherein the drone is configured to move with the athlete during the athletic activity based on the sensor data such that the drone remains substantially in the predetermined position relative to the athlete as the athlete moves.
According to some embodiments, wherein the drone is configured to move with the athlete during the athletic activity based on the sensor data.
According to some embodiments, wherein the predetermined location is behind the athlete.
According to some embodiments, wherein the at least one sensor is configured to capture images of the athlete, and wherein the drone is configured to identify the athlete based on the captured images.
According to some embodiments, wherein the drone is configured to identify the athlete based on a jersey worn by the athlete.
According to some embodiments, the drone is configured to operate in a first mode of operation for determining at least the first parameter, wherein the drone is configured to operate in a second mode of operation for determining at least a second parameter indicative of an athlete's performance in a sporting event, and wherein the drone is configured to observe a game situation of the sporting event and transition from the first mode of operation to the second mode of operation based on the game situation.
According to some embodiments, wherein the first parameter is an offensive parameter, and wherein the second parameter is a defensive parameter.
According to some embodiments, wherein the at least one processor is configured to determine a trajectory of the object based on the sensor data, and wherein the first parameter is based on the trajectory of the object.
According to some embodiments, wherein the at least one sensor is configured to capture an image, wherein the at least one processor is configured to convert the image from a viewpoint relative to a location of the at least one sensor to a viewpoint relative to a location of a head mounted display, and wherein the at least one processor is configured to send the converted image to the head mounted display.
According to some embodiments, wherein the at least one sensor is configured to capture a first image, wherein the at least one processor is configured to receive the first image and a second image captured by at least one sensor of a second drone, and wherein the at least one processor is configured to stitch the first image and the second image to form a composite image.
According to some embodiments, wherein the drone is configured to sense an event indicating that monitoring of the athletic activity with the drone will cease, and wherein the drone is configured to fly to a predetermined location in response to the sensed event.
According to some embodiments, further comprising a multi-lens camera on the athlete, the multi-lens camera having a plurality of lenses, wherein the at least one processor is configured to receive images captured by the multi-lens camera and stitch the images to provide a composite image.
According to some embodiments, wherein the composite image defines a 360 degree view around the athlete.
According to some embodiments, the present invention discloses a method comprising: flying the drone to a predetermined location to observe athletic activity performed by the athlete during the sporting event; tracking the athlete with a drone; sensing movement of an object or athlete with at least one sensor of the drone while the athlete launches or carries the object during the athletic activity; receiving sensor data indicative of motion from the at least one sensor; determining a first parameter indicative of an athlete's performance in launching or carrying an object based on the sensor data; and outputting feedback indicative of the athlete's performance while launching or carrying the object based on the first parameter.
According to some embodiments, further comprising wearing the drone by an athlete.
According to some embodiments, wherein the predetermined position is relative to the athlete.
According to some embodiments, further comprising moving the drone with the athlete during the athletic activity based on the sensor data.
According to some embodiments, further comprising: operating the drone in a first mode of operation to determine at least the first parameter; operating the drone in a second mode of operation to determine at least a second parameter indicative of the athlete's performance in the athletic activity at the sporting event; observing a game situation of the sporting event with the drone; and switching the drone from the first mode of operation to the second mode of operation based on the game situation.
According to some embodiments, wherein the first parameter is an offensive parameter, and wherein the second parameter is a defensive parameter.
According to some embodiments, further comprising: capturing an image with at least one sensor; converting the image from a viewpoint relative to a location of the at least one sensor to a viewpoint relative to a location of the head mounted display; and displaying the converted image using the head mounted display.
According to some embodiments, further comprising: capturing a first image with the at least one sensor; capturing a second image with at least one sensor of a second drone; and stitching the first image and the second image to form a composite image.
According to some embodiments, further comprising: sensing, with the drone, an event indicating that monitoring of the athletic activity is to cease; and flying the drone to a predetermined location in response to the sensed event.
According to some embodiments, further comprising: capturing a plurality of images using a multi-lens camera located on an athlete; and stitching the plurality of images to define a composite image.
Drawings
The disclosure may be better understood with reference to the following drawings. The elements of the drawings are not necessarily to scale relative to each other, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the several views, like reference numerals designate corresponding parts.
Fig. 1 is a block diagram illustrating an exemplary system for monitoring athletes or other objects at a sporting event.
Fig. 2 is a block diagram illustrating an exemplary drone, such as that shown in fig. 1.
Fig. 3 is a three-dimensional perspective view of a wearable drone that may be programmed for use with the monitoring system shown in fig. 1.
Fig. 4 is a three-dimensional perspective view of the drone depicted in fig. 3 worn on the wrist of a user.
Fig. 5 is a perspective view of a soccer player with a drone located behind the soccer player.
Figure 6 is a three-dimensional perspective view of a basketball player with the drone positioned behind the basketball player.
FIG. 7A is a three-dimensional perspective view of a golfer attempting to putt on the green of a golf course.
FIG. 7B depicts the image of the green of FIG. 7A displayed to a user, depicting an optimal trajectory of a putt on the green.
FIG. 7C depicts the image of the green of FIG. 7A displayed to a user, depicting a plurality of trajectories of a putt on the green.
Fig. 8 is a three-dimensional perspective view of the soccer player of fig. 5, the soccer player wearing at least one multi-lens camera.
FIG. 9 is a three-dimensional perspective view of the basketball player of FIG. 6 wearing at least one multi-lens camera.
FIG. 10 is a block diagram illustrating an exemplary processing system such as that shown in FIG. 1.
FIG. 11 is a block diagram illustrating an exemplary system for monitoring objects in a sports game space.
FIG. 12 is a block diagram illustrating an exemplary processing system such as that shown in FIG. 11.
FIG. 13 illustrates an exemplary sensing system, such as the sensing system depicted in FIG. 11, mounted on a pole above a basketball goal.
FIG. 14 depicts an exemplary depth map image captured by a depth sensor such as that depicted in FIG. 11.
FIG. 15 depicts an exemplary depth map image captured by a depth sensor such as that depicted in FIG. 11 after removal of depth pixels corresponding to the plane of the playing surface.
FIG. 16 depicts the exemplary depth map image of FIG. 15 after a basket template is superimposed on the image of the basket.
FIG. 17 depicts an exemplary process for calibrating a gravity-based coordinate system.
FIG. 18 depicts a sensing system, such as that depicted in FIG. 11, coupled to an aerial vehicle, such as a drone or other aircraft.
Detailed Description
The present disclosure relates generally to systems and methods for monitoring objects during a sporting event or other type of event. A system according to one embodiment of the present disclosure uses a wearable drone having at least one camera or other sensor for capturing or otherwise sensing data. When the drone is used to monitor, for example, an object at a sporting event, the wearable drone may be separated from its user, and it may hover or otherwise fly within a particular proximity of the object to be monitored. While in flight, the sensors of the drone may be used to capture information about the object, such as performance data or images, during the sporting event.
FIG. 1 depicts an exemplary system 10 for monitoring objects at a sporting event or other type of event. As shown in FIG. 1, the system 10 includes a mobile drone 15 that can be flown to a desired location in order to monitor objects. In one embodiment, the drone 15 is worn by a user, such as an athlete intending to participate in an athletic activity (e.g., a game or a training session).
In the embodiment shown in fig. 1, the drone 15 is removably coupled to the user. As an example, the drone 15 may be mounted on or otherwise coupled to a holding device 17, the holding device 17 holding the drone 15 and being attachable to the user. In this regard, the holding device 17 may comprise a wrist strap that can be worn around the user's wrist, and the drone 15 may be removably coupled to the wrist strap such that the drone 15 may be detached from the wrist strap when used for monitoring. In an alternative embodiment, the drone 15 may form a wrist strap that is removably coupled to (e.g., wrapped around) the user's wrist, and the drone 15 may be separated from the user by unlocking the wrist strap. In other embodiments, other types of straps (e.g., armbands, ankle straps, head straps, etc.) for temporarily securing the drone 15 to other body parts are possible. Additionally, the holding device 17 may include a clip or other type of attachment apparatus for attaching the device 17 to the user's clothing or a body part. In other embodiments, various other devices and techniques for temporarily securing the drone 15 to the user are possible.
Fig. 2 depicts an exemplary embodiment of the drone 15. As shown in fig. 2, the drone 15 includes control logic 22 for generally controlling the operation of the drone 15, as will be described in greater detail below. Control logic 22 may be implemented in software, hardware (e.g., logic gates), firmware, or any combination thereof. In the exemplary drone 15 shown in fig. 2, the control logic 22 is implemented in software and stored in memory 25.
Note that when implemented in software, the control logic 22 may be stored and transmitted on any computer-readable medium for use by or in connection with an instruction execution device that can fetch and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can contain or store a computer program for use by or in connection with the instruction execution apparatus.
The exemplary drone 15 depicted in fig. 2 includes at least one conventional processor 28, such as a Digital Signal Processor (DSP) or Central Processing Unit (CPU), which communicates with and drives other elements within the drone 15 through a local interface 31, the local interface 31 including at least one bus. As an example, to implement any of the data processing functions described herein with respect to the drone 15, when such control logic 22 is implemented in software, the processor 28 may be programmed with instructions of the control logic 22 for executing the instructions in accordance with techniques known in the art.
As shown in fig. 2, the drone 15 has an input interface 35 that may be used to receive input. By way of example, the input interface 35 may include a keypad, keyboard, mouse, buttons, switches, or other type of device for receiving manual input or a microphone for receiving audio input. The drone 15 also has an output interface 36 that may be used to provide output. By way of example, output interface 36 may include a display (e.g., a liquid crystal display) for displaying text, images, or other information, or a speaker for providing audio output. In some cases, a device (such as a touch-sensitive display) may be used to implement input interface 35 and output interface 36 by receiving input and providing output.
The drone 15 also includes one or more sensors 44 for capturing information of interest during the monitoring activity. As an example, a sensor 44 may be implemented as an optical sensor (e.g., a camera) for capturing an image of a scene. In one embodiment, the sensor 44 includes a two-dimensional camera for capturing two-dimensional images, and the drone 15 also has a depth sensor 47 for sensing depth (e.g., the distance from the depth sensor 47 to one or more objects). By way of example, the depth sensor 47 may capture depth images for monitoring objects, as described in more detail below and in U.S. patent application Ser. No. 14/874,555, entitled "Monitoring Objects in Athletic Playing Spaces" and filed October 5, 2015, which is incorporated herein by reference. Using depth data from such a sensor 47, a ground plane, such as a playing surface, may be located in order to determine the direction of gravity. Such a depth sensor 47 may operate using infrared radiation, as does the depth camera sold by Microsoft Corporation, although other wavelengths are possible. Many conventional depth sensors, such as that camera, typically work by projecting a pattern (e.g., dots or lines) of infrared radiation (IR) onto a scene and measuring the time it takes for the IR to be reflected back to the sensor. Variations in the profile of the reflective surface affect the time required for the energy to be reflected and/or the amount of light reflected (effectively "distorting" the pattern), thereby enabling the camera to estimate the profile based on the returns. Use of such depth sensors in conditions susceptible to large amounts of ambient light (e.g., outdoors) can be problematic because ambient light, which appears as noise to the sensor, can substantially "wash out" the IR returns, making them undetectable. However, techniques have been developed to configure the depth sensor to selectively accept returns from the area scanned by the projector in order to limit the amount of noise introduced by ambient light, thereby enabling the depth sensor to be used in conditions involving large amounts of ambient light.
In some embodiments, the sensors 44 may be implemented as proximity sensors for sensing whether an object is within a particular proximity or distance of the drone 15. Various other types of sensors may be used in other embodiments. Sensor data 49 indicative of information sensed by sensor 44 is stored in memory 25. Such sensor data 49 may be raw data captured by sensor 44 or may be processed data resulting from processing such raw data by control logic 22. Additionally, when sensor 44 is implemented as a camera, sensor data 49 may define an image captured by sensor 44. Note that the sensor 44 and depth sensor 47 need not be on the drone 15. As will be described in greater detail below, the sensor 44 or depth sensor 47 may be worn by the user, such as on a garment or head mounted display, or reside elsewhere and communicate wirelessly with the processing system 46.
The drone 15 also has a wireless communication interface 45 for allowing it to communicate wirelessly with other devices. By way of example, wireless communication interface 45 may include a Radio Frequency (RF) radio capable of transmitting and receiving wireless RF signals. As shown in fig. 1, the drone 15 wirelessly communicates with a processing system 46 using a wireless communication interface 45 to provide sensor data 49 to the processing system 46. If desired, the processing system 46 may be configured to send commands for controlling the operation of the drone 15. The processing system 46 may also be configured to process and analyze the sensor data 49 as needed. Note that the processing system 46 may be implemented in hardware or any combination of hardware, software, and/or firmware. By way of example, the processing system 46 may include one or more processors programmed with instructions to perform the data processing functions described herein for the processing system 46. In this regard, the processing system 46 may be implemented as one or more computers, such as a desktop, laptop, handheld device (e.g., a smart phone), or mainframe computer. In some embodiments, the processing system 46 may be integrated with the drone 15 or otherwise reside on the drone 15 such that wireless communication of the sensor data 49 is unnecessary, or the processing system 46 may be tethered on the drone 15, allowing the sensor data 49 to be transmitted to the processing system 46 via a physical connection (e.g., one or more lines). Additionally, the drone 15 may store the sensor data 49 during monitoring and may download or otherwise provide the sensor data 49 to the processing system 46 after monitoring. The operation of the processing system 46 will be described in more detail below.
In this regard, fig. 10 depicts an exemplary embodiment of a processing system 46. As shown in FIG. 10, the processing system 46 includes control logic 122 for generally controlling the operation of the system 46, as will be described in greater detail below. The control logic 122 may be implemented in software, hardware (e.g., logic gates), firmware, or any combination thereof. In the exemplary processing system 46 shown in FIG. 10, the control logic 122 is implemented in software and stored in the memory 125. Note that when implemented in software, the control logic 122 can be stored and transmitted on any computer-readable medium for use by or in connection with an instruction execution device that can fetch and execute instructions.
The exemplary processing system 46 depicted in fig. 10 includes at least one conventional processor 128, such as a Digital Signal Processor (DSP) or Central Processing Unit (CPU), which communicates with and drives other elements within the system 46 via a local interface 131, the local interface 131 including at least one bus. By way of example, to implement the functionality described herein for the processing system 46, when such control logic 122 is implemented in software, the processor 128 may be programmed with instructions of the control logic 122 for executing the instructions in accordance with techniques known in the art.
As shown in fig. 10, the processing system 46 has an input interface 135 that can be used to receive input. By way of example, the input interface 135 may include a keypad, keyboard, mouse, buttons, switches, or other type of device for receiving manual input or a microphone for receiving audio input. The processing system 46 also has an output interface 136 that can be used to provide an output. By way of example, the output interface 136 may include a display (e.g., a liquid crystal display) for displaying text, images, or other information, or a speaker for providing audio output. In some cases, input interface 135 and output interface 136 may be implemented using a device (such as a touch-sensitive display) by receiving input and providing output. The output interface 136 may be integrated with a component of the processing system 46. By way of example, the output interface 136 may be a display screen of a smart phone having one or more processors for performing the data processing functions of the processing system 46 described herein.
The processing system 46 also has a wireless communication interface 145 for allowing it to communicate wirelessly with other devices. By way of example, the wireless communication interface 145 may include a Radio Frequency (RF) radio capable of transmitting and receiving wireless RF signals. As shown in fig. 1, the processing system 46 wirelessly communicates with the drone 15 using the wireless communication interface 145 to receive the sensor data 49. If desired, the processing system 46 may be configured to use the wireless communication interface 145 to transmit commands for controlling the operation of the drone 15. The processing system 46 may also have a network interface 147, such as a modem, for enabling the processing system 46 to communicate with a network (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), or another type of network). As an example, the processing system 46 may communicate with other devices via the Internet or another type of network to provide access to the sensor data 49 or other information processed by the processing system 46. Furthermore, as noted above, components of the processing system 46 may reside in various locations, including on the drone 15. By way of example, the same processor may be used to execute the instructions of the control logic 22 shown in FIG. 2 and the control logic 122 shown in FIG. 10.
Note that the sensors 44 may include a location sensor, such as a Global Positioning System (GPS) sensor, for sensing the location (e.g., geographic coordinates) of the drone 15. Such position sensors may be used to assist in navigating or positioning the drone 15 as desired. As an example, the position of the drone 15 may be compared to the position of another object (e.g., an athlete) to move the drone 15 to a desired position relative to the other object. As an example, the athlete of interest may wear a position sensor that sends position coordinates to the processing system 46, and the processing system 46 may be configured to compare such position coordinates to the position coordinates received from the drone 15 in order to determine a desired position of the drone 15 (e.g., a predetermined position relative to the athlete). The processing system 46 then sends a command to the drone 15 to move it to the desired location so that the drone 15 is at the desired location relative to the athlete of interest. In other embodiments, the position sensor may be used in other ways.
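As a non-limiting illustration of this position-comparison technique, the following Python sketch (the function and parameter names are hypothetical and not part of the original disclosure, and a flat-earth approximation is assumed over the short distances involved) computes a target position a fixed standoff distance behind the athlete from reported GPS coordinates:

    import math

    def desired_drone_position(athlete_lat, athlete_lon, athlete_heading_deg,
                               standoff_m=6.0, altitude_m=6.0):
        """Compute a target position a fixed distance behind the athlete.

        Uses a flat-earth approximation, which is adequate over the short
        distances involved; athlete_heading_deg is the direction the athlete
        is facing, measured clockwise from north.
        """
        meters_per_deg_lat = 111_320.0
        meters_per_deg_lon = 111_320.0 * math.cos(math.radians(athlete_lat))

        # "Behind" the athlete is opposite the heading vector.
        back_bearing = math.radians(athlete_heading_deg + 180.0)
        d_north = standoff_m * math.cos(back_bearing)
        d_east = standoff_m * math.sin(back_bearing)

        target_lat = athlete_lat + d_north / meters_per_deg_lat
        target_lon = athlete_lon + d_east / meters_per_deg_lon
        return target_lat, target_lon, altitude_m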
The processing system 46 may be communicatively coupled to an output interface 50, such as a display device or a printer, for providing output to a user. In some embodiments, the output interface 50 is separate from the components of the processing system 46. In such examples, the output interface 50 may communicate with the processing system 46 wirelessly or via one or more physical connections. By way of example, the processing system 46 may be implemented as a laptop computer or other type of computer that wirelessly communicates with a smart phone or other type of computer having an output interface 50. Note that a network may be used for communication between the components of system 10. As an example, a network, such as a Local Area Network (LAN) or a Wide Area Network (WAN), may be used for communication between the drone 15 and the processing system 46 or between the processing system 46 and the output interface 50. In one embodiment, processing system 46 is configured to communicate with one or more output interfaces 50 using the internet and/or a cellular network, although other types of configurations are possible in other embodiments.
As shown in fig. 2, the drone 15 has a flight control system 52 for enabling the drone 15 to fly in the air. As an example, the flight control system 52 may have a controller and one or more propellers or other propulsion devices for propelling the drone 15 under the control of the flight controller, as desired. Such flight controllers may be implemented in hardware or any combination of hardware, software, and/or firmware. For example, the flight controller may include one or more processors programmed with software to implement the data processing functions of the flight controller described herein.
Flight control system 52 may include any number and type of airfoils (e.g., wings or rotors) for providing lift. By way of example, fig. 3 illustrates a conventional drone 65 that may be programmed or otherwise controlled or modified to implement a drone 15 for monitoring athletes, as described herein. The drone 65 of fig. 3 has a main body 66 with four flexible arms 69 extending from the main body 66. At the end of each arm 69 is a propeller 73 that rotates in the direction and under the control of a flight controller (not shown) housed within the body 66 in order to generate lift and propel the drone 65 in flight. The drone 65 also has a camera 77 (e.g., a video camera or other type of camera) for capturing images while the drone 65 is in flight. Note that the arms 69 are able to flex and detachably couple to each other so that the drone 65 may be worn on the user's wrist, similar to a watch, as shown in fig. 4. When monitoring is required, the arms 69 may be separated from each other so that the drone 65 may be removed from the user's wrist, and the arms 69 may then be positioned as shown in fig. 3, allowing the drone 65 to fly under the control of its flight controller. Even though the arms 69 are flexible enough to allow them to bend around the user's wrist, the arms 69 have sufficient rigidity to allow them to maintain their shape while the drone 65 is flying under the aerodynamic forces generated by the propeller 73. In some embodiments, other types of drones may be used. For example, a drone may have wheels, tracks, or other devices to enable it to move along the ground or other surface.
During operation, the drone 15 may monitor various types of parameters. For example, a user wearing the drone 15 may release it for flight before performing an athletic activity, such as a practice or a game of a sport such as basketball, football, baseball, golf, hockey, soccer, volleyball, skateboarding, or an X-games event. After being released for flight, the drone 15 may be designed to hover or otherwise fly within an area, such as within a certain distance of the user or another object and/or at a particular altitude, and capture sensor data 49 indicative of the user's performance in the athletic activity. As an example, when the user attempts a basketball shot by launching a ball toward a basketball goal, at least one sensor 44 is configured to provide sensor data 49 indicative of the shot, and the processing system 46 is configured to analyze the sensor data 49 to determine one or more metrics indicative of the quality of the basketball shot, such as the release height, release velocity, shot height, entry angle, entry velocity, shot trajectory, make/miss (i.e., whether the basketball passed through the hoop during the shot), or the ball speed or velocity at any point along the shot trajectory. Based on the sensor data 49, such as an image of the user making the basketball shot, the processing system 46 may determine the type of shot, such as whether the shot is a jump shot, a layup, or a three-point shot. Note that the processing system 46 may use the position of the shooter relative to the basketball goal at the time of the shot to determine the shot type. For example, a shot launched within a certain proximity of the goal, to the side of the rim, while the player is moving horizontally may be determined to be a layup, whereas a shot launched from more than a certain distance from the goal may be determined to be a three-point shot. Exemplary metrics that may be determined, analyzed, or otherwise processed by the processing system 46 are described in U.S. Patent No. 7,850,552, entitled "Trajectory Detection and Feedback System" and issued December 14, 2010, which is incorporated herein by reference, and U.S. Patent Application No. 12/127,744, entitled "Stereoscopic Image Capture with Performance Outcome Prediction in Sporting Environments" and filed May 27, 2008, which is incorporated herein by reference.
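As a non-limiting sketch of the shot-type determination described above (the thresholds and function name below are hypothetical and merely illustrative; an actual system would derive distances from the captured images), shot type might be classified from the shooter's distance to the goal and his horizontal speed:

    def classify_shot(shooter_to_goal_m, horizontal_speed_mps,
                      layup_radius_m=1.5, three_point_radius_m=6.75,
                      moving_threshold_mps=1.0):
        """Rough shot-type classification from shooter position and motion.

        A shot launched close to the goal while the shooter is moving
        horizontally is treated as a layup; a shot launched beyond the
        three-point distance is treated as a three-point shot; everything
        else defaults to a jump shot.
        """
        if (shooter_to_goal_m <= layup_radius_m
                and horizontal_speed_mps >= moving_threshold_mps):
            return "layup"
        if shooter_to_goal_m >= three_point_radius_m:
            return "three-point shot"
        return "jump shot"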
In some embodiments, the sensor 44 comprises a camera capable of capturing panoramic images (e.g., 360 ° view). Such cameras may be configured to capture multiple images as the camera is moved, and then stitch or otherwise combine the images together to form a panoramic image. In another embodiment, the camera has multiple lenses that receive light from multiple directions, enabling the camera to capture images from different directions simultaneously. The panoramic camera is configured to stitch the images together to form a panoramic image. As an example, a camera may have three lenses oriented at 120 ° with respect to each other to capture a 360 ° view of the surroundings of the camera.
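For illustration only, overlapping frames from such a multi-lens or moving camera could be combined with an off-the-shelf stitching pipeline; the sketch below assumes the OpenCV library, which is not required by this disclosure:

    import cv2

    def stitch_panorama(image_paths):
        """Combine overlapping frames into a single panoramic image.

        Relies on OpenCV's high-level stitching pipeline; the input frames
        must overlap enough for feature matching to succeed.
        """
        images = [cv2.imread(p) for p in image_paths]
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, panorama = stitcher.stitch(images)
        if status != cv2.Stitcher_OK:
            raise RuntimeError("stitching failed with status %d" % status)
        return panorama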
In some embodiments, the control logic 22 is configured to provide input to the flight control system 52 in order to automatically position the drone 15 in a predetermined position relative to a particular user or other object. As an example, the control logic 22 may position the drone 15 at a particular distance and/or altitude from the athlete participating in the sporting event. As the athlete moves, the control logic 22 may sense his motion by using one or more sensors 44 and then provide input to the flight control system 52 so that the drone 15 is moved to track the athlete's motion. As an example, the control logic 22 may attempt to maintain the drone at a constant position (e.g., distance and/or altitude) from the athlete as he moves. When the athlete is moving or otherwise participating in an event, information indicative of his performance and movement is captured by sensor 44 and stored as sensor data 49 in memory 25.
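One simple, non-limiting way to hold the drone 15 at a roughly constant offset from the athlete is a proportional velocity command computed from the tracked positions; the names, offset, and gains below are illustrative assumptions:

    def follow_velocity_command(athlete_pos, drone_pos,
                                offset=(-6.0, 0.0, 6.0),
                                gain=0.8, max_speed_mps=5.0):
        """Proportional velocity command that holds a fixed offset from the athlete.

        Positions are (x, y, z) tuples in meters in a shared field frame;
        the returned velocity vector would be handed to the flight control
        system (or its equivalent) each control cycle.
        """
        target = tuple(a + o for a, o in zip(athlete_pos, offset))
        error = tuple(t - d for t, d in zip(target, drone_pos))
        cmd = tuple(gain * e for e in error)
        speed = sum(c * c for c in cmd) ** 0.5
        if speed > max_speed_mps:  # clamp the command to a safe speed
            cmd = tuple(c * max_speed_mps / speed for c in cmd)
        return cmd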
The control logic 22 may be configured to distinguish the athlete of interest from other athletes at the event in order to assist the control logic 22 in tracking the athlete's movements. As an example, at least one of the sensors 44 may be configured to capture an image of the athlete's face, and the control logic 22 may be configured to employ known facial recognition algorithms to distinguish the athlete of interest from other athletes. If the athlete of interest is wearing a jersey having numbers printed on the jersey, the control logic 22 can analyze the image of the user's jersey to distinguish him from other athletes. As an example, control logic 22 may analyze a captured image of an athlete to determine a jersey color and jersey number for identifying the athlete. In this regard, in many sporting events, each possible jersey color and number combination is unique, such that any player can be identified by analyzing his jersey. In other embodiments, other techniques for identifying athletes or other interested users are possible.
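As an illustrative sketch of jersey-based identification (the team colors and roster lookup below are hypothetical), a detected player's torso pixels could be matched to the nearest known team color, and the resulting color/number pair looked up in a roster:

    import numpy as np

    TEAM_COLORS = {"red": (200, 40, 40), "white": (235, 235, 235)}  # illustrative RGB values

    def dominant_jersey_color(jersey_pixels_rgb):
        """Classify a crop of jersey pixels against the known team colors.

        jersey_pixels_rgb is an (N, 3) array of RGB samples taken from the
        torso region of a detected player; the nearest team color wins.
        """
        mean_rgb = np.asarray(jersey_pixels_rgb, dtype=float).mean(axis=0)
        return min(TEAM_COLORS,
                   key=lambda c: np.linalg.norm(mean_rgb - np.array(TEAM_COLORS[c], dtype=float)))

    def identify_player(jersey_color, jersey_number, roster):
        """Look up a player by (color, number); such combinations are typically unique."""
        return roster.get((jersey_color, jersey_number))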
In one example, the drone 15 is used to monitor a quarterback during a football game or practice. The drone 15 may be positioned at a certain location relative to the quarterback that allows for proper monitoring with very little risk of the drone 15 being struck by a player or an object during play. As an example, the drone 15 may be positioned approximately 20 feet in the air and approximately 20 feet behind the quarterback (i.e., on the side of the quarterback opposite the line of scrimmage). The drone 15 is preferably kept high enough that players cannot reach it during play. Furthermore, when the quarterback throws a pass, he usually throws forward (toward and beyond the line of scrimmage), so positioning the drone 15 behind the quarterback reduces the likelihood of the drone 15 being hit by the ball during a pass, even though the drone 15 may be at a height through which the ball could pass.
During the game, the drone 15 may be configured to capture images of the quarterback and/or other objects. As an example, the drone 15 may capture and wirelessly transmit a video feed so that it may be displayed for entertainment or other purposes. By way of example, the video feed may be included in a television or other type of video broadcast of the game. Video feeds may also be used for other purposes. As an example, a video feed may be stored and later displayed to the quarterback or a coach to assist in the quarterback's training.
In one embodiment, the processing system 46 is configured to analyze the images captured by the drone 15 and identify the quarterback within the images. The processing system 46 also identifies the football within the images based on the color and shape of the football. Based on the images, the processing system 46 is configured to determine when the quarterback throws the football. There are various techniques for determining when the football is thrown.
In this regard, when throwing a football, the quarterback is generally expected to assume a certain profile, hereinafter referred to as the "throwing profile," in which his forearm and hand are above his shoulder while holding the football. When his forearm and hand are in such a profile and the quarterback releases the ball, the football separates from the quarterback's hand. The processing system 46 may be configured to detect a throw when (1) the quarterback's forearm and hand are in the expected throwing profile and (2) the football separates from the quarterback's hand. Note that separation of the football from the quarterback when his forearm and hand are not in the throwing profile may indicate the occurrence of another event, such as a handoff, pitch, or fumble.
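A non-limiting sketch of this two-condition throw detection is shown below; the per-frame boolean inputs are assumed to come from upstream pose-estimation and ball-tracking steps that are not detailed here:

    def detect_throw(profile_flags, ball_attached_flags, min_profile_frames=3):
        """Return the frame index at which a throw is detected, or None.

        profile_flags[i] is True when the quarterback's forearm/hand pose in
        frame i matches the expected throwing profile; ball_attached_flags[i]
        is True while the ball is still in his hand.  A throw is the first
        frame where the ball separates after the profile has been held for a
        few frames; separation without the profile would instead suggest a
        handoff, pitch, or fumble.
        """
        profile_run, was_attached = 0, False
        for frame, (in_profile, attached) in enumerate(zip(profile_flags, ball_attached_flags)):
            profile_run = profile_run + 1 if in_profile else 0
            if was_attached and not attached and profile_run >= min_profile_frames:
                return frame
            was_attached = attached
        return None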
When a pass is detected, the processing system 46 is configured to track the trajectory of the pass and calculate various trajectory parameters indicative of the quality of the pass. By way of example, the processing system 46 may calculate a release height, a release angle, a velocity, a spin rate, a maximum pass height, a pass distance, or other parameters that may be of interest. Note that various parameters, such as release height, pass height, and pass distance, may be determined by using the depth sensor 47, which is capable of measuring the depth of the ball (relative to the sensor) and comparing such depth to the depth of a ground plane (e.g., the playing surface) or another object within the depth image. Furthermore, the speed or velocity may be calculated by estimating the distance the football travels between trajectory points and measuring the time required for the football to travel between those points. The distance divided by the time yields the speed of the ball.
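As a minimal illustration of the distance-over-time calculation described above (the point format is an assumption), the average ball speed between two trajectory samples could be computed as follows:

    import math

    def segment_speed(p0, p1, t0, t1):
        """Average ball speed between two trajectory samples.

        p0 and p1 are (x, y, z) positions in meters and t0, t1 are their
        timestamps in seconds; speed is simply distance divided by time.
        """
        return math.dist(p0, p1) / (t1 - t0)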
In one embodiment, the position of the football at various trajectory points (e.g., coordinates in free space) is determined, and these points are used to estimate a trajectory curve representing the trajectory of the ball during a pass. In this regard, once the ball is released, gravity is typically the dominant force acting on the ball during flight, and if the direction of gravity is known, various parameters, such as velocity at any point along the trajectory, may be calculated using the estimated trajectory profile. In one embodiment, the processing system 46 determines the direction of gravity using techniques similar to those described below and in U.S. patent application Ser. No. 14/874,555, and converts the coordinates provided by the depth sensor 47 into a gravity-based coordinate system so that the direction of gravity is known relative to the trajectory of the ball. U.S. Pat. No. 7,850,552 and U.S. patent application Ser. No. 12/127,744 describe various trajectory parameters and techniques for determining trajectory parameters.
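For illustration, once the trajectory points are expressed in a gravity-based coordinate system, the trajectory curve could be estimated by fitting the horizontal coordinates linearly and the vertical coordinate quadratically in time; the sketch below (using NumPy, which is an assumption) then yields velocity at any point by differentiation:

    import numpy as np

    def fit_trajectory(times, positions):
        """Fit a trajectory curve in a gravity-aligned coordinate frame.

        positions is an (N, 3) array with the z axis pointing opposite
        gravity; x and y are fit linearly in time (roughly constant
        velocity) and z quadratically (gravity).  Velocity at any time is
        then obtained by differentiating the fitted polynomials.
        """
        t = np.asarray(times, dtype=float)
        p = np.asarray(positions, dtype=float)
        coeffs = [np.polyfit(t, p[:, 0], 1),   # x(t)
                  np.polyfit(t, p[:, 1], 1),   # y(t)
                  np.polyfit(t, p[:, 2], 2)]   # z(t), quadratic due to gravity

        def velocity(at_time):
            return np.array([np.polyval(np.polyder(c), at_time) for c in coeffs])

        return coeffs, velocity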
It should be noted that various parameters may indicate the outcome of a pass and may depend on the action or position of another player, such as a receiver attempting to catch the pass. In this regard, the images captured by the drone 15 may include the receiver, as well as one or more defensive players attempting to defend the pass. The processing system 46 may identify the receiver attempting to catch the pass, hereinafter referred to as the "target receiver," using various techniques. For example, by tracking the football, the processing system 46 may determine a trajectory position (hereinafter the "trajectory end point") at which the path or speed of the football along its trajectory is substantially interrupted, indicating that the football hit an object, such as the ground or a player in the game or practice. The player wearing a particular jersey color who is closest to the trajectory end point may be identified as the target receiver.
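A non-limiting sketch of the target-receiver selection follows; the player-record format and field names are hypothetical:

    import math

    def find_target_receiver(trajectory_end_xy, players, offense_color):
        """Pick the offensive player closest to where the ball's flight ended.

        players is a list of dicts such as {"id": 88, "color": "red",
        "pos": (x, y)}; the nearest player wearing the offense's jersey
        color is taken to be the target receiver.
        """
        candidates = [p for p in players if p["color"] == offense_color]
        if not candidates:
            return None
        return min(candidates, key=lambda p: math.dist(p["pos"], trajectory_end_xy))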
By analyzing the images captured after the football reaches the trajectory end point, the processing system 46 may determine whether the target receiver caught the football (i.e., whether the pass was completed). As an example, if the football appears to remain in the target receiver's hands for at least a threshold amount of time based on the video images captured by the drone or another camera, the processing system 46 may determine that the football was caught. On the other hand, if the video images show that the football remains in the hands of a defensive player wearing a different jersey color for at least a threshold amount of time, the processing system 46 may determine that the pass was intercepted. If it is determined that the football hit the ground before a completion or interception occurred, the processing system 46 may determine that the pass was incomplete. In other embodiments, other techniques may be used to determine the resulting status of a pass (e.g., complete, intercepted, or incomplete). Note that sensors (e.g., cameras) in addition to or in place of the sensors on the drone 15 may be used to provide information to the processing system 46.
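The completion/interception/incomplete logic above could be sketched as follows, assuming a per-frame possession label has already been extracted from the images (the input format and thresholds are illustrative):

    def classify_pass(possession_by_frame, fps, offense_color,
                      hold_threshold_s=0.5, hit_ground_first=False):
        """Classify a pass as "complete", "intercepted", or "incomplete".

        possession_by_frame lists, for each frame after the trajectory end
        point, the jersey color judged to be holding the ball (or None).
        Whoever holds the ball continuously for the threshold time decides
        the outcome; a ball that hit the ground first is incomplete.
        """
        if hit_ground_first:
            return "incomplete"
        needed = max(1, int(hold_threshold_s * fps))
        longest = {"offense": 0, "defense": 0}
        run_color, run_len = None, 0
        for color in possession_by_frame:
            if color is not None and color == run_color:
                run_len += 1
            else:
                run_color = color
                run_len = 1 if color is not None else 0
            if color == offense_color:
                longest["offense"] = max(longest["offense"], run_len)
            elif color is not None:
                longest["defense"] = max(longest["defense"], run_len)
        if longest["offense"] >= needed:
            return "complete"
        if longest["defense"] >= needed:
            return "intercepted"
        return "incomplete"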
Over time, the processing system 46 may collect and store various statistics regarding the quarterback's performance, such as total attempts, total completions, total interceptions, completion percentage, interception percentage, average release height, average release angle, average velocity, average spin rate, average maximum pass height, average pass distance, or other statistics that may be of interest. Further, in collecting statistics regarding a plurality of athletes, the processing system 46 is preferably configured to identify each athlete such that the appropriate data may be associated with the identified athlete. By way of example, the athlete may be identified based on the number on his jersey, as described above, or by other types of techniques, such as facial recognition or other known techniques for identifying individuals.
Note that the performance data collected by processing system 46 may be categorized in any manner as desired. As an example, for a quarterback, statistics may be calculated based on pass distance. For example, a total number of attempts, total number of completions, total number of intercepts, percentage of completions, percentage of intercepts, etc. may be calculated for a pass thrown within a certain distance range (e.g., 0 to 10 yards), while the same or similar statistics for another distance range (e.g., 10 to 20 yards) may be tracked separately. In addition, processing system 46 may implement algorithms for calculating various qualitative information about the pass.
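As an illustration of categorizing performance data by pass distance (the record format and bucket size below are assumptions), statistics could be accumulated per distance range as follows:

    from collections import defaultdict

    def bucket_pass_stats(passes, bucket_yards=10):
        """Aggregate pass results into distance buckets (0-10 yd, 10-20 yd, ...).

        passes is a list of dicts such as {"distance_yd": 14, "result":
        "complete"}; per-bucket attempt, completion, and interception counts
        and a completion percentage are accumulated.
        """
        stats = defaultdict(lambda: {"attempts": 0, "completions": 0, "interceptions": 0})
        for p in passes:
            lo = int(p["distance_yd"] // bucket_yards) * bucket_yards
            key = "%d-%d yd" % (lo, lo + bucket_yards)
            stats[key]["attempts"] += 1
            if p["result"] == "complete":
                stats[key]["completions"] += 1
            elif p["result"] == "intercepted":
                stats[key]["interceptions"] += 1
        for s in stats.values():
            s["completion_pct"] = 100.0 * s["completions"] / s["attempts"]
        return dict(stats)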
As an example, based on the location and speed of the target receiver at a point in time, such as when the quarterback releases the football or at a point while the football is in flight, the processing system 46 may identify an area of the playing space, hereinafter referred to as the "target area," where the football would ideally be thrown to complete a pass to the target receiver. The target area may also be based on various other factors, such as the position and speed of one or more defenders when the quarterback releases the football or at a point while the football is in flight. The processing system 46 may also compare the trajectory of the pass to the target area to determine whether the pass was directed to the target area (e.g., whether the trajectory intersects the target area) or the distance of the trajectory from the target area. A pass directed at or within a small distance of the target area may generally indicate a better-quality pass regardless of whether the pass is actually completed. The processing system 46 is configured to calculate various parameters based on the pass trajectory relative to the target area. As an example, the processing system 46 may determine an average distance to the target area for a plurality of passes, noting that the target area may be at a different location for each pass. Further, the size of the target area may be based on pass distance or other factors. For example, a shorter pass may have a smaller target area and a longer pass may have a larger target area. The target area data may also be classified based on pass distance or other parameters. As an example, an average distance to the target area for passes within one range of distances (e.g., 0 to 20 yards) may be determined, and an average distance to the target area for passes within another range of distances (e.g., 20 to 40 yards) may be determined separately.
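A minimal sketch of comparing a pass trajectory to the target area follows; modeling the target area as a sphere around an ideal arrival point is an illustrative simplification:

    import math

    def pass_accuracy(trajectory_points, target_center, target_radius_m):
        """Closest approach of a pass trajectory to the target area.

        trajectory_points are (x, y, z) ball positions and the target area
        is modeled as a sphere around the ideal arrival point; returns the
        closest approach distance and whether the trajectory entered the area.
        """
        closest = min(math.dist(p, target_center) for p in trajectory_points)
        return closest, closest <= target_radius_m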
Note that the drone 15 may be configured to monitor the data 49 from the sensors 44 and provide input to the flight control system 52 based on such data 49. As an example, the control logic 22 may be configured to identify the athlete being monitored (e.g., the quarterback in this example) and change the position of the drone 15 based on the movements of such athlete. As an example, the control logic 22 may control the drone 15 so that it remains at or near a particular location relative to the quarterback as he moves. Thus, if the quarterback rolls to the left, the drone 15 may automatically move to the left so that it remains directly behind the quarterback. If the quarterback moves down the field, the drone 15 may also move down the field to remain a certain distance behind the quarterback. In other embodiments, the drone 15 need not be within the playing area. As an example, the drone 15 may be positioned in the air above the sideline (away from the playing area) and moved back and forth along the sideline based on the movement of objects on the playing field. For example, as the athlete of interest advances down the field, the drone 15 may move along the sideline in a corresponding manner. The drone 15 may also be located at an end of the field, for example behind the goal posts or the end zone. The drone 15 may be moved along the boundaries of other fields or courts for other sports, such as soccer, hockey, volleyball, basketball, tennis, and the like.
The control logic 22 may also implement a collision avoidance algorithm to protect the drone 15 from damage and to prevent the drone 15 from interrupting the performance of the race. As an example, based on at least one sensor 44, such as an image captured by a camera or measurements taken by a proximity sensor, the control logic 22 may determine that a collision with an object (e.g., a football, goal post, person, or other drone 15) is imminent, and then provide input to the flight control system 52 in an attempt to move the drone 15 in a manner that avoids the collision. The control logic 22 may also be configured to take certain actions, such as changing the state of the drone 15, in order to protect the drone 15 from an impending collision. As an example, if a component of the drone 15 (e.g., the camera or other sensor 44) is extended, the control logic 22 may retract such component in order to reduce the likelihood of the component being damaged due to the collision. Various other actions may be taken to protect the drone 15 from damage.
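For illustration only, a simple collision-avoidance check might predict the closest approach of a tracked object over a short horizon and generate an evasive velocity command when the predicted miss distance is too small; the thresholds and names below are hypothetical:

    import numpy as np

    def avoidance_command(rel_pos, rel_vel, safe_radius_m=2.0, horizon_s=1.5,
                          evade_speed_mps=3.0):
        """Evasive velocity command if an object is on a near-term collision course.

        rel_pos and rel_vel are the object's position and velocity relative
        to the drone (meters, meters per second).  If the predicted miss
        distance within the time horizon falls below safe_radius_m, a
        velocity directed away from the closest-approach point is returned;
        otherwise None.
        """
        rel_pos = np.asarray(rel_pos, dtype=float)
        rel_vel = np.asarray(rel_vel, dtype=float)
        speed_sq = float(rel_vel @ rel_vel)
        t_closest = 0.0 if speed_sq < 1e-9 else float(
            np.clip(-(rel_pos @ rel_vel) / speed_sq, 0.0, horizon_s))
        miss = rel_pos + rel_vel * t_closest
        miss_dist = float(np.linalg.norm(miss))
        if miss_dist >= safe_radius_m:
            return None  # no evasive action needed
        escape = -miss if miss_dist > 1e-6 else np.array([0.0, 0.0, 1.0])
        return escape / np.linalg.norm(escape) * evade_speed_mps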
It is also noted that any player or other individual at a football game or practice (e.g., a referee, coach, trainer, cheerleader, mascot, etc.) may be monitored by the drone 15 according to techniques similar to those described above for the quarterback, and players or individuals in other sports may be similarly monitored. As an example, the drone 15 may be configured to monitor the performance of a kicker in a football game or practice. In such an example, the drone 15 may be positioned behind the kicker, on a sideline, behind the goal posts, or at another location as described above. Video images of the kicker may be captured while he kicks the football. Based on the images, the processing system 46 may determine various parameters indicative of kicking performance or quality. As an example, the processing system 46 may measure foot speed or determine the location on the football where the kicker strikes the ball during the kick. Additionally, as described above for passes, the drone 15 may capture images of the football as it flies after the kick, and the processing system 46 may analyze the captured images to determine the trajectory of the football. Based on the trajectory, the processing system 46 may determine various parameters, such as the speed of the football, the spin rate, the distance traveled, the launch angle at which the trajectory begins as the football rises after the kick (i.e., the angle relative to horizontal or the playing surface), or other parameters that may be of interest. Based on the trajectory and/or a comparison of the images of the football relative to the goal posts, the processing system 46 may also determine whether the field goal attempt was successful or whether the football passed between or beyond the goal posts. As an example, the processing system 46 may determine a horizontal distance of the trajectory from a point, such as the center of the scoring area (i.e., the midpoint between the goal posts). If the field goal attempt is unsuccessful, the processing system 46 may determine the horizontal distance of the trajectory from the nearest goal post (indicating by how much the field goal was missed).
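A non-limiting sketch of judging a field-goal attempt from the ball's position where its trajectory crosses the plane of the uprights is shown below; the coordinate conventions and default crossbar height are assumptions:

    def evaluate_field_goal(ball_x_at_posts, ball_z_at_posts,
                            left_post_x, right_post_x, crossbar_height_m=3.05):
        """Judge a field-goal attempt at the plane of the uprights.

        ball_x_at_posts is the ball's lateral position and ball_z_at_posts
        its height (meters) where the trajectory crosses the goal-post
        plane.  Returns (good, lateral distance), where the distance is
        measured from the center of the scoring area on a make, or from the
        nearest upright on a miss.
        """
        center_x = (left_post_x + right_post_x) / 2.0
        good = (left_post_x < ball_x_at_posts < right_post_x
                and ball_z_at_posts > crossbar_height_m)
        if good:
            return True, abs(ball_x_at_posts - center_x)
        miss_by = min(abs(ball_x_at_posts - left_post_x),
                      abs(ball_x_at_posts - right_post_x))
        return False, miss_by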
Note that the position of the drone 15 may be controlled based on situational sensing of game conditions. As an example, the control logic 22 may control the drone 15 to operate in a mode, referred to herein as a "quarterback mode," in which the drone 15 is operated to monitor the actions of the quarterback, as described above. When the offense attempts a field goal, the control logic 22 may switch operation of the drone to another mode, referred to herein as a "kicking mode," in which the drone 15 is operated to monitor the performance of the kicker. As an example, in the quarterback mode, the drone 15 may be located behind the quarterback, as described above, and in the kicking mode, the drone 15 may be positioned at another location, for example on a sideline, to better capture certain parameters, such as the launch angle. Further, when the ball changes possession, for example after a turnover, the operation of the drone 15 may change from a mode for monitoring the quarterback of one team to a mode for monitoring the quarterback (or other players) of the other team.
There are various techniques that may be used to determine when to switch the mode of operation of the drone 15. As an example, a user observing a game or practice may provide an input indicating a mode change when an event occurs for which a mode transition is to be performed. Such input may be received by an input interface of the processing system 46 or of another device capable of receiving input, such as a smartphone or a computer (e.g., a laptop computer), and such input may be wirelessly transmitted to the drone 15. In response, the control logic 22 may change the mode of operation of the drone 15 as indicated by the input. In another embodiment, the drone 15 may be configured to receive input directly from the user. As an example, one of the athletes, e.g., the monitored player, or another user may signal an input through a particular physical movement (e.g., waving his hand in a predetermined manner) or by providing a voice command to the drone. For body movements, the processing system 46 may be configured to analyze images captured by the drone 15 to determine when an athlete or other user signals an input.
In other embodiments, the decision as to when to change modes may be based on data from the sensors 44. As an example, the processing system 46 may analyze images captured by the cameras of the drone 15 or another device to determine which team is on offense and wirelessly send control information to the drone 15 to cause the control logic 22 to position the drone 15 according to the quarterback mode to monitor the quarterback of the offense. There are a variety of techniques available for determining which team is on offense. As an example, the drone 15 or another device may capture an image of a scoreboard, which may be operated to indicate which team is on offense (e.g., by displaying an image of a football next to the name or score of the offensive team). Based on the location of such an image or other indication on the scoreboard, the processing system 46 may determine which team is on offense.
The processing system 46 may make certain contextual decisions (such as which team is on offense) based on activities occurring on the athletic field. As an example, prior to a play, the teams often gather on their respective sides of the football, and the defense often lines up closer to the football than the offense. Thus, based on the positions of the teams relative to the football in the images captured by the drone 15, the processing system 46 may determine which team is on offense. In another example, certain officials are often located on a specific side of the ball depending on which team is on offense. As an example, the referee is typically positioned on the offensive side of the ball. The processing system 46 may be configured to identify the referee (using the user identification techniques described above, such as facial recognition or clothing recognition) and, based on the referee's position relative to the football, determine which team is on offense. Additionally, as noted above, the processing system 46 may be configured to identify certain players, and it is often the case that a particular player (e.g., a quarterback) is only on the field when his team is on offense or defense. The processing system 46 may be configured to determine which team is on offense based on which player or players are present on the field. As an example, if the processing system 46 determines that a particular team's quarterback is on the field of play, the processing system 46 may be configured to determine that such team is on offense. In this case, the processing system 46 may send commands to the drone 15 to cause the drone 15 to operate in a certain mode, such as the quarterback mode for monitoring the quarterback. When one team's quarterback leaves the field of play, the processing system 46 may determine that the other team is on offense when the system 46 detects that the other team's quarterback is present on the field of play. If a team's quarterback leaves the playing field and a place kicker of the same team enters the playing field, the processing system 46 may determine that a field goal attempt is being made. In this case, the processing system 46 may send a command to the drone 15 for the drone 15 to operate in the kicking mode, as sketched below. In basketball, the processing system 46 may identify the player in possession of the ball to determine which team is on offense. In other examples, the presence of other types of athletes on a field or athletic field may be sensed in order to detect other types of game conditions and operate the drone 15 in other types of modes.
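As an illustrative sketch only, the fragment below shows one way a mode decision could be derived from which recognized players are detected on the field; the roster structure, player identifiers, and mode names are assumptions made for the example, not the disclosed implementation.

```python
def select_mode(players_on_field, team_rosters):
    """Pick a drone operating mode from which special players are detected on the field.

    players_on_field: set of player identifiers recognized in the captured images.
    team_rosters: dict mapping team name -> {"quarterback": id, "kicker": id}.
    Returns (mode, team), or (None, None) if no decision can be made.
    """
    for team, roster in team_rosters.items():
        if roster["kicker"] in players_on_field:
            return "kicking", team          # field-goal unit appears to be on the field
        if roster["quarterback"] in players_on_field:
            return "quarterback", team      # that team's offense appears to be on the field
    return None, None

rosters = {"home": {"quarterback": "H12", "kicker": "H3"},
           "away": {"quarterback": "A7", "kicker": "A5"}}
print(select_mode({"H12", "A21", "A33"}, rosters))  # -> ('quarterback', 'home')
```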
In a basketball game or practice, the drone 15 may be positioned at any point above the court or along the boundary of the court, as described above for the football field. In one example, the drone 15 is positioned in the air a distance (e.g., about 20 feet) above the basket of the basketball goal. Using the sensors 44, such as cameras and/or the depth sensor 47, the drone 15 may capture images of the basketball and the players on the court. When a player attempts a shot, the drone 15 may capture images of the shooter and of the basketball as it travels toward the basket. The processing system 46 may be configured to determine the trajectory of the ball and various parameters indicative of the shooter's performance in making the basketball shot. Exemplary techniques for determining these parameters are described in U.S. Pat. No. 7,850,552 and U.S. patent application Ser. No. 12/127,744.
The drone 15 may be configured to monitor images captured by the sensors 44 and control movement of the drone based on such images. As an example, the control logic 22 may keep the drone 15 at a distance (e.g., a particular distance and direction) from the shooter, such as about 10 feet behind the shooter and about 10 feet in the air. As described above for the quarterback mode, the drone 15 may move with the movements of the shooter so as to maintain its position relative to the shooter.
Note that the control logic 22 within the drone 15 does not have to monitor the captured images in order to locate the drone 15 as described herein. As an example, for any of the embodiments described herein, the processing system 46 may be configured to monitor captured images and remotely control movement of the drone 15 based on such images.
When the drone 15 is behind the shooter or other athlete of interest, the images captured by the drone 15 have a perspective close to the shooter's viewpoint, as described above. That is, the images are very similar to what the monitored shooter or other athlete sees during the game. Such a feature may be beneficial for training or entertainment purposes. In this regard, the images may be recorded and subsequently presented to the shooter or other athlete of interest so that he may review the game situation and his actions from substantially the same point of view he had during the game. The images may also be broadcast or otherwise presented to fans, who may view the game from the perspective of the shooter or other athlete of interest. This type of viewpoint may also be provided for other sports. For example, positioning the drone 15 behind the quarterback in the quarterback mode described above allows the user to see the game situation from the approximate viewpoint of the quarterback. In some embodiments, the quarterback may wear a position sensor to provide data indicative of the location of the quarterback to the processing system 46, and the processing system 46 may be configured to convert coordinates of the image data to coordinates relative to a coordinate system associated with the quarterback such that the viewpoint of the image matches the viewpoint of the quarterback, as described in more detail below with respect to golf putting.
The drone 15 may be used to monitor a golfer. In this regard, a particular golfer may be identified in images captured by the drone 15 or by other types of sensor data 49 using any of the identification techniques described herein, and the drone 15 may be positioned at a distance from the identified golfer, as described above with respect to other sports and as shown in fig. 7A. As an example, the drone 15 may be positioned at a particular location relative to the golfer to allow the sensors 44 to capture the golfer's swing and the ball's flight. As described above for other sports, the processing system 46 may be configured to determine the trajectory of the ball based on images or other types of sensor data captured by the drone 15. Based on the movements of the golfer's body or club during his swing and/or the trajectory of the golf ball, the processing system 46 may be configured to determine various parameters indicative of the golfer's performance.
While the golfer is putting, the processing system 46 may be configured to analyze the images captured by the drone 15 in order to determine the topography, including slope, of the putting surface (e.g., the green). Using techniques similar to those described below and in U.S. patent application Ser. No. 14/874,555, the processing system 46 may be configured to determine the direction of gravity within the image in order to determine the inclination of the green surface relative to gravity. As an example, the processing system 46 may be configured to convert coordinates provided by the depth sensor 47 or other type of optical sensor from the sensor's coordinate system to a gravity-based coordinate system. As further described in U.S. patent application Ser. No. 14/874,555, the direction of gravity may be defined by first identifying a large plane within the viewing area of the drone from the depth sensor 47. The processing system 46 may assume that the direction of gravity is at a predetermined angle (e.g., 90 degrees) relative to the identified plane, as illustrated by the sketch below. In other embodiments, the drone 15 may have multiple accelerometers, and readings from the accelerometers may be used to determine the direction of gravity according to known techniques. The processing system 46 may also identify an object and determine that the direction of gravity is at a predetermined angle relative to the object. As an example, in golf, the processing system 46 may be configured to analyze the images captured by the sensors 44 to identify the hole on the green into which the ball is to be putted. The processing system 46 may identify the ring formed by the edge of the hole and determine that the direction of gravity is perpendicular to the plane defined by such ring. In other embodiments, other techniques for determining the direction of gravity are possible.
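A minimal sketch of the plane-based approach follows, assuming the depth sensor 47 yields a cloud of 3D points dominated by the putting surface. It fits a plane by a least-squares (SVD) method and treats the plane normal as the "up" direction; the function name and synthetic test data are hypothetical.

```python
import numpy as np

def gravity_direction(points):
    """Estimate the direction opposite gravity as the normal of the dominant plane.

    points: (N, 3) array of 3D points (in the sensor's coordinate system) sampled
            from a large planar region such as the green or playing surface.
    Returns a unit vector assumed to point "up" (anti-parallel to gravity).
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal toward the sensor, assumed to be above the surface at the origin.
    if np.dot(normal, -pts.mean(axis=0)) < 0:
        normal = -normal
    return normal / np.linalg.norm(normal)

# Example: synthetic points on a gently tilted surface about 2 m below the sensor.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.05 * xy[:, 1]                                   # gentle slope in y
pts = np.column_stack([xy, z]) + [0.0, 0.0, -2.0]
print(gravity_direction(pts))                         # roughly (0, -0.05, 1), normalized
```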
In other embodiments, different techniques may be used to determine the topography of the putting surface. As an example, data indicating the topography of the green (including hole placement) may be predefined and stored in a database or other form of memory accessible to the processing system 46.
In addition to determining the topography of the green and the direction of gravity, the processing system 46 may also be configured to identify the hole in the green and the golfer's ball within the images, as described above. Based on the topography of the putting surface, the position of the hole relative to the golfer's ball, and the direction of gravity, the processing system 46 may be configured to calculate or otherwise determine the best path for the golfer to putt the ball (i.e., for the ball to roll into the hole). The processing system 46 may then provide feedback information to the golfer indicating such a path. By way of example, the processing system 46 may use the output interface 50 or 136 to display an image of the green, including the locations of the hole and the golfer's ball. Within such an image, the processing system 46 may display a virtual curve extending from the ball to the hole along a path corresponding to the optimal path determined by the processing system 46. Thus, by viewing the image, and particularly the virtual curve, the golfer is able to see the best path for the putt.
Note that there are various ways in which feedback may be provided to the golfer. As an example, the image of the green described above may be displayed on a golfer's smartphone or other handheld or mobile device carried by the golfer. As an example, fig. 7B shows an exemplary image 200 that may be displayed to a user on the output interface 50 of a mobile device, such as a smartphone. The image 200 shows a green 205 having a hole 207, where a marker 211 is located in the hole 207 to mark the placement of the hole 207. The image 200 also shows a golf ball 215 resting on a green 205 and a virtual curve 218 representing the optimal path determined by the processing system 46 for the user's putt. Note that the image 200 may be captured by the sensor 44 of the drone 15 or otherwise, such as by a camera that may be mounted in a fixed location near the green 205.
In one embodiment, the virtual curve 218 is displayed in an augmented reality environment. As an example, as shown in fig. 7A, a golfer may wear an augmented reality Head Mounted Display (HMD) 216, e.g., augmented reality glasses 216, which allow light to pass through the lenses of the augmented reality HMD 216 so that the golfer may see the physical surfaces of the green 205 and other objects such as the hole 207 and marker 211. The augmented reality HMD 216 may then generate an image of the virtual curve 218 corresponding to the best path such that the virtual curve 218 generated by the interface 50 appears to be superimposed on the physical surface of the green 205 as seen by the golfer. In other embodiments, other techniques for providing feedback to the golfer regarding the best path are possible.
Further, the processing system 46 may display a plurality of paths from which the user may select. In this regard, the path of a successful putt depends not only on the topography of the green but also on the velocity of the golf ball during the putt. A ball struck with greater force has greater momentum, which can alter the path the ball must take to reach the hole as it traverses inclined surfaces. Thus, for any given putt, there are typically multiple paths that will lead to a successful result, depending on the speed. The processing system 46 may display a plurality of virtual curves representing these paths and/or provide feedback indicating the desired speed for a particular path, as sketched below. As an example, one virtual curve 222 may be color-coded with one color for a firm putt, while another virtual curve 218 may be color-coded with a different color for a softer putt, as shown in the image 220 of FIG. 7C.
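For illustration, the sketch below rolls a ball across a uniformly sloped green under a small-angle gravity and rolling-resistance model and reports whether a candidate putt reaches the hole. This toy model, its friction constant, and the candidate aim/speed values are assumptions for the example and are not the disclosed path-computation method.

```python
import math

def simulate_putt(start, aim_deg, speed, slope, hole, hole_radius=0.054,
                  friction=0.06, g=9.81, dt=0.01, max_steps=2000):
    """Roll a ball across a uniformly sloped green and report the path taken.

    start, hole: (x, y) positions in meters on the green.
    aim_deg: initial direction of the putt, in degrees from the +x axis.
    speed: initial ball speed in m/s.
    slope: (sx, sy) rise per unit run of the surface; gravity pulls the ball downhill,
           i.e., along -slope (small-angle approximation).
    friction: rolling-resistance deceleration as a fraction of g.
    Returns (path, holed) where path is the list of (x, y) points visited.
    """
    x, y = start
    vx = speed * math.cos(math.radians(aim_deg))
    vy = speed * math.sin(math.radians(aim_deg))
    path, holed = [(x, y)], False
    for _ in range(max_steps):
        v = math.hypot(vx, vy)
        if v < 1e-3:
            break
        # Downhill acceleration plus rolling resistance opposing the motion.
        ax = -g * slope[0] - friction * g * vx / v
        ay = -g * slope[1] - friction * g * vy / v
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
        if math.hypot(x - hole[0], y - hole[1]) < hole_radius:
            holed = True
            break
    return path, holed

# Compare a firm putt aimed nearly straight with a softer putt aimed further uphill.
for aim, speed in [(8.0, 2.6), (18.0, 2.0)]:
    path, holed = simulate_putt((0.0, 0.0), aim, speed, slope=(0.0, 0.02), hole=(3.0, 0.0))
    print(aim, speed, holed, [round(c, 2) for c in path[-1]])
```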
As described above, images captured by the sensors 44 of the drone 15 or by sensors 44 residing elsewhere may be displayed to the user through the augmented reality HMD 216. In this case, it may be desirable to change the viewpoint of the images so that it is relative to the location of the HMD 216 rather than the location of the sensor 44. To perform this conversion, the processing system 46 preferably knows the approximate location of the sensor 44 and the approximate location of the HMD 216. Using these positions, the processing system 46 may be configured to adjust the images captured by the sensor 44 so that they appear as if they had been captured by the HMD 216. In adjusting the images, the processing system 46 may be configured to change the orientation of the images to account for differences between the perspective of the sensor 44 and the perspective of the user through the HMD 216. As an example, the coordinates of the images may be converted to coordinates relative to a coordinate system used by the HMD 216, as sketched below, such that the displayed images have a perspective suitable for viewing by a user wearing the HMD 216. In some embodiments, the sensor 44 may reside on the HMD 216 or otherwise be positioned such that no such conversion is required.
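A minimal sketch of such a coordinate conversion is shown below, assuming the pose (rotation and position) of both the sensor 44 and the HMD 216 is known in a common world frame. The matrix convention and the example poses are hypothetical.

```python
import numpy as np

def transform_points(points_sensor, sensor_pose, hmd_pose):
    """Re-express 3D points captured in the sensor's frame in the HMD's frame.

    points_sensor: (N, 3) points in the sensor's coordinate system.
    sensor_pose, hmd_pose: (R, t) pairs giving each device's rotation matrix and
        position in a common world frame (e.g., from GPS/orientation sensors).
    """
    R_s, t_s = sensor_pose
    R_h, t_h = hmd_pose
    pts = np.asarray(points_sensor, dtype=float)
    world = pts @ np.asarray(R_s).T + np.asarray(t_s)      # sensor frame -> world frame
    return (world - np.asarray(t_h)) @ np.asarray(R_h)      # world frame -> HMD frame

# Example: sensor 5 m in front of the HMD along world x, both unrotated.
eye = np.eye(3)
pts = np.array([[1.0, 0.0, 0.0]])
print(transform_points(pts, (eye, [5.0, 0.0, 0.0]), (eye, [0.0, 0.0, 0.0])))  # -> [[6. 0. 0.]]
```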
Note that there are various techniques that may be used by the processing system 46 to determine the locations of the sensor 44 and the HMD 216. As an example, the sensor 44 may be at a fixed location, such as mounted near the green, and the location of the sensor 44 may be stored in a memory of the processing system 46. If the sensor 44 is on the drone 15, as described above, the drone 15 may have a location sensor, such as a Global Positioning System (GPS) sensor, for determining the location of the drone 15, and data indicative of the location may be sent to the processing system 46. Further, the HMD 216 may be similarly equipped with a position sensor, and data indicative of the position of the HMD 216 may be sent to the processing system 46. Note that the processing system 46 may reside on the HMD 216 or on the user, such that wireless communication of the HMD's location information is unnecessary. In other embodiments, radio frequency (RF) devices may be used at known locations (e.g., fixed locations on a golf course) to communicate with the sensors 44, drones 15, and/or HMD 216 to determine their respective locations using triangulation or some other algorithm for determining the location of objects.
In soccer, the drone 15 may be positioned at a particular location relative to a particular player, such as the player in possession of the ball, the goalkeeper, a player taking a free kick or corner kick, or other players of interest, as described above with respect to other sports. As an example, the drone 15 may be positioned some distance behind the player (e.g., about 10 feet) at some height above the ground surface (e.g., about 10 feet). Alternatively, the drone 15 may be located to one side of the soccer pitch (e.g., the drone 15 may move up and down the sideline). As described above for other sports, the drone 15 may be moved based on captured images to hold the drone 15 at a particular position relative to the athlete of interest. In one embodiment, the position of the drone 15 is controlled based on the position of the soccer ball. As an example, the drone 15 may be controlled to hover over and move with the ball at a certain height (e.g., about 10 feet) such that the drone 15 continues to hover over the ball or at another predetermined location relative to the ball. As described above for other athletic activities, the processing system 46 may be configured to collect ball and player data based on the captured images to determine ball trajectories for various kicks and to determine various parameters indicative of player performance.
In tennis, the drone 15 may be positioned at a particular location relative to a particular player of interest, as described above for other sports. As an example, the drone 15 may be positioned some distance (e.g., about 10 feet) behind the athlete at a height (e.g., about 10 feet) above the ground surface. Alternatively, the drone 15 may be located to one side of the tennis court (e.g., the drone 15 may move up and down the sideline). As described above for other athletic activities, the drone 15 may be moved based on the captured images to maintain the drone 15 in a particular position relative to the player of interest, and the processing system 46 may be configured to collect ball and player data based on the captured images to determine the trajectory of the ball and determine various parameters indicative of player performance.
If desired, multiple drones 15 may be used to monitor game conditions. As an example, multiple drones 15 may be used to track different athletes simultaneously in accordance with the techniques described above. Further, additional drones 15 and/or sensors may be positioned at various locations to provide additional viewpoints. The processing system 46 may be configured to stitch together multiple images from multiple drones 15 or sensors at different locations to provide a larger composite image of the game conditions.
When multiple drones 15 are used, the drones 15 may implement a collision avoidance algorithm, as described above, in an effort to avoid each other. In this regard, one drone may have a sensor 44 (e.g., a proximity sensor) for detecting another drone 15 and control its flight path based on such sensor in a manner that avoids the other drone 15. In one embodiment, the drones 15 are configured to wirelessly communicate with each other to help avoid collisions. As an example, the drones 15 may communicate location information to each other. For example, a first drone 15 may send its position coordinates to a second drone 15, and the second drone 15 may use such coordinates to determine the position of the first drone 15 so that it can control its flight path to avoid the first drone 15, as sketched below. In another example, when it is determined that a collision between two drones 15 is imminent, one of the drones 15 may send information describing its intended flight path to the other drone 15, and the other drone 15 may use such information to select a flight path that avoids the collision. Thus, when the first drone 15 determines to take an avoidance maneuver to avoid the second drone 15, the second drone 15 may know the flight path of the first drone 15 that will result from the avoidance maneuver, helping the two drones avoid each other.
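As a hedged illustration of how exchanged position and velocity reports might be used, the sketch below predicts whether two drones on straight-line paths will violate a minimum separation within a short look-ahead window; the thresholds and function name are assumptions for the example.

```python
def predict_conflict(pos_a, vel_a, pos_b, vel_b, horizon=3.0, min_sep=2.0, dt=0.1):
    """Check whether two drones flying straight at their reported velocities will come
    within min_sep meters of each other during the look-ahead horizon.

    pos_*: (x, y, z) reported position; vel_*: (vx, vy, vz) reported velocity.
    Returns the first time of conflict in seconds, or None if no conflict is predicted.
    """
    steps = int(horizon / dt)
    for i in range(steps + 1):
        t = i * dt
        d2 = sum((pa + va * t - pb - vb * t) ** 2
                 for pa, va, pb, vb in zip(pos_a, vel_a, pos_b, vel_b))
        if d2 < min_sep ** 2:
            return t
    return None

# Two drones converging head-on along the sideline at the same altitude.
print(predict_conflict((0, 0, 5), (2, 0, 0), (10, 0, 5), (-2, 0, 0)))  # conflict shortly after 2 s
```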
As noted above, there are various techniques available for controlling the drone 15, and such control may be autonomous, semi-autonomous, or manual. Additionally, the decision as to which sensors 44 are activated or used at any time may be autonomous, semi-autonomous, or manual. Sensor calibration and positioning may also be autonomous, semi-autonomous, or manual. The processing system 46 may include one or more user interfaces for receiving user input that is converted into commands that are wirelessly transmitted to the drone 15. A single user may provide such input, or control may be spread among multiple users. As an example, control may be switched from one operator to another, and the operators may be at the same site or remote from each other. Input from the operators may be used to control the flight and/or other operational aspects of the drone 15, such as sensor attributes (e.g., the focal length of a camera).
When the monitoring of the event is completed, the drone 15 may be returned or moved to a predetermined location for storage or other purposes. The operator may direct the drone 15 to such a location, or the drone 15 may be configured to automatically fly or otherwise move to that location. As an example, the coordinates of the predetermined location may be preprogrammed into the control logic 22, with the control logic 22 configured to provide input to the flight control system 52 such that the drone 15 automatically flies to the predetermined location.
As an example, the drone 15 may be stored at a base. At the base, the drone 15 may be connected to a power source, such as a battery or power outlet, to recharge one or more power sources (e.g., batteries) on the drone 15. As described above, when an athletic activity of interest is performed, the activity may be sensed by the sensors 44 of the drone 15 or otherwise, and in response the drone 15 may automatically leave its base and fly to a location for monitoring the athletic activity. Once the sporting activity is completed or monitoring is no longer required, the drone 15 may automatically fly back to its base or to another location for storage.
As an example, assume that the drone 15 will be used to monitor a basketball player during practice. When the player arrives at a basketball court and begins to dribble or shoot on the basketball court, the sensors 44 of the drone 15 or other sensors (e.g., sensors mounted at fixed locations near the court) may detect the athletic activity (e.g., dribbling or shooting), may detect the presence of the player at or near the court, or may receive input from the player or another user indicating that monitoring is to begin. In response to any of these events, the drone 15 may automatically fly to the desired location to monitor the athletic activity and begin monitoring as described above.
In this regard, the drone 15 may find certain reference points to help monitor the athletic activity. As an example, the drone 15 may identify the athlete and then fly to a predetermined location relative to the athlete being monitored. In another example, the drone 15 may find a portion of the basketball court, such as the basket of a basketball goal or a court marking, and fly to a predetermined location relative to the basket, another portion of the basketball goal, or the court marking. Other techniques for orienting and positioning the drone 15 in or near the playing space are possible.
Once the player has stopped the athletic activity for at least a predefined amount of time, has departed from a certain proximity (e.g., has left the court), or has indicated that monitoring is to be stopped (e.g., by providing user input), the drone 15 may then automatically return to the base or other location for storage until the next monitoring event occurs, as sketched below. Similar techniques may be used to monitor other types of activities for basketball or other sports.
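The following toy supervisory loop sketches the start/stop behavior described above: launch when activity is sensed and return to base after a period of inactivity. The callables, timeout, and polling interval are placeholders, not the disclosed control logic; the loop runs until interrupted.

```python
import time

def monitoring_loop(detect_activity, fly_to_court, return_to_base,
                    idle_timeout=120.0, poll_s=1.0):
    """Toy supervisory loop: launch when activity is sensed, return to base after the
    court has been idle for idle_timeout seconds.

    detect_activity: callable returning True when a player or ball is sensed on the court.
    fly_to_court, return_to_base: callables that command the drone (stubs here).
    """
    monitoring = False
    last_active = None
    while True:
        if detect_activity():
            last_active = time.monotonic()
            if not monitoring:
                fly_to_court()          # activity detected: leave the base and start monitoring
                monitoring = True
        elif monitoring and last_active is not None:
            if time.monotonic() - last_active > idle_timeout:
                return_to_base()        # court idle long enough: go back to storage
                monitoring = False
        time.sleep(poll_s)
```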
The information collected by the drone 15 (including the images or other sensor data captured by the drone 15) may be provided to users as part of a subscription service. As an example, information may be wirelessly transmitted from the drone 15 to the processing system 46, with the processing system 46 storing the information in memory, such as in a database accessible to multiple subscribers. Access to the information may be controlled by a server in communication with a network, such as the Internet. As part of the service, a subscriber may access the server over the network and download information from the server. As an example, a video stream of images captured by the drone 15 may be streamed from the server to one or more users. Such video streams may be transmitted in real time as the sporting event occurs, or the video data may be stored on the server for later (e.g., on-demand) access by subscribers. Other types of information collected by the drone 15 may similarly be transmitted in real time or stored for later access by subscribers. The collected information may be curated for social media use, if desired.
Additionally, the information collected by the drone 15 may be used as input for video games, including virtual reality video games and augmented reality video games. For example, the motion of athletes and other objects (e.g., football players and the football in a football game) may be captured and recorded by the drone 15. Such information may then be used to reconstruct an animation of such motion as part of a video game or other content viewed by a user. Using a head-mounted display (e.g., virtual reality or augmented reality glasses) or other type of display, images of athletes and other objects in a sporting event may be displayed to a user to generate a virtual reality or augmented reality environment in which the user may participate. As an example, for a football game, the user may go to a football field and use a head-mounted display that allows the user to see the actual football field while projecting the recorded images of the players into the user's eyes, so that the user sees the images of the players as if the sporting event were occurring on the football field on which he is located. Alternatively, images of a virtual football field may be projected so that the user sees the images of the players as if they were playing on the virtual football field. The system may be interactive such that when the user takes action in the video game, the game situation is affected. For example, if the user moves within the virtual or augmented reality environment near the ball carrier, the image of the ball carrier may be updated to reflect that he has been tackled. Similar techniques may be used to provide video games for other types of sports.
As described above, the sensors 44 may include a camera for capturing images of a scene, such as video image frames that may be streamed to one or more users. In some embodiments, the drone 15 may have at least one multi-lens camera for capturing panoramic images with a wide view (e.g., greater than 180°). As used herein, a "multi-lens" camera refers to a camera having multiple lenses for receiving multiple images and stitching or otherwise combining the images from the multiple lenses together to form a composite image having a larger viewing angle than the image received by any one of the lenses. Such a multi-lens camera may be implemented using multiple cameras by communicatively connecting the cameras to a processing system that stitches the images from the cameras to form a composite image. In at least one embodiment, the drone 15 has a multi-lens camera capable of capturing a 360° panoramic view, sometimes referred to as a "360° camera," although other types of cameras and images are possible. As an example, 360° cameras have been developed that provide a 360° panoramic view without stitching, and the drone 15 may use such a camera if desired. Cameras with a wide horizontal viewing angle (e.g., about 180° or greater) will be referred to herein as "wide-angle" cameras, regardless of whether stitching is used. Such wide-angle cameras may be used to capture images of objects on opposite sides of a user (e.g., objects in front of and behind the user) when worn by the user.
In some embodiments, the drone 15 also has at least one depth sensor 47 configured to capture panoramic depth images of the same or similar view relative to the wide angle camera described above. The panoramic depth image may have a wide viewing angle, for example greater than 180 °. In one embodiment, a two-dimensional (2D) video camera provides a panoramic image having a view angle of up to 360 °, while the depth sensor 47 provides a corresponding depth image having the same or similar view angle and usable to determine the depth of objects appearing in the image from the 2D wide-angle camera. As described above for panoramic video images, images from multiple lenses of the depth sensor 47 may be stitched together to form a composite depth image, but for some depth sensors 47 stitching may not be required to achieve a wide angle view.
In some embodiments, athletes at sporting events monitored by the drone 15 wear at least one wide-angle camera that provides panoramic video images from the viewpoint of the user wearing such camera. The image data from the camera may be wirelessly transmitted to the drone 15 and/or the processing system 46. As with the image data from the camera on the drone 15, the image data from the user's wide angle camera may be stored and displayed by the processing system 46 for viewing by the subscriber or other user. In other embodiments, other types of cameras may be worn by the user as desired. Note that the athlete may wear such a camera to provide video images from the athlete's perspective without using the drone 15.
The wide-angle camera may be attached to a user, such as a player at a sporting event, using various techniques. In one embodiment, the wide-angle camera has multiple cameras, referred to as "component cameras," each having a smaller angle of view than the panoramic image provided by the wide-angle camera. Each component camera captures an image of a sector within the camera's view and sends the image to a processing system that stitches the images together to form a wide-angle panoramic image, e.g., a panoramic image having a 360° viewing angle, as sketched below. As an example, there may be three component cameras, each capturing an image having a viewing angle of about 120°, to form a combined image having a viewing angle of 360°. In other embodiments, other numbers of component cameras with different viewing angles are possible.
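For illustration, the sketch below uses OpenCV's generic image stitcher to combine overlapping sector frames into a panorama. A production multi-camera rig would typically rely on a fixed per-rig calibration (and possibly fisheye lens models) rather than a generic stitcher; the file names in the usage comment are hypothetical.

```python
import cv2

def stitch_sectors(images):
    """Combine overlapping sector images from the component cameras into one panorama.

    images: list of BGR frames (numpy arrays), one per component camera, captured at the
    same instant with some overlap between neighboring sectors.
    Returns the stitched panorama, or None if stitching fails.
    """
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    return panorama if status == cv2.Stitcher_OK else None

# Example usage (assumes three sector frames were grabbed from the component cameras):
# frames = [cv2.imread(p) for p in ("sector_0.png", "sector_1.png", "sector_2.png")]
# pano = stitch_sectors(frames)
# if pano is not None:
#     cv2.imwrite("panorama.png", pano)
```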
In one embodiment, each component camera is mounted on a base that can be attached to a user. As an example, the base may be a band (e.g., a loop) that surrounds a body part of the user to secure the base, and thus the component cameras, to the user. Such a band may be rigid or elastic. As an example, the band may be worn around the torso (e.g., chest, waist, or hips), arms, legs, or head of the user. For football or baseball, the band may fit around the helmet of the football or baseball player. If desired, the component cameras and/or band may be integrated with (e.g., embedded in) a helmet or other headgear. Regardless of the technique used to secure the component cameras to the user, the component cameras may form a ring of cameras capable of capturing images from opposite sides of the user, such as images from the front and back of the user. For water polo, a ring of cameras may be placed on the user's swimming cap. In other sports, a ring of cameras may be mounted on other types of headgear, such as a cap (e.g., for baseball), a visor, a helmet, a headband (e.g., for football or basketball), and so on. In other embodiments, a set of component cameras may be positioned around other body parts as described above.
If desired, multiple rings of component cameras may be used. As an example, one ring of component cameras may be positioned around the head of the user, while another ring of component cameras may be positioned around the torso of the user. The panoramic image from one ring may be combined (e.g., stitched) with the panoramic image from the other ring to form a composite panoramic image. Alternatively, one ring of cameras may capture 2D images, and a ring of depth sensors may capture depth information for the captured images.
As an example, fig. 8 shows a football player with a multi-lens camera (e.g., multiple component cameras) embedded in the football player's helmet 301. The component cameras may be positioned inside the helmet 301 such that they are not visible in fig. 8. Each component camera may have a lens 306 that passes through the helmet 301, is positioned on an outer surface of the helmet 301, is positioned to receive light through an aperture in the helmet 301, or is otherwise positioned on the helmet 301 to receive light from the scene, including other athletes in the sporting event. In the embodiment of fig. 8, the lenses 306 are arranged in a ring around the helmet 301. A wireless communication interface may also be coupled to the helmet 301 or otherwise positioned on the athlete to receive images captured through the lenses 306 and wirelessly transmit the images to the processing system 46, which may stitch or otherwise combine the images together to form a composite image, such as a 360 degree view of the scene around the athlete.
In fig. 8, there is also a band 309 around the waist of the athlete. Component cameras may be embedded in the band 309 or otherwise coupled to the band 309. Each such component camera has a lens 306 that passes through the band 309, is positioned on an outer surface of the band 309, is positioned to receive light through an aperture in the band 309, or is otherwise positioned on the band 309 to receive light from the scene, including other athletes involved in the sporting event. As with the component cameras coupled to the helmet 301, the component cameras coupled to the band 309 may provide captured images to a wireless communication interface that sends the images to the processing system 46, which may stitch them together to form a composite image, such as a 360 degree view of the scene around the player. In addition, the processing system 46 may stitch or otherwise combine the composite image derived from the component cameras coupled to the helmet 301 with the composite image derived from the component cameras coupled to the band 309 to form a larger composite image, such as a 360 degree view of the scene around the player. Note that other techniques may be used to enable the athlete to wear a multi-lens camera. As an example, instead of using the band 309, the component cameras or lenses may be embedded or otherwise incorporated into the athlete's jersey and positioned around the athlete to provide a 360 degree view, or the band 309 itself may be embedded or otherwise incorporated into the athlete's jersey. In other embodiments, other techniques are possible.
Fig. 9 illustrates an embodiment in which a basketball player wears a band 321 around her head, similar to the band 309 of fig. 8. In this regard, component cameras may be embedded in the band 321 or otherwise coupled to the band 321. Each such component camera has a lens 306, the lens 306 being exposed to receive light from the scene, including other athletes involved in the sporting event. As with the component cameras coupled to the helmet 301, the component cameras coupled to the band 321 may provide captured images to a wireless communication interface that sends the images to the processing system 46, and the processing system 46 may stitch or otherwise combine the images together to form a composite image, such as a 360 degree view of the scene around the athlete.
Note that the component cameras need not be rigidly coupled to each other, nor does one ring of component cameras need to be rigidly coupled to another ring of component cameras. That is, the orientation and/or position of one component camera may change relative to the orientation and/or position of another component camera during monitoring. In one embodiment, a ring of component cameras for capturing 2D video images is used in conjunction with a ring of depth sensors 47 for capturing multiple depth images from the perspective of an athlete at the sporting event. The depth images may be stitched or otherwise combined together to form a composite depth image that may be used to determine the distance of an object from the athlete in any direction. Using the depth images from one or more depth sensors 47, the plane of the field surface may be identified to help determine the direction of gravity, as described in U.S. patent application Ser. No. 14/874,555.
In addition to capturing images from the perspective of a player in a sporting activity, the drone 15 may also be used to capture images of the player wearing the wide-angle camera. This may help in understanding the mindset and emotions of the athlete. As an example, a camera on the drone 15 or at another location may be used to capture images of the athlete's face, and such images may be analyzed to determine clues to the athlete's emotional state. For example, the direction in which the athlete is looking may be determined and recorded. In addition, facial expressions of the athlete may be analyzed to estimate the degree to which the athlete is panicked or concentrating. As an example, a smile may indicate that the athlete is relaxed, while rapid eye movement without a smile may indicate that the athlete is stressed or panicked.
Video images captured by a wide-angle camera may be used to provide a virtual reality or augmented reality environment to a user. From such video images, the user may view the sporting event from the perspective of the athlete wearing the wide-angle camera. As an example, when a wide-angle camera is worn by a quarterback in a football game, a user watching the video feed may be able to see the defensive linemen rushing the quarterback and the routes run by the receivers from the perspective of the quarterback. Using multiple rings of component cameras may help increase the vertical viewing angle of the captured images. Further, using a wide-angle camera, linemen can be seen approaching the quarterback from multiple directions, e.g., from in front of and from behind the quarterback (the quarterback's blind side). In other embodiments, multiple rings may be used for other purposes. For example, as described above, at least one camera on one ring may provide a two-dimensional video image, while at least one depth sensor 47 on another ring may be used to provide depth information for the pixels captured by the camera.
Note that on a moving object such as the drone 15, images from the camera may be used to determine depth without using the depth sensor 47. In this regard, if a stationary object (e.g., a line on a field or court, a basketball hoop, or a football goal post) is found in two images taken from two different viewpoints as the camera moves, then, assuming the speed of the camera is known, triangulation or other similar techniques can be used to determine the depth of the object in the two images, as sketched below. In one embodiment, the processing system 46 is configured to determine its velocity based on flight sensors (e.g., airspeed and heading sensors) or other means (e.g., changes in coordinates from a position sensor), to identify stationary objects in the plurality of images, and to use such information to calculate the depth of pixels of the images captured by the drone 15.
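A minimal sketch of this idea follows, using the standard rectified-stereo relationship depth = focal_length × baseline / disparity, where the baseline is the distance the camera moved between frames. It assumes the motion is roughly parallel to the image plane and that the feature is stationary; the numbers are hypothetical.

```python
def depth_from_motion(u1, u2, baseline_m, focal_px):
    """Estimate the depth of a stationary feature from two frames taken as the camera
    translates sideways by a known baseline.

    u1, u2: horizontal pixel coordinates of the same stationary feature (e.g., a court
        line intersection) in the first and second frames.
    baseline_m: distance the camera moved between frames, e.g., speed * frame interval.
    focal_px: camera focal length expressed in pixels.
    """
    disparity = abs(u1 - u2)
    if disparity == 0:
        return float("inf")      # no parallax: feature very distant (or the camera did not move)
    return focal_px * baseline_m / disparity

# Drone moving at 5 m/s with frames 1/30 s apart -> baseline of about 0.167 m.
print(depth_from_motion(640.0, 652.0, 5.0 / 30.0, 1000.0))   # roughly 13.9 m
```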
As described above, using cameras on multiple athletes at a sporting event may provide additional data to facilitate understanding of the play or to provide more content for entertainment, training, or other purposes. As an example, in a football game, cameras may be attached to the quarterback and to a receiver running a route. The camera on the receiver may be used to capture images of a defensive player attempting to guard or "cover" the receiver. Based on the position of the defensive player relative to each of the receivers, as determined from the images captured by the receivers' cameras and/or the quarterback's camera, the processing system 46 may select one or more receivers best suited for receiving a pass from the quarterback.
Note that the receiver selection may be based on other factors, such as attributes regarding the defenders collected by the processing system 46 during the game or otherwise determined by the processing system 46. By way of example, the processing system 46 may maintain, for each defender, data regarding various performance attributes of the defender, such as his top speed, jump height, and reach, or other parameters indicative of the defender's ability, such as a subjective or objective rating, hereinafter referred to as a "defender rating," indicating the effectiveness of the defender in covering receivers. Such data may be predefined (e.g., stored in the processing system 46 prior to the game) or may be determined by the processing system 46 by monitoring the defender during the game. Using such data, the processing system 46 may analyze the defender's abilities and his position relative to the receiver to determine a value indicative of the probability that a pass thrown to that receiver will be completed.
For example, based on the separation distance between the receiver and the defender and the defender's maximum vertical reach and/or jump height, the processing system 46 may determine whether the defender is able to block a pass thrown along a trajectory that reaches the receiver as he runs down the field, taking into account the fact that the receiver and defender may continue running during the pass until the ball reaches the receiver and/or defender. Based on the difference between the likely trajectory of the football and the vertical reach of the defender at the point where the football would pass the defender, the processing system 46 may calculate a value, hereinafter referred to as a "completion indicator," indicating the probability that the pass will be completed, as sketched below. In this regard, a greater distance between the trajectory and the vertical reach of the defender generally allows the quarterback a greater margin of error in attempting to complete the pass, thereby increasing the probability that the pass will succeed.
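Purely as an illustration of how such a value might be formed (not the disclosed formula), the sketch below turns the clearance between the predicted trajectory and the defender's reach, together with a defender rating, into a bounded score.

```python
def completion_indicator(ball_height_at_defender, defender_reach, defender_rating,
                         margin_scale=1.0):
    """Toy completion indicator: a larger clearance between the pass trajectory and the
    defender's maximum vertical reach raises the score; a better-rated defender lowers it.

    ball_height_at_defender: height (m) of the predicted trajectory where it passes the defender.
    defender_reach: defender's maximum vertical reach including jump (m).
    defender_rating: 0.0 (poor cover defender) .. 1.0 (elite cover defender).
    Returns a value in [0, 1].
    """
    clearance = ball_height_at_defender - defender_reach
    if clearance <= 0:
        return 0.0                               # defender can reach the ball
    base = min(1.0, clearance / margin_scale)    # saturate once the margin is comfortable
    return base * (1.0 - 0.5 * defender_rating)  # discount for a highly rated defender

print(completion_indicator(3.4, 3.0, 0.2))   # healthy margin against an average defender
print(completion_indicator(3.1, 3.0, 0.9))   # thin margin against an elite defender
```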
The completion indicator may also be based on other factors. As an example, the completion indicator may be lowered for defenders rated higher than other defenders, as indicated by their respective defender ratings. The completion indicator may also be adjusted based on the manner in which the defender previously performed in similar situations. As an example, during a game, the processing system 46 may track and store attribute data for a defender for each pass he defends. Such data may include the separation distance between the defender and the receiver when the pass is thrown and/or other data, such as whether the pass was completed, whether the defender guarded the inside or outside shoulder of the receiver, and the maximum speed difference between the defender and the receiver. The processing system 46 may analyze the attribute data to find situations similar to the current set of game attributes and analyze how the defender performed in those situations (e.g., the completion percentage in the identified situations). Based on this analysis, the processing system 46 may adjust the completion indicator accordingly. The completion indicator may be based on many other factors in other embodiments.
Note that the completion indicator may also be based on attributes associated with the receiver, such as the receiver's speed, jump height, vertical reach, and the like. As with the defenders, the processing system 46 may maintain attribute data about the receiver and search such attribute data to determine how the receiver performed under similar circumstances in the past and adjust the completion indicator accordingly. When analyzing the attribute data, similar situations involving the same defender and receiver may be weighted more heavily than other situations.
The completion indicator is essentially an assessment by the processing system 46 of the likelihood that a pass directed to a receiver will be completed, based on the relative positions of the defender and the receiver and/or other factors, such as the abilities of the defender and the receiver and their performance records in similar situations. The processing system 46 may use such completion indicators to select the best receiver for catching a pass. As an example, the receiver associated with the highest completion indicator may be selected as the best receiver for catching a pass. In other embodiments, other techniques for selecting the best receiver or set of receivers for receiving a pass from the quarterback are possible.
After selecting one or more best receivers, the processing system 46 may send data indicative of the selected receivers to a mobile device worn by the quarterback to assist the quarterback in selecting a receiver for the pass during the play. As an example, a display device integrated with the quarterback's helmet or with eyewear worn by the quarterback may display graphics identifying the one or more receivers selected by the processing system 46 in an augmented reality environment. In this regard, the display device may project an image onto the quarterback's eyes or onto the glasses worn by the quarterback such that graphical elements appear superimposed on the selected one or more receivers, thereby indicating which receivers have been determined to be the best receivers for the pass. In another example, the image may be projected such that the selected receiver, or a portion of the football field corresponding to (e.g., located at or near) the selected receiver, appears highlighted or colored in some manner different from other portions of the football field. In other embodiments, other techniques for identifying the selected receiver are possible. Using information from the processing system 46, the quarterback may select one receiver for the current play and throw the football to the selected receiver.
In addition to helping the quarterback select a receiver, the processing system 46 may help the quarterback select an appropriate trajectory. In this regard, the processing system 46 may detect the locations of the defenders and, based on these locations, identify at least one trajectory for successfully completing a pass while avoiding the defenders. Similar to the example described above, the identified one or more trajectories may be displayed to the quarterback in an augmented reality environment, where a virtual curve shows each identified trajectory. The quarterback may select one of the displayed trajectories and attempt to throw the football so that it follows the selected trajectory. As described above, more than one trajectory may be displayed, and the trajectories may be color-coded or otherwise marked based on the strength and/or release angle of the pass, both of which affect the trajectory of the pass.
In one embodiment, the one or more trajectories displayed to the quarterback are selected based on the quarterback's performance, as evaluated by the system when he threw previous passes. In this regard, by monitoring the performance of the quarterback over time, the processing system 46 may learn passing limits associated with the quarterback, such as the quarterback's arm strength (e.g., how fast or far the quarterback is able to throw a football). Based on such limits, the processing system 46 may eliminate at least some trajectories that are deemed infeasible given the quarterback's abilities, as sketched below. By way of example, trajectories that would require loft (trajectory height) and distance beyond the quarterback's capability may be omitted from the trajectories displayed to the quarterback. Thus, the processing system 46 may display to the quarterback only trajectories deemed achievable based on his previous throwing performance, making it more likely that the quarterback selects a trajectory that will result in a successful outcome (e.g., pass completion). In addition, the processing system 46 may select a trajectory that is considered optimal (e.g., having the highest probability of leading to a successful outcome) based on several factors, such as the possible trajectories calculated for the pass and the past performance of the quarterback. Such an optimal trajectory may be color-coded differently than the other displayed trajectories or otherwise highlighted by the processing system 46 so that the quarterback can readily discern which trajectory is considered optimal.
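The sketch below illustrates one way candidate pass trajectories could be filtered against a learned arm-strength limit, using a flat-ground, no-drag range check. The candidate list, the speed limit, and the physics simplifications are assumptions made for the example, not the disclosed method.

```python
import math

def feasible_trajectories(candidates, max_throw_speed, target_dist, g=9.81):
    """Filter candidate pass trajectories against a quarterback's learned arm-strength limit.

    candidates: list of (speed_mps, launch_angle_deg) pairs describing possible passes.
    max_throw_speed: fastest release speed observed for this quarterback (m/s).
    target_dist: horizontal distance to the intended receiver (m).
    Keeps only throws within the quarterback's ability that also carry at least to the
    receiver (flat-ground, no-drag approximation).
    """
    keep = []
    for speed, angle_deg in candidates:
        if speed > max_throw_speed:
            continue                              # beyond the quarterback's arm strength
        angle = math.radians(angle_deg)
        carry = speed * speed * math.sin(2 * angle) / g
        if carry >= target_dist:
            keep.append((speed, angle_deg))
    return keep

cands = [(28.0, 35.0), (22.0, 40.0), (19.0, 30.0)]
print(feasible_trajectories(cands, max_throw_speed=25.0, target_dist=40.0))  # -> [(22.0, 40.0)]
```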
Note that similar techniques may be used for other types of players. As an example, possible trajectories for a field goal attempt may be displayed to a kicker. In addition to selecting or otherwise defining possible trajectories based on previous kicking performance, as monitored by the system, the processing system 46 may also receive input indicative of meteorological conditions, such as wind speed, and compensate the trajectories for the wind. As an example, sensors mounted on or otherwise positioned near the goal posts may measure wind speed and wirelessly transmit data indicative of the measured wind speed to the processing system 46, which may be used by the processing system 46 to calculate at least one kick trajectory. Thus, the kicker can see at least one trajectory, as adjusted for the wind, for successfully kicking the football through the goal posts. This information can be used to help the kicker adjust his kick to compensate for wind conditions. Furthermore, the kicker may be informed that his longest possible trajectory may not reach the goal posts based on the distance to the goal posts, the current wind conditions, and the kicker's past kicking performance. Such information may be useful for influencing certain game-time decisions, such as whether to attempt a field goal on a given play.
In other embodiments, other techniques for assisting an athlete in making decisions during the game are possible. As an example, in basketball, similar techniques may be used to analyze the positions of defenders relative to the positions of teammates, and the abilities or past performance of the defenders and/or teammates in similar situations, to determine which teammate is best suited to receive a pass. Note that the position of a teammate relative to the basketball goal may be used as a factor in selecting the teammate. For example, if it is determined that multiple teammates are open to receive a pass, the teammate closest to the basketball goal may be selected to receive the pass. Alternatively, if there is an unguarded teammate at the three-point line, such teammate may be selected to receive the pass even if there is an unguarded teammate closer to the basketball goal. The selection of the teammate to receive the pass may be based on other factors, such as the past performance of the teammates. By way of example, by tracking players over time, the processing system 46 may determine each player's shooting percentage from different areas of the court. The teammate chosen to receive the pass may be selected based on such data. As an example, the processing system 46 may select the teammate who (1) is able to successfully receive a pass (as determined by his position relative to the defenders' positions) and (2) is associated with the highest shooting percentage from his current position, relative to other teammates who are also able to successfully receive a pass.
It should be noted that the monitoring techniques described herein may be applied to participants in electronic sports (esports) competitions, which are commonly video game competitions associated with sports. In esports activities, participants typically play against each other in a video game of a particular sport, such as baseball, football, basketball, wrestling, street fighting, etc., while spectators view the participants and the game situation. As with conventional sports, a large audience often attends or watches esports activities. At least one camera may be positioned to capture facial images of the participants of the esports event. As mentioned above, such a camera may be positioned on the drone 15, but in other embodiments other positions of the camera are possible. Such facial images may be analyzed to estimate the mental state of a participant. In one embodiment, the facial images captured by the camera may be displayed within a video feed of the esports event. As an example, the facial images may be displayed to a viewer at the event or at a location remote from the event.
In addition, the video images may be analyzed by the processing system 46 to determine screen scanning behavior, eye movements, and facial expressions in an effort to determine characteristics of a good esports player, including muscle memory, concentration, and reaction time. Note that the video images may be displayed to a player if desired. As an example, the face of one player (a "competitor") may be displayed to another player so that the other player may sense the mental state of his competitor during the game situation. For example, a video image of the competitor may be superimposed on or otherwise combined with the video game images presented to the player. In particular, the video image of the competitor may be displayed within a window on the same display screen used to display the video game to the player. Alternatively, the video image of the competitor may be displayed separately so that it may be viewed by the player during the game. As an example, the competitor may be displayed on a separate display screen or in an augmented reality environment in which the video game is displayed by a physical display unit (e.g., a desktop monitor or television) while a head-mounted display (e.g., glasses) worn by the player displays the video image of the competitor. In other embodiments, other techniques for displaying video images of competitors are possible. Further, in addition to or instead of displaying video images of competitors, data collected about the competitors may be displayed to the player in accordance with the same or similar techniques.
Fig. 11 depicts an exemplary system 300 for monitoring objects in a sports arena, such as a football field, a soccer field, a basketball court, and the like. For purposes of illustration, the system 300 will be described in detail in the context of monitoring a basketball player or a basketball as the player or basketball moves around a basketball court. However, the system 300 may be used for other sports, such as football, baseball, hockey, soccer, volleyball, tennis, golf, or any sport or event in which it is desirable to track a moving object.
As shown in fig. 11, the system 300 includes a sensing system 312 communicatively coupled to the processing system 46. The sensing system 312 is configured to sense an object, such as a basketball player or a basketball, moving within the playing space and to provide sensor data 349 indicative of the position of the object as it moves. If desired, the sensing system 312 may reside on the drone 15, as described above, but in other embodiments the sensing system 312 may be located elsewhere. As an example, the sensing system 312 may be installed at a fixed location near the playing space, or the sensing system 312 may be wearable such that it may be worn by an athlete participating in the sporting event. In other embodiments, other locations and configurations of the sensing system 312 are possible.
As described above, the processing system 46 is configured to receive the sensor data 349 and analyze the data 349 to determine performance parameters indicative of the athlete's performance. As an example, the sensing system 312 may sense the position of the athlete or a portion of the athlete's body, and the processing system 46 may analyze the sensor data to determine the speed, acceleration, or displacement of the athlete or a portion of the athlete's body (e.g., the hand or elbow during a basketball shot). Various performance parameters and techniques for monitoring objects in a sports venue are described in the following patents: U.S. Pat. No. 8,622,832, entitled "Trajectory Detection and Feedback System," issued January 7, 2014, which is incorporated herein by reference; U.S. Pat. No. 8,617,008, entitled "Training Devices for Trajectory-Based Sports," issued December 31, 2013, which is incorporated herein by reference; U.S. patent application Ser. No. 12/127,744, entitled "Stereoscopic Image Capture with Performance Outcome Prediction in Sporting Environments," filed May 27, 2008, which is incorporated herein by reference; and U.S. Pat. No. 8,948,457, entitled "True Space Tracking of Axisymmetric Object Flight Using Diameter Measurement," issued February 3, 2015, which is incorporated herein by reference.
In one example, the processing system 46 identifies an object in free flight, such as a basketball traveling toward the basket of a basketball goal during a basketball shot, and determines the location of the object in 3D space for a series of image frames. Each such determined position will be referred to herein as a "measured trajectory point." Based on the measured trajectory points, the processing system 46 determines a trajectory curve representing the path of movement of the object in order to calculate one or more performance parameters. As an example, based on the determined trajectory curve, the processing system 46 may estimate the angle at which the object enters the basket of the basketball goal by determining the angle of the curve at a location proximate to the basket (e.g., within the plane of the basket) relative to a horizontal plane defined by the basket. Note that the processing system 46 has a limited number of measured trajectory points, depending on various factors such as the frame rate of the camera 351 and the amount of time the object is within the field of view of the camera 351, and the processing system 46 may perform a curve-fitting algorithm or other type of algorithm in the trajectory analysis to smooth the trajectory curve. The algorithm for estimating the trajectory of the object can be greatly simplified if the direction of gravity is known. Indeed, if the direction of gravity is known, the processing load for estimating the trajectory curve can be reduced, and a more accurate trajectory curve can be determined from fewer measured trajectory points.
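As a concrete illustration of why a known gravity direction simplifies the fit, the following sketch (not taken from the patent) removes the known gravitational term and solves only for the launch position and velocity with a linear least-squares fit; the frame timestamps, measured points, and rim height are assumed example inputs.

```python
import numpy as np

G = 9.81  # m/s^2, assumed gravity magnitude (see the altitude discussion below)

def fit_trajectory(times, points):
    """times: (N,) frame timestamps in seconds; points: (N, 3) measured x, y, z
    positions with the z-axis opposite to gravity. Returns (p0, v0)."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(points, dtype=float).copy()
    p[:, 2] += 0.5 * G * t ** 2            # remove the known gravity term
    A = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    return coef[0], coef[1]                # launch position, launch velocity

def entry_angle_deg(p0, v0, rim_height):
    """Angle below horizontal at which the fitted curve crosses the rim plane."""
    # Solve p0z + v0z*t - 0.5*G*t^2 = rim_height; take the later (descending) root.
    roots = np.roots([-0.5 * G, v0[2], p0[2] - rim_height])
    t_hit = max(roots.real)
    vz = v0[2] - G * t_hit
    return float(np.degrees(np.arctan2(-vz, np.hypot(v0[0], v0[1]))))
```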
As shown in fig. 11, the processing system 46 may be communicatively coupled to an output device 317 (e.g., a display device or an audio device, such as a speaker), the output device 317 being controlled by the processing system 46 to provide feedback to the player indicating the player's performance during a basketball shot or other activity. As an example, the processing system 46 may determine performance parameters associated with a basketball shot, such as a release height, a release angle, a speed, an acceleration, a maximum shot height, the location of the shooter (e.g., the horizontal distance between the shooter and the basket when the shot is made), a hit/miss status, or the entry angle or speed of the basketball into the basket of the basketball goal. Such performance parameters may be communicated to the athlete via the output device 317. If desired, the processing system 46 may communicate with the output device 317 via a network (not shown in FIG. 11), such as the Internet or a LAN.
In an exemplary embodiment, the processing system 46 uses the hit/miss status or other information to determine various statistics that may be used to characterize the skill level of a shooter over multiple shots. As an example, the processing system 46 may count the total number of shots attempted by a particular player and also count the total number of shots made. The processing system 46 may then calculate a performance parameter based on the two counts. As an example, the processing system 46 may calculate the shooting percentage by dividing the total number of shots made by the total number of shots attempted.
Note that a player sometimes makes a shot without the ball passing directly into the basket. As an example, the ball may hit the basket and bounce upward, and then eventually fall through the basket, resulting in a made shot. Such a shot, in which the ball bounces upward off the basket but eventually passes through it, is referred to herein as a "non-guaranteed hit." For a non-guaranteed hit, the basketball may bounce off the basket several times before eventually passing through it. For other shots, sometimes referred to as "swish" shots, the basketball may pass through the basket without contacting it. For still other shots, the basketball may contact the basket as it passes downward through the basket without bouncing off the basket in an upward direction. A shot in which the basketball passes through the basket without bouncing off it in an upward direction is referred to herein as a "guaranteed hit." Note that guaranteed hits include swish shots, in which the basketball does not contact the basket, as well as shots in which the basketball contacts the basket while traveling downward through it without bouncing off the basket in an upward direction (i.e., away from the floor of the court).
It is believed that the number of guaranteed hits may be a better indicator of skill level than the total number of made shots. In this regard, players with a higher percentage of guaranteed hits tend to be more consistent and better shooters. Furthermore, during any given sampling period, a lower-skilled athlete may appear better than his or her actual skill level due to an excessive number of non-guaranteed hits, which have less predictable outcomes relative to guaranteed hits. Further, the total number of guaranteed hits, or parameters based on the total number of guaranteed hits, may constitute one or more of the performance parameters calculated by the processing system 46. As an example, the processing system 46 may calculate the percentage of guaranteed hits by dividing the total number of guaranteed hits counted during a sampling period by the total number of shots attempted by the same player during the sampling period. In other embodiments, other parameters based on the number of guaranteed hits counted by the processing system 46 are possible.
Note that performance parameters based on the number or percentage of guaranteed hits may be reported to the user as feedback. In one embodiment, performance parameters based on guaranteed hits counted by processing system 46 are used to determine the skill level of the athlete. In this regard, as part of the feedback, the processing system 46 may provide skill level assessment for a particular athlete. Such skill level assessment may be qualitative or quantitative in nature. As an example, the assessment may have various quality levels, such as "poor", "good", "very good", and "expert" levels, and the processing system 46 may use the total number of guaranteed hits during sampling as at least one factor in selecting which level is appropriate for the athlete. In this regard, a higher percentage of guaranteed hits generally results in a higher skill level being selected, according to a predefined algorithm for selecting a skill level. Skill level assessment may also be quantitative in nature, such as a score from 0 to 100 (or some other range). Generally, when the proportion of guaranteed hits obtained by an athlete is high, the athlete is given a higher score, noting that the score may also be based on other factors. In any event, the processing system 46 distinguishes between guaranteed hits and non-guaranteed hits and ultimately assigns a skill level assessment to the athlete based at least on the number of guaranteed hits counted for the athlete during the sampling period.
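The following sketch (not from the patent) shows one way the guaranteed-hit percentage and an overall skill assessment could be computed from per-shot records; the 0-100 scoring formula and the level thresholds are illustrative assumptions.

```python
def assess_shooter(shots):
    """shots: list of dicts such as {"made": True, "guaranteed": True},
    collected for one player over a sampling period."""
    attempts = len(shots)
    made = sum(1 for s in shots if s["made"])
    guaranteed = sum(1 for s in shots if s["made"] and s["guaranteed"])
    hit_pct = made / attempts if attempts else 0.0
    guaranteed_pct = guaranteed / attempts if attempts else 0.0
    # Quantitative score weighted toward guaranteed hits (weights are assumptions).
    score = round(100 * (0.7 * guaranteed_pct + 0.3 * hit_pct))
    # Qualitative level chosen from the guaranteed-hit percentage (assumed cutoffs).
    if guaranteed_pct >= 0.50:
        level = "expert"
    elif guaranteed_pct >= 0.35:
        level = "very good"
    elif guaranteed_pct >= 0.20:
        level = "good"
    else:
        level = "poor"
    return {"hit_pct": hit_pct, "guaranteed_pct": guaranteed_pct,
            "score": score, "level": level}
```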
If desired, the processing system 46 may store the data representing the performance parameters in the memory 125 or send the data to another device for storage or analysis. These data may be analyzed at a later time to provide feedback, as described herein, or for other purposes, such as providing information about the game. As an example, the position of the ball may be compared to the position of an object (e.g., a goal or a boundary line) associated with the playing field to determine whether the ball crosses or reaches the object. In other embodiments, various other uses of the data processed by the processing system 46 are possible.
In one example of the use of the system 300 in basketball, the processing system 46 is configured to identify the three-point line in the captured images. As is known in the art, the three-point line is generally an arc that extends from the baseline of the basketball court, around the top of the key, and back to the baseline. The processing system 46 also identifies a shooter who launches a basketball from near the three-point line. For example, by tracking the positions of the players relative to the basketball, the processing system 46 may determine when one of the players shoots the basketball toward the basket. The processing system 46 is configured to identify the feet of such a player and to determine whether both of his feet are on the far side of the three-point line (i.e., in the area of the basketball court outside of the region between the three-point line and the baseline) when the shot is taken. Based on the relative positions of the shooter's feet and the three-point line, the processing system 46 determines a performance parameter indicative of whether the shot is a three-point shot. If any portion of either foot is on or inside the three-point line, the processing system 46 determines that the shooter has not attempted a three-point shot. Otherwise, the processing system 46 determines that the shooter has attempted a three-point shot. In such embodiments, a referee or other user may utilize feedback indicative of the performance parameter to determine whether to award three points for the basketball shot.
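A minimal sketch (not the patented implementation) of such a three-point determination is shown below; the feet are approximated as point locations on the floor, the three-point line is approximated as a circular arc centered under the basket, and the arc radius value is only an example.

```python
import math

def is_three_point_attempt(foot_positions, basket_xy, arc_radius=7.24):
    """foot_positions: iterable of (x, y) floor locations of the shooter's feet
    at the moment of release; basket_xy: (x, y) point under the basket.
    7.24 m (roughly the NBA arc radius) is used only as an example; the straight
    corner segments of a real court are ignored in this simplification."""
    return all(math.hypot(x - basket_xy[0], y - basket_xy[1]) > arc_radius
               for x, y in foot_positions)
```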
In the case of football, the position of the football may be compared to a boundary line, such as the goal line, to determine whether any portion of the football reaches or crosses the goal line. That is, based on the images captured by the sensing system 312, the processing system 46 may automatically determine whether a touchdown has been scored. In such embodiments, a referee or other user may utilize feedback from the system 300 to determine whether to award points when the football reaches or crosses the goal line. In other embodiments, other decisions may be made based on a comparison of the object to markings on the playing surface 382.
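A very small sketch (an assumption, not the patent's method) of the goal-line comparison: the ball is reduced to its tracked center and radius, and the check reports whether any part of the ball reaches the goal-line plane.

```python
def reaches_goal_line(ball_center_y, ball_radius, goal_line_y):
    """All values are in the same field coordinate system, with y increasing
    toward the end zone being attacked (an assumed convention)."""
    return ball_center_y + ball_radius >= goal_line_y
```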
Note that the processing system 46 may be coupled to the sensing system 312 and/or the output device 317 via a physical connection (e.g., a wire) or wirelessly. In an exemplary embodiment, the sensing system 312 is mounted on a basketball goal, as will be described in greater detail below with reference to FIG. 3, and wirelessly transmits sensor data to the processing system 46. The processing system 46 may include a computer system, such as a desktop computer, a laptop computer, or a handheld computer, which may be integrated with the output device 317. By way of example, a software application running on a smartphone or laptop computer may implement the functionality of the processing system 46 described herein; more generally, the processing system 46 may be implemented in hardware or in any combination of hardware, software, and firmware. The smartphone may have a touch-sensitive display or a speaker implementing the output device 317 to provide visual or audio output to the athlete or other user. In other embodiments, the processing system 46 need not be integrated with the output device 317. As an example, the output device 317 may be implemented via the display screen or audio device of a smartphone, and the processing system 46 may wirelessly send feedback information to the smartphone, which presents the feedback information to the user through the output device 317. In another embodiment, the output device 317 may be a peripheral device connected to the processing system 46. In other embodiments, other configurations are possible.
Fig. 12 depicts a processing system 46 for an embodiment, wherein the processing system 46 processes information for tracking performance of one or more athletes in an athletic space and determining a direction of gravity within the athletic space, as will be described in more detail below. In the exemplary embodiment shown in fig. 12, sensor data 349 includes image data 349 from a camera (not shown in fig. 12) and depth map 350 from a depth sensor (not shown in fig. 12), but other types of sensor data 349 may be used in other embodiments.
If desired, the sensing system 312 (FIG. 11) may include additional sensors to assist the operations and algorithms of the processing system 46. As an example, an accelerometer or other type of motion sensor may be used to provide input regarding movement of the sensing system 312 or a component of the sensing system 312 (such as the camera 351). Additionally, one or more orientation sensors, such as a tilt sensor or a gyroscope, may be used to provide information about the orientation of the sensing system 312 or a component of the sensing system 312 (such as the camera 351). The control logic 122 may use known algorithms to determine the direction of gravity based on accelerometer readings or other types of readings from motion sensors, orientation sensors, or other types of sensors. As will be described in more detail below, the control logic 122 may determine the direction of gravity based on one or more accelerometers or other types of sensors and use this information to assist in its operations.
Various types of sensing systems 312 may be used to sense the object being monitored. In one exemplary embodiment, as shown in FIG. 11, the sensing system 312 includes a camera 351 and a depth sensor 47. The camera 351 is configured to capture video images of a playing field including images of a monitored object and to provide image data 49 defining frames of the captured images. In one embodiment, the image is two-dimensional, and the depth sensor 47 is used to sense depth or in other words the distance from the sensor 47 to an object in the image. In this regard, for each frame of image data 49, depth sensor 47 provides a depth map that indicates a respective depth for each pixel of the image frame. Note that the depth sensor 47 may be oriented such that the distance measured by the depth sensor 47 is in a direction substantially perpendicular to the plane of the 2D coordinate system used by the camera 351, although other orientations of the depth sensor 47 are possible in other embodiments.
The sensing system 312 may be implemented using various types of cameras 351 and depth sensors 47. In one exemplary embodiment, the sensing system 312 is sold by Microsoft corporation
Figure BDA0003200101020000421
A camera system. In such a system, the camera 351 and the depth sensor 47 are integrated into the same housing 355 (fig. 3). The camera 351 is configured to capture a video stream comprising frames of video data, where each frame is defined by a plurality of pixels. Each pixel is associated with two coordinates, an x-coordinate and a y-coordinate, representing a position in two-dimensional space. For each frame, each pixel is assigned a color value (which may include a red component (R) value, a blue component (B) value, and a green component (G) value) that indicates the color of light received by the camera from a location in the two-dimensional space corresponding to the pixel coordinates. Further, for each pixel, the depth sensor 47 measures the distance from the sensor 47 to the real world object at the corresponding location of the pixel in 2D space. Such a distance (which, as mentioned above, may be in a direction substantially perpendicular to the plane of the 2D coordinate system used by the camera 351) may be referred to as the "depth" of the corresponding pixel. Using image data from camera 351 and depth data from depth sensor 47, the location of an object captured by camera 351 may be determined in 3D space. That is, for a point on the object, the x-coordinate and y-coordinate from the image data provided by the camera 351 indicate its position along two axes (e.g., the x-axis and the y-axis), and the depth value for that point from the depth sensor may be referred to as the "z-coordinate," indicating its position along a third axis (e.g., the z-axis). It is noted that the coordinate system defined by the three axes is independent of gravity. That is, gravity may be in any direction relative to the axis of the coordinate system, depending on the orientation of the system 312. Therefore, unless calibration is performedOtherwise the direction of gravity with respect to the coordinate system is unknown.
In such a camera system, the depth sensor 47 includes a wave emitter 363 (e.g., an infrared laser projector or other type of emitter) and a sensor 364 for sensing reflections of the energy emitted by the emitter 363. The emitter 363 emits infrared radiation of various wavelengths into free space, although in other embodiments radiation at wavelengths outside the infrared spectrum (e.g., visible light) may be emitted, and the sensor 364 senses the reflected energy to capture a video stream comprising frames of video data. Each frame of depth data from the sensor 47 corresponds to a respective frame of image data from the camera 351. In addition, a pixel in a frame of depth data corresponds to (e.g., has the same x-coordinate and y-coordinate as) at least one corresponding pixel in the image data from the camera 351 and indicates the depth of that at least one corresponding pixel in the image data from the camera 351.
In this regard, for a frame of video data captured by the depth sensor 47, the depth sensor 47 converts the frame into a depth map 350 by assigning to each pixel a new color value (referred to herein as a "depth value") representing the depth of the pixel. Therefore, when displaying the depth map 350, objects displayed as the same color within the image should be at approximately the same distance from the depth sensor 47, noting that it is generally not necessary to actually display the depth map 350 during operation.
As described above, a given pixel of image data 349 from camera 351 is associated with an x-coordinate and a y-coordinate that indicate a pixel position in 2D space, and that pixel is associated with a depth value from a corresponding pixel in depth map 350 provided by depth sensor 47 that indicates the z-coordinate of the pixel. The combination of the x, y and z coordinates defines the location of the pixel in 3D space relative to the coordinate system of the camera 351. That is, the x, y, and z coordinates define the location of a point at which light measured for that pixel is reflected from the object toward the camera 351.
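To make the relationship between the pixel coordinates, the depth value, and the 3D camera-frame position concrete, the following back-projection sketch uses a standard pinhole model; the focal lengths and principal point are hypothetical intrinsics, not values given in the patent, and a real deployment would use the calibrated values for the camera 351 actually used.

```python
def pixel_to_camera_xyz(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """(u, v): pixel coordinates; depth_m: the pixel's depth in meters, measured
    along the camera's optical (z) axis. Returns the 3D point in the camera's
    own coordinate system, in which the direction of gravity is still unknown."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```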
The fact that the direction of gravity is unknown in the coordinate system of the camera 351 is not a disadvantage in many applications of the sensing system 312. However, when the sensing system 312 is used to estimate the trajectory of an object in free flight, as described herein, knowledge of the direction of gravity relative to the position of the object is required in order to facilitate the process of estimating the trajectory of the object.
In an exemplary embodiment, the control logic 122 is configured to automatically determine the direction of gravity relative to the locations indicated by the sensor data 49 in order to convert the coordinate system of the data to a gravity-based coordinate system. As used herein, a "gravity-based" coordinate system is one for which there is a known relationship between the direction of gravity and the axes of the coordinate system, such that the direction of gravity can be determined relative to any point indicated by the coordinate system. As an example, a gravity-based coordinate system may be defined such that the direction of gravity is parallel to one axis (e.g., the z-axis) of the coordinate system, although other relationships between the direction of gravity and the axes of the coordinate system are possible.
Exemplary techniques for converting the sensor data 49 (e.g., the image data 349 and the depth map 350) from a format relative to the coordinate system of the camera 351 to a format relative to a gravity-based coordinate system will be described in more detail below. In one embodiment, the sensing system 312 is positioned such that the camera 351 and the depth sensor 47 have a wide field of view of a playing space that includes the playing surface (e.g., the surface of a field or court) on which the athletic activity is performed. For example, in basketball, the sensing system 312 may be mounted such that the camera 351 and the depth sensor 47 are positioned above the basket of a basketball goal with a view of the basket and the floor of the basketball court. FIG. 13 depicts an exemplary embodiment in which the sensing system 312 is mounted above the basket 371 and backboard 373 of a basketball goal 377. By way of example, the basketball goal 377, including the backboard 373 and the basket 371, may be mounted on one or more poles 379 extending from a building ceiling, wall, or other structure, and the sensing system 312 (including the camera 351 and the depth sensor 47) may be mounted on at least one such pole 379 above the backboard 373. As shown in fig. 13, the basket 371 is coupled to the backboard 373 by a bracket 383, and a net 384 may be coupled to and hang from the basket 371.
Further, the sensing system 312 may be oriented such that the camera 351 and the depth sensor 47 have a downward view that includes the basket 371 and at least a portion of the playing surface 382 (which in the present example is the floor of a basketball court). When the sensing system 312 is so oriented, the camera 351 and depth sensor 47 capture images of the playing surface 382 and other objects, such as the basketball goal 377, within the playing field, as shown in block 502 of FIG. 17.
Fig. 14 shows an exemplary depth map image that may be captured by the depth sensor 47 in such an embodiment. In the depth map image shown in fig. 14, pixels are colored based on depth as determined by the depth sensor 47. In this regard, the darker the color of a pixel in the depth map 350, the smaller the depth value of the pixel. Therefore, pixels corresponding to objects closer to the depth sensor 47 appear darker relative to pixels corresponding to objects farther from the depth sensor 47. As an example, because the basket 371 and backboard 373 are closer to the depth sensor 47 than the playing surface 382, the pixels defining the images of the basket 371 and backboard 373 are colored darker than the pixels defining the image of the playing surface 382.
In one exemplary embodiment, the control logic 122 analyzes the depth map 350 to identify a playing surface (PS) plane within the image of the depth map 350, as shown in block 505 of fig. 17. The PS plane generally refers to a plane parallel to the playing surface 382 (e.g., the surface of a court or field) on which the athletic activity is performed. In this regard, athletic activities are typically performed in a wide open space having a relatively flat surface, such as a field or court. Therefore, a large number of pixels in the depth map should correspond to the playing surface 382 and, hence, lie in the same plane. For example, when the sensing system 312 is mounted high above the playing surface, a large portion of the image may correspond to the playing surface, and pixels corresponding to the playing surface may have color values within a relatively narrow range. Thus, the control logic 122 is configured to analyze the depth map 350 to identify a plane. That is, the control logic 122 is configured to identify at least one set of depth pixels that lie within the same plane. When the sensing system 312 is mounted high above the playing surface, a plane may be identified by finding a group of pixels having similar depths and, hence, similar color values. However, other techniques may be used in other embodiments. As an example, the surface geometry of objects within the view of the camera 351 may be analyzed based on the depth pixels in order to identify depth pixels that lie within the same plane. Using such a technique, pixels do not have to have similar depths in order to be identified as lying in the same plane.
As an example, in a volleyball game, one or more sensing systems 312 may be mounted on one or more sides of the volleyball court such that the sensing system 312 is located below the net. In such embodiments, the view of the floor of the volleyball court may be closer to a horizontal perspective than a vertical one, such that depth pixels corresponding to the floor of the volleyball court may have significantly different depth values as the floor extends away from the sensing system 312.
Some objects, such as portions of the goal 377, may have surfaces that are flat from the perspective of the depth sensor 47, but the size of the flat surfaces of the goal 377 is likely to be much smaller than the size of the playing surface 382 within the view of the depth sensor 47. For each set of depth pixels defining a plane, the control logic 122 may determine the total number of depth pixels within the plane and compare that number to a threshold. If the number is below the threshold, the control logic 122 may determine that the group of pixels does not correspond to the playing surface 382. That is, the size of the plane represented by the pixel group is too small to represent the playing surface. The pixel group having the greatest number of depth pixels in the same plane above the threshold may be identified by the control logic 122 as the pixel group corresponding to the playing surface 382, hereinafter referred to as the "floor plane (FP) pixel group."
Note that various sensors may be used to help identify the FP pixel group that defines the PS plane. By way of example, as described above, one or more accelerometers or other types of sensors may be used to determine an approximate direction of gravity, and such information may be used to filter the candidate planes identified by the control logic 122, eliminating planes that are not within a predetermined range of being perpendicular to the direction of gravity determined by the aforementioned sensors. As an example, only pixel groups defining a plane substantially perpendicular to the direction of gravity, as determined by one or more accelerometers or other sensors, qualify for selection as the FP pixel group. Once the FP pixel group is identified, it can be used to make a more accurate measurement of the direction of gravity according to the techniques described herein.
The FP pixel group may not define a perfect plane in some cases due to errors in the depth sensor 47's estimation of pixel depths or other factors (e.g., curvature of the playing surface, if any). The control logic 122 is configured to perform a mathematical smoothing operation on the FP pixel group to remove outliers that are far from the plane defined by the FP pixel group, as indicated at block 508 of fig. 17. In one exemplary embodiment, the mathematical smoothing operation is implemented using random sample consensus (RANSAC), but other types of smoothing operations may be used in other embodiments.
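A minimal RANSAC-style sketch of fitting the PS plane to the 3D positions of the floor-candidate depth pixels is shown below; the iteration count and inlier tolerance are illustrative assumptions, and the returned normal serves as the candidate gravity axis.

```python
import numpy as np

def fit_ps_plane(points, iterations=200, tol=0.03, seed=0):
    """points: (N, 3) array of candidate floor points in camera coordinates.
    Returns (unit_normal, point_on_plane) for the plane with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best = -1, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                        # degenerate (collinear) sample
            continue
        n = n / norm
        dist = np.abs((points - sample[0]) @ n)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best_inliers, best = inliers, (n, sample[0])
    return best
```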
In addition to the smoothing operation, the control logic 122 also executes an algorithm referred to herein as "floor differencing" in an attempt to remove depth pixels that lie outside the PS plane but closer to it than the outliers removed by the smoothing operation, as shown in block 511 of fig. 17. In this regard, after performing the smoothing operation, the control logic 122 analyzes the FP pixel group to estimate an initial position and orientation of the PS plane, referred to as the "initial PS plane." The control logic 122 then compares each depth pixel of the FP pixel group with the initial PS plane it has identified. As an example, the control logic 122 may determine the difference between (1) the depth indicated by the depth pixel and (2) the depth of the point on the initial PS plane that is closest to the location indicated by the depth pixel. If the difference is greater than a predefined threshold (TH), the control logic 122 removes the depth pixel from the FP pixel group. Thus, by performing floor differencing, depth pixels associated with locations that are more than a threshold distance from the initial PS plane are removed from the FP pixel group.
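The floor-differencing step can be expressed as a simple point-to-plane distance filter, sketched below; the 5 cm threshold is an illustrative assumption. Note that the opposite comparison (keeping only points farther than the threshold) yields the pixel group used later for locating the basketball goal.

```python
import numpy as np

def floor_difference(points, plane_normal, plane_point, th=0.05):
    """Keep only the points within th (same units as points) of the initial PS plane."""
    dist = np.abs((points - plane_point) @ plane_normal)
    return points[dist <= th]
```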
After performing the floor differencing, the control logic 122 again analyzes the FP pixel group to estimate the position and orientation of the PS plane indicated by the modified FP pixel group, thereby identifying the PS plane to be used for converting the sensor data 49 into a format relative to the gravity-based coordinate system. In this regard, the control logic 122 may determine that the direction of gravity is perpendicular to the identified PS plane, as indicated at block 514 of fig. 17.
Before sensor data 49 is converted, control logic 122 is configured to select an origin of the gravity-based coordinate system and define three axes: the x-axis, the y-axis and the z-axis. The axes are perpendicular to each other and each axis is defined as passing through the origin. In one embodiment, the x-axis and y-axis are defined parallel to the identified PS plane, and the z-axis is defined perpendicular to the PS plane and thus parallel to the direction of gravity. In other embodiments, other orientations of the axis relative to the direction of gravity and the PS plane are possible.
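One way to construct such axes from the PS-plane normal is sketched below; the seed vector used to start the in-plane x-axis is an arbitrary assumption, since a later step re-orients the x-axis using the basket and its mounting bracket.

```python
import numpy as np

def gravity_axes(plane_normal):
    """Return unit x, y, z axes with z parallel to the PS-plane normal (gravity)."""
    z = plane_normal / np.linalg.norm(plane_normal)
    seed = np.array([1.0, 0.0, 0.0])
    if abs(z @ seed) > 0.9:                 # avoid a seed nearly parallel to z
        seed = np.array([0.0, 1.0, 0.0])
    x = np.cross(seed, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return x, y, z
```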
To facilitate the calculation of performance parameters, the control logic 122 is configured to define a relationship between the gravity-based coordinate system and the playing environment. By way of example, to determine the angle at which a basketball enters the basket 371, the control logic 122 should know the position of the basketball relative to the basket 371 as the basketball travels along its trajectory. This may be accomplished by determining a relationship between at least one reference point (e.g., the origin) of the gravity-based coordinate system and at least one reference point in the playing environment. By doing so, the position of any object sensed by the sensing system 312, such as a player or a basketball, relative to other objects in the playing environment, such as the basket 371, may be automatically determined.
Note that any point in the playing environment may be used as a reference for the gravity-based coordinate system. By way of example, boundary lines or other court markings on the floor of the basketball court may be identified within the image data 349, and the identified markings may be used to reference the gravity-based coordinate system to the playing environment. However, the type or style of such markings may vary from court to court. The basketball basket 371, on the other hand, typically has a consistent size and shape, which facilitates identification of the basket 371 within the images provided by the sensing system 312.
The control logic 122 is configured to identify a reference object (e.g., the basketball basket 371) in the images provided by the sensing system 312 and to reference the gravity-based coordinate system based on the identified object, as shown at block 515 of fig. 17. In one exemplary embodiment, the control logic 122 is configured to locate the basketball basket 371 in the image and define the gravity-based coordinate system such that its origin is located at the center of the basket 371. It is noted that the plane of the basket 371 should be parallel to the PS plane identified by the control logic 122. Since the x-axis and the y-axis are defined to be parallel to the PS plane, the x-axis and the y-axis should lie in the plane of the basketball basket 371 when the origin is positioned at the center of the basket 371. Further, when the origin is so defined, the z-axis passes through the center of the basket 371 in a direction parallel to gravity.
To facilitate installation, the sensing system 312 may be installed at any height above the playing surface 382 and the basket 371. Since the distance of the basket 371 from the depth sensor 47 may therefore not be known in advance, the control logic 122 is configured to analyze the depth map 350 from the depth sensor 47 to estimate this distance. Before estimating the distance, the control logic 122 first locates the basketball basket 371 within the image. An exemplary technique for identifying the basket 371 will be described in more detail below.
In one exemplary embodiment, the control logic 122 is configured to identify a pixel group, referred to hereinafter as the "basketball goal (BG) pixel group," that does not include pixels corresponding to the playing surface 382, thereby removing a significant number of pixels from consideration. As an example, the control logic 122 may execute an algorithm similar to the floor differencing algorithm described above on all of the depth pixels of the depth map 350. However, rather than removing depth pixels that are more than a threshold (TH) distance from the PS plane, the control logic 122 instead removes depth pixels that are less than the threshold distance from the PS plane and keeps depth pixels that are more than the threshold distance from the PS plane.
Fig. 15 shows an exemplary depth map image after floor differencing has been performed in order to remove the depth pixels corresponding to the PS plane. As shown in fig. 15, the depth map image includes an image 401 of the basket 371, an image 402 of the net 384 coupled to the basket 371, an image 403 of the backboard 373 on which the basket 371 is mounted, and an image 404 of the bracket 383 that couples the basket 371 to the backboard 373. As can be seen from the view above the goal 377, the bracket 383 may appear substantially rectangular, as shown in fig. 15, although other shapes are possible.
The control logic 122 searches the depth map image for an image of a basket in order to identify the basket image 401. When the basket image 401 is found, the control logic 122 determines the size (e.g., diameter) of the basket image 401. Various techniques may be used to determine the size of the basket image 401. In one exemplary embodiment, the control logic 122 superimposes a scalable basket template 411 on the basket image 401, as shown in fig. 16 (note that the template 411 is shown in red in fig. 16). The diameter of the basket template 411 is adjusted so as to maximize the number of pixels of the basket image 401 covered by the basket template 411. Since the actual diameter of the basket 371 is known (about 18 inches for a standard-size basket), the distance of the depth sensor 47 from the basket 371 can be calculated based on the diameter of the template 411.
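Under a pinhole-camera assumption, the distance follows from the fitted template diameter, as sketched below; the focal length is a hypothetical intrinsic, and 0.4572 m is simply the 18-inch rim diameter expressed in meters. The second helper gives the meters-per-pixel scaling factor discussed in the next paragraph.

```python
def basket_distance_m(template_diameter_px, focal_length_px=525.0,
                      rim_diameter_m=0.4572):
    """Pinhole relation: distance = focal_length_px * real_size / image_size_px."""
    return focal_length_px * rim_diameter_m / template_diameter_px

def meters_per_pixel(template_diameter_px, rim_diameter_m=0.4572):
    """Scaling factor at the rim's depth, usable to convert image distances
    near the rim into real-world distances."""
    return rim_diameter_m / template_diameter_px
```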
The control logic 122 may use the basket diameter in the basket image 401 to calibrate the trajectory calculations to account for the position of the sensing system 312. In this regard, for accurate trajectory calculations, the control logic 122 should know the scaling factor that relates distances in the image to physical distances in the real world. As one example, a distance of one-half inch in a captured image may represent a distance of several feet (or some other distance) in the real world. The scaling factor between real-world dimensions and dimensions within the captured image is typically based on several factors, including the position of the sensing system 312 relative to the objects appearing in the image and the zoom or magnification of the camera used to capture the image. In one exemplary embodiment, the control logic 122 determines how distances in the captured image relate or scale to real-world distances based on the basket image 401. In this regard, as noted above, the diameter of a real-world basket is typically the same (i.e., about 18 inches) on every goal. Thus, based on the diameter of the basket in the image 401, the control logic 122 may determine an appropriate scaling factor for converting distances in the captured image to real-world distances. In other embodiments, other types of objects having known dimensions may be used in place of the basket. For example, the dimensions of certain court markings (e.g., the length of the free-throw line) may be known, and images of such markings may be used to determine an appropriate scaling factor. Also, the distance from the basket 371 to the playing surface 382 is generally known and can be used as a reference for determining the scaling factor. In other embodiments, other types of objects and dimensions may be used to determine an appropriate scaling factor.
Additionally, the control logic 122 is further configured to orient the gravity-based coordinate system based on the images of the sensor data 49, as indicated at block 517 of fig. 17. To accomplish this in one embodiment, the control logic 122 is configured to identify the image of the bracket 383 in the BG pixel group. As shown in fig. 15, due to the floor differencing described above, the area around the basket image 401 should be substantially free of depth pixels except where the bracket image 404 is located. Thus, even for different shapes and configurations of the bracket 383, the process of finding the bracket image 404 should be relatively simple and reliable. After identifying the bracket image 404, the control logic 122 is configured to orient the axes of the gravity-based coordinate system based on the position of the bracket image 404 relative to the basket image 401. As an example, the control logic 122 may define one of the axes (e.g., the x-axis) such that it passes through the center of the basket 371 and the center of the bracket 383.
After orienting the gravity-based coordinate system, the control logic 122 is configured to convert the image data 349 and the depth map 350 from a format relative to the coordinate system of the camera 351 to a format relative to the gravity-based coordinate system, as shown in block 522 of fig. 17. Thus, the pixel coordinates of the image data are expressed relative to the origin of the gravity-based coordinate system rather than the origin of the camera's coordinate system. It should be noted that various changes and modifications to fig. 17 will become apparent to those skilled in the art upon reading this disclosure. Further, any of the steps of fig. 17 may be omitted and/or the order of any of the steps may be rearranged as desired.
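The conversion amounts to a rigid transform, sketched below under the assumption that the gravity-based axes and the basket center have already been expressed in camera coordinates by the preceding steps.

```python
import numpy as np

def to_gravity_frame(point_cam, x_axis, y_axis, z_axis, basket_center_cam):
    """point_cam and basket_center_cam: (3,) vectors in camera coordinates;
    x_axis, y_axis, z_axis: unit axes of the gravity-based system, also
    expressed in camera coordinates. Returns the point in the gravity-based
    frame whose origin is the center of the basket."""
    R = np.vstack([x_axis, y_axis, z_axis])      # rotation: camera -> gravity frame
    return R @ (np.asarray(point_cam) - np.asarray(basket_center_cam))
```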
Since the distance between the sensing system 312 and the origin of the gravity-based system is known, the position of any object in the image data 349 can be calculated relative to the basket 371 or other objects in the playing environment. As one example, the trajectory of a basketball may be compared to the position of the basket 371 in order to determine the angle at which the basketball enters the basket. In another example, by knowing the position of the basket 371 relative to the sensing system 312, the location of a particular court marking within the image, such as the free-throw line, can be determined, because the markings of a standard basketball court should be at predetermined distances and directions from the basket 371. Thus, the position of an object relative to the free-throw line can be determined. For example, the control logic 122 may determine that a player is shooting a free throw based on the player's position, determined from the image data, relative to the free-throw line when the player launches a basketball toward the basket 371.
It should be noted that the exemplary processes described above for calibrating the gravity-based coordinate system and converting the sensor data 49 into a format relative to the gravity-based coordinate system may be performed automatically and efficiently, without any human intervention and without a significant processing burden relative to other techniques that may exist for calibrating a coordinate system. Thus, the process may be repeated as often as desired during operation. For example, if the sensing system 312 is struck by a basketball, causing the camera 351 and sensor 47 to move, the gravity-based coordinate system may be automatically and quickly recalibrated according to the techniques described herein.
Additionally, in several of the examples described above, the direction of gravity is assumed to be perpendicular to the PS plane identified by the control logic 122. However, other directions of gravity relative to the identified PS plane are possible. For example, certain playing surfaces may be sloped for various reasons, such as to facilitate drainage from the surface. A football field, for instance, typically has a "crown" in the middle of the field, and the surface slopes downward away from the crown toward the sidelines. Thus, the portion of the field near a sideline may be inclined such that the direction of gravity is oblique relative to the surface in that region of the field. In some cases, the slope of the surface may increase closer to the sideline.
In an exemplary embodiment, the control logic 122 is configured to take such sloped surfaces into account when determining the direction of gravity. Note that there are various techniques that can be used to account for the slope of a surface. By way of example, the processing system 46 may store data, referred to herein as "surface data 252" (FIG. 12), representing the slope of the playing surface at one or more points. For example, for each of a plurality of locations on the playing surface, the surface data 252 may have a value that indicates the degree to which the surface is sloped at that location, such as the angle of the direction of gravity relative to the playing surface there. Such data may be predefined and stored in the memory 125 prior to normal operation of the processing system 46. As an example, during a calibration process, at least one image of the playing surface may be captured with the camera 351 and the depth sensor 47, and the image may be analyzed by the control logic 122 to determine the slope of the playing surface at various locations. In this regard, as described above, depth pixels from the depth map 350 of the depth sensor 47 may be correlated with pixels of the image data 349 from the camera 351, and the depths indicated by the depth pixels may be used to calculate the slope of the playing surface at different locations in the image captured by the camera 351. That is, the control logic 122 effectively maps the playing surface during calibration so that the slope (relative to gravity) of the playing surface at different locations is indicated by the data 252. In this calibration process, the direction of gravity may be determined based on manual input (e.g., a user may provide input indicating the direction of gravity within the image) or by finding an object of known orientation within the image, as described above for the basketball basket. In other embodiments, as described above, the sensing system 312 may have a sensor, such as an accelerometer or other type of sensor, that may be used to sense the direction of gravity.
In one exemplary embodiment, the sensing system 312 is coupled to an aerial vehicle 255, as shown in FIG. 18, in order to perform the calibration process described above, in which the playing surface is mapped to determine its surface topology. As described above, the aerial vehicle 255 may be the drone 15 or another type of aircraft that flies above the playing surface, allowing the camera 351 and the depth sensor 47 to capture images of the playing surface as the aerial vehicle 255 flies. If desired, the aerial vehicle 255 may be coupled to a tether that holds the aerial vehicle 255 in the air and/or guides the aerial vehicle 255 as it moves. In other embodiments, the aerial vehicle 255 may be untethered such that it may fly freely under the direction of a pilot or remote controller. In such embodiments, the camera 351 captures images of the playing surface 382 from a position above the playing surface 382, and the depth sensor 47 measures the depth or distance to the playing surface. Each pixel of the image captured by the camera 351 is associated with a depth value representing the distance from the surface point represented by the pixel to the sensing system 312. Based on such depth values, the slope of the surface at various locations may be calculated and stored in the surface data 252 for later use in determining the direction of gravity, as described above.
During operation subsequent to the calibration process described above, the control logic 122 may be configured to determine the location of the playing surface that appears within an image before making a decision regarding the direction of gravity. As an example, for a given depth map 350, the control logic 122 may analyze the corresponding set of image data 349 to determine which portion of the playing surface appears within the image defined by that set of image data 349. As an example, based on a boundary marking (e.g., a sideline of a football field) within the image, the control logic 122 may determine that the portion of the playing surface within the image is near a sideline where the surface is significantly sloped. Based on the surface data 252, the control logic 122 determines the degree of surface slope at that location and calculates or otherwise determines the direction of gravity based on the slope. Specifically, the control logic 122 accounts for the slope by assigning, based on the surface data 252, a direction of gravity that is at an oblique angle relative to the playing surface at the identified location. Thus, even if the image used by the control logic 122 shows a sloped region of the playing surface, the direction of gravity determined by the logic 122 should be accurate.
Note that the sensing system 312 coupled to the aerial vehicle 255 may also be used to monitor athletes on the playing surface in accordance with the techniques described above. The above-described algorithm for determining the direction of gravity based on images captured by the camera 351 and the depth sensor 47 may be particularly useful in such embodiments. In this regard, the orientation of the sensing system 312 with respect to gravity may change frequently and abruptly while the aerial vehicle 255 is in flight. The algorithm for determining the direction of gravity based on the camera 351 and the depth sensor 47 may be performed repeatedly and frequently (such as multiple times per second) while consuming a relatively small amount of processing resources, yet still provide a very accurate estimate of the direction of gravity. These characteristics may be beneficial in various other applications as well.
In calculating the trajectory of a moving object, it can generally be assumed that the force exerted by gravity on such an object is constant. However, the magnitude of such forces generally varies with altitude. For example, gravity may be slightly different in magnitude for events occurring in mountainous areas relative to events occurring near sea level. In one exemplary embodiment, the processing system 46 is configured to take into account changes in altitude when performing trajectory calculations.
In this regard, the processing system 46 is configured to store gravity data 352 (fig. 12) indicative of the magnitude of gravity at various altitudes. Additionally, during operation, the control logic 122 is configured to determine an approximate altitude of the event being monitored by the processing system 46. As an example, a user may simply enter the altitude of the event through an input device (not shown) of the processing system 46, such as a keyboard, keypad, or mouse, or the processing system 46 may receive such information wirelessly via the wireless communication interface 145. Alternatively, the sensing system 312 may have a sensor (not shown), such as an altimeter or a location sensor (e.g., a GPS sensor), that may be used to automatically determine an approximate altitude of at least one component of the system 300, and thus an approximate altitude of the event at which the system 300 is located. In other embodiments, other techniques for determining altitude are possible.
After determining the altitude, the control logic 122 is configured to consult the gravity data 352 to determine the magnitude of gravity to be used for the trajectory calculations. As an example, the data 352 may be implemented as a table of altitude values and gravity values, and the control logic 122 may use the altitude value received from the sensing system 312, or otherwise obtained by the control logic 122, as a key for looking up the appropriate gravity value for the trajectory calculations. In other embodiments, the control logic 122 may be configured to algorithmically calculate an appropriate gravity value based on the determined altitude. In other embodiments, other techniques for determining a suitable gravity value for the trajectory calculations are possible. By determining the gravity value based on the actual altitude of the monitored event, more accurate trajectory calculations may be achieved, thereby improving the performance of the system 300.
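A small sketch of such an altitude correction is shown below; the free-air approximation used here (and the Denver example) is an assumption for illustration, since the gravity data 352 could equally well be a simple altitude-to-gravity lookup table.

```python
def gravity_at_altitude(altitude_m, g0=9.80665, earth_radius_m=6_371_000.0):
    """Approximate gravity magnitude at a given altitude above sea level using
    g(h) = g0 * (R / (R + h))^2; latitude effects are ignored in this sketch."""
    return g0 * (earth_radius_m / (earth_radius_m + altitude_m)) ** 2

# Example: gravity_at_altitude(1600.0) is roughly 9.8017 m/s^2 (about the
# altitude of Denver), versus 9.80665 m/s^2 at sea level.
```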
Various embodiments of a monitoring system are described above in the context of basketball. It should be emphasized that similar techniques may be used in other sports to define a gravity-based coordinate system and to convert sensor data into a format relative to the gravity-based coordinate system. As an example, for football, the sensing system 312 may be positioned such that a goalpost and the surface of the football field are within its field of view. Using techniques similar to those described above for basketball, a surface plane corresponding to the surface of the football field can be identified and used to determine the direction of gravity. Furthermore, the shape of the goalpost may be used to orient the gravity-based coordinate system relative to the boundaries and markings of the football field. A hockey goal may similarly be used to orient a gravity-based coordinate system. Similar techniques may be used to define and orient gravity-based coordinate systems in other sports.

Claims (16)

1. A system for monitoring athlete performance during a sporting event, comprising:
at least one camera configured to capture images of an object launched by a first athlete while participating in a sporting event; and
at least one processor configured to determine a position of the first player when the object is launched by the first player based on the images captured by the at least one camera and to analyze the captured images to determine a trajectory of the object launched by the first player, the at least one processor configured to determine a target zone of the object based on the determined position of the first player and to compare the trajectory to the target zone, wherein a size of the target zone is based on the determined position of the first player, the at least one processor configured to determine a value indicative of a performance of the first player when launching the object based on the comparison of the trajectory to the target zone, the at least one processor further configured to provide feedback indicative of the performance.
2. The system of claim 1, wherein the at least one processor is configured to determine a distance between the determined position of the first player and the target zone, and determine a size of the target zone based on the distance.
3. The system of claim 1, wherein the at least one processor is configured to determine the location of the target zone based on movement of at least one player in the sporting event.
4. The system of claim 3, wherein the at least one athlete comprises a recipient that receives the object from the first athlete.
5. The system of claim 4, wherein the at least one processor is configured to determine whether the object is caught by the recipient.
6. The system of claim 4, wherein the at least one processor is configured to determine the location of the target zone based on a velocity of the recipient.
7. The system of claim 4, wherein the at least one processor is configured to determine a location and a velocity of the recipient and to determine the location of the target zone based on the location and the velocity of the recipient.
8. The system of claim 7, wherein the at least one processor is configured to determine a distance between the determined position of the first player and the target zone, and determine a size of the target zone based on the distance.
9. The system of claim 1, wherein the at least one processor is configured to determine whether the trajectory intersects the target zone, and wherein the value is based on whether the trajectory is determined to intersect the target zone.
10. The system of claim 1, wherein the at least one processor is configured to determine whether a plurality of trajectories of one or more objects launched by the first athlete respectively intersect a plurality of target zones, and wherein the value represents a percentage of a number of trajectories that intersect a respective one of the plurality of target zones.
11. A method for monitoring athlete performance during a sporting event, comprising:
capturing, with at least one camera, an image of an object launched by a first athlete while participating in a sporting event; and
determining a position of a first player while the object is being launched by the first player based on images captured by the at least one camera;
analyzing the captured image to determine a trajectory of the object launched by the first player;
determining a target zone of the object based on the determined position of the first player, wherein a size of the target zone is based on the determined position of the first player;
comparing the trajectory to the target zone;
determining a value indicative of a first athlete's performance in launching the object based on the comparison; and
providing feedback indicative of the performance.
12. The method of claim 11, further comprising:
determining a distance between the determined position of the first player and the target zone; and
determining a size of the target zone based on the distance.
13. The method of claim 11, further comprising determining a location of the target zone based on movement of at least one player in the sporting event.
14. The method of claim 13, wherein the at least one athlete comprises a recipient that receives the object from the first athlete.
15. The method of claim 14, wherein the determination of the location of the target zone is based on a velocity of the recipient.
16. The method of claim 14, further comprising determining a location and a velocity of the recipient, and wherein determining the location of the target zone is based on the location and the velocity of the recipient.
CN202110901640.XA 2016-02-19 2017-02-21 System and method for monitoring athlete performance during a sporting event Active CN113599788B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662297528P 2016-02-19 2016-02-19
US62/297,528 2016-02-19
CN201780024691.0A CN109069903B (en) 2016-02-19 2017-02-21 System and method for monitoring objects in a sporting event
PCT/US2017/018725 WO2017143341A1 (en) 2016-02-19 2017-02-21 Systems and methods for monitoring objects at sporting events

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201780024691.0A Division CN109069903B (en) 2016-02-19 2017-02-21 System and method for monitoring objects in a sporting event

Publications (2)

Publication Number Publication Date
CN113599788A CN113599788A (en) 2021-11-05
CN113599788B (en) 2023-03-28

Family

ID=59626334

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201780024691.0A Active CN109069903B (en) 2016-02-19 2017-02-21 System and method for monitoring objects in a sporting event
CN202110901640.XA Active CN113599788B (en) 2016-02-19 2017-02-21 System and method for monitoring athlete performance during a sporting event

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201780024691.0A Active CN109069903B (en) 2016-02-19 2017-02-21 System and method for monitoring objects in a sporting event

Country Status (2)

Country Link
CN (2) CN109069903B (en)
WO (1) WO2017143341A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110141845A (en) * 2019-06-10 2019-08-20 湖南大狗科技有限公司 A kind of cycle racing rail safety management monitoring system based on unmanned plane
WO2021031159A1 (en) * 2019-08-21 2021-02-25 深圳市大疆创新科技有限公司 Match photographing method, electronic device, unmanned aerial vehicle and storage medium
CN113546386A (en) * 2020-09-30 2021-10-26 深圳华锐互动科技有限公司 Ball game training board subassembly
CN112933574B (en) * 2021-01-27 2022-05-24 北京驭胜晏然体育文化有限公司 Multi-split indoor ski game control method and system and readable storage medium
CN112802051B (en) * 2021-02-02 2022-05-17 新华智云科技有限公司 Fitting method and system of basketball shooting curve based on neural network
WO2023181419A1 (en) * 2022-03-25 2023-09-28 三菱電機株式会社 Golf assistance system, moving body, server device, golf assistance method, and golf assistance program
WO2023218627A1 (en) * 2022-05-13 2023-11-16 三菱電機株式会社 Golf assistance system, golf assistance method, and golf assistance program

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10360685B2 (en) * 2007-05-24 2019-07-23 Pillar Vision Corporation Stereoscopic image capture with performance outcome prediction in sporting environments
US7843470B2 (en) * 2005-01-31 2010-11-30 Canon Kabushiki Kaisha System, image processing apparatus, and information processing method
US7725257B2 (en) * 2006-09-05 2010-05-25 Honeywell International Inc. Method and system for navigation of an ummanned aerial vehicle in an urban environment
US8328653B2 (en) * 2007-09-21 2012-12-11 Playdata, Llc Object location and movement detection system and method
US8355042B2 (en) * 2008-10-16 2013-01-15 Spatial Cam Llc Controller in a camera for creating a panoramic image
US8416282B2 (en) * 2008-10-16 2013-04-09 Spatial Cam Llc Camera for creating a panoramic image
US8628453B2 (en) * 2008-12-05 2014-01-14 Nike, Inc. Athletic performance monitoring systems and methods in a team sports environment
CA3043730A1 (en) * 2009-03-27 2010-09-30 Russell Brands, Llc Monitoring of physical training events
JP2014165706A (en) * 2013-02-26 2014-09-08 Sony Corp Signal processing device and recording medium
US20150373306A1 (en) * 2014-06-20 2015-12-24 OnDeck Digital LLC Real-time video capture of field sports activities
WO2015200209A1 (en) * 2014-06-23 2015-12-30 Nixie Labs, Inc. Wearable unmanned aerial vehicles, launch- controlled unmanned aerial vehicles, and associated systems and methods

Also Published As

Publication number Publication date
CN113599788A (en) 2021-11-05
WO2017143341A1 (en) 2017-08-24
CN109069903A (en) 2018-12-21
CN109069903B (en) 2021-08-20

Similar Documents

Publication Publication Date Title
US11450106B2 (en) Systems and methods for monitoring objects at sporting events
CN113599788B (en) System and method for monitoring athlete performance during a sporting event
US11541294B2 (en) Golf aid including heads up display for green reading
US10661149B2 (en) Mixed-reality sports tracking and simulation
US11752417B2 (en) Electronic tracking system with heads up display
EP2977087B1 (en) Athletic activity heads up display systems and methods
US11596852B2 (en) Swing alert system and method
CN112969513B (en) System and method for determining reduced athlete performance in a sporting event
US20150343292A1 (en) Golf aid including virtual caddy
KR20220047863A (en) System, apparatus and method for master clock and composite image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant