US20200310630A1 - Method and system of selecting a view from a plurality of cameras - Google Patents

Method and system of selecting a view from a plurality of cameras

Info

Publication number
US20200310630A1
Authority
US
United States
Prior art keywords
interest
cameras
camera
area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/903,224
Inventor
Rudolf O. Ernst
James White
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pixia Corp
Original Assignee
Pixia Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixia Corp filed Critical Pixia Corp
Priority to US16/903,224
Assigned to PIXIA CORP. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ERNST, RUDOLF O., WHITE, JAMES
Publication of US20200310630A1
Assigned to ALTER DOMUS (US) LLC SECOND LIEN SECURITY AGREEMENT Assignors: CUBIC CORPORATION, NUVOTRONICS, INC., PIXIA CORP.
Assigned to BARCLAYS BANK PLC FIRST LIEN SECURITY AGREEMENT Assignors: CUBIC CORPORATION, NUVOTRONICS, INC., PIXIA CORP.

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • G06K9/4604
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/001
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N5/247
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Definitions

  • the present invention pertains to image data management and in particular to a method and system of selecting a view or an area of interest (AOI) from a plurality of images captured by a plurality of cameras.
  • a relatively large image generally contains a plurality of pixels, e.g., millions of pixels.
  • Each pixel has one, two or more bands.
  • Each band has a certain color depth or bit depth.
  • an RGB color-based image has 3 bands, the red band (R), the green band (G) and the blue band (B).
  • R, G and B bands can have a depth of 8 bits or more.
  • each pixel can have a total bit depth of 24 bits or more.
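  • As an illustration of this arithmetic, the short sketch below computes the uncompressed size of one frame from its pixel dimensions, band count and band depth; the function name and the DCI 4K example values are chosen here for illustration only.

```python
def frame_size_bytes(width_px: int, height_px: int, bands: int = 3, bits_per_band: int = 8) -> int:
    """Uncompressed size of one frame: pixels * bands * bytes per band."""
    return width_px * height_px * bands * (bits_per_band // 8)

# Example: a DCI 4K RGB frame (4096 x 2160, 3 bands, 8 bits per band)
# is roughly 26.5 million bytes (about 25 MB) uncompressed.
print(frame_size_bytes(4096, 2160, bands=3, bits_per_band=8))  # 26542080
```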
  • a video or film camera captures a plurality of images or frames at a certain rate, i.e., a number of times per second (frames per second).
  • Each captured image or frame in a digital form, has a certain size defined by its pixel height and its pixel width. Pixel height corresponds to a number of rows of pixels in the image and pixel width corresponds to the number of columns in the image.
  • Each image pixel has one or more bands.
  • Each band can represent a quantized signal from a range of frequencies from the electromagnetic spectrum. For example, in the visible spectrum, the bands can represent different colors such as red, blue, and green (RGB).
  • An image sensor or camera can be used to capture a series of images or frames. Each image or frame in the series of images or frames can have millions of pixels. Each image may have a relatively high resolution, such as 4K, 6K, 8K or more. As understood in the art, a 4K resolution refers to content or image(s) having horizontal resolution on the order of 4,000 pixels. Several 4K resolutions exist in the fields of digital television and digital cinematography. In the movie projection industry, DIGITAL CINEMA INITIATIVES (DCI) is the dominant 4K standard. A 4K resolution, as defined by DCI, is 4096 pixels × 2160 pixels (approximately a 1.9:1 aspect ratio). An image of 4096 pixels by 2160 pixels has about 9 Megapixels (MP). As specified in standards for Ultra High Definition television, 4K resolution is also defined as 3840 pixels × 2160 pixels (approximately a 1.78:1 aspect ratio). The following TABLE 1 provides examples of known standards for relatively high resolution images captured by industry standard camera sensors.
  • the images may be captured in sequence, for example at a reasonably constant frequency (e.g., 24 images or frames per second (fps), 48 fps, 60 fps, etc.).
  • Each image (i.e., each still image or frame) in the sequence or series of images may have one or more distinct bands and may cover any part of the electromagnetic spectrum that can be captured by the image sensor or camera, including, the visible (VIS) spectrum, the infrared (IR) spectrum or the ultraviolet (UV) spectrum.
  • the image sensor or camera may be a single sensor or a combination or a matrix of multiple smaller sensors or cameras that can be arranged to generate a larger image. Each smaller sensor or camera can be configured to capture a plurality of smaller images. The smaller images captured by the smaller sensors or cameras can then be combined (or stitched) to form a plurality of larger images.
  • In the media and entertainment industry, on network television, on cable television, on broadcast television, and on digitally distributed video content, some of the highest image pixel resolutions are known as 4K and 8K.
  • In digital cinema (DCI), 4K is an image frame that is 4096 pixels wide by 2160 pixels tall.
  • In Ultra High Definition television, 4K is an image frame that is 3840 pixels wide by 2160 pixels tall.
  • 8K image frame sizes are also gaining ground.
  • the most popular distribution formats are still 720P and 1080P which have image frame sizes of 1280 pixels wide by 720 pixels tall and 1920 pixels wide by 1080 pixels tall, respectively.
  • 4K media playback devices have reached the mainstream market. These devices can also interpolate and display 1080P content.
  • none of the known image technologies or systems are able to provide or deliver a plurality of images or a video where an object in physical space captured in one or more images of the plurality of images can be mapped to pixels of the one or more images.
  • An aspect of the present invention is to provide a method of selecting an area of interest from a plurality of images captured by a plurality of cameras, the method being implemented by a computer system that includes one or more processor units configured to execute computer program modules.
  • the method includes receiving, by the one or more processor units, a request for an area of interest from a plurality of images, the area of interest containing an object of interest selected by a user, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions, the request containing an identity of the object of interest; determining, by the one or more processor units, a camera within the plurality of cameras that wholly contains the area of interest using the physical location of the object of interest; transforming, by the one or more processor units, the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest and is wholly contained within the image captured by the specific camera; and extracting, by the one or more processor units, the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
  • Another aspect of the present invention is to provide a method of extracting an area of interest containing an object of interest from a plurality of images captured using a plurality of cameras, the method being implemented by a computer system that includes one or more processor units configured to execute computer program modules.
  • the method includes determining, by the one or more processor units, using the data of the locator or the physical location of the object of interest, a camera within the plurality of cameras that wholly contains the area of interest, the area of interest containing the object of interest, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions; transforming, by the one or more processor units, the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and extracting, by the one or more processor units, the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
  • a further aspect of the present invention is to provide a computer system for selecting an area of interest from a plurality of images captured by a plurality of cameras.
  • the system includes one or more processor units configured to: receive a request for an area of interest from a plurality of images, the area of interest containing an object of interest selected by a user, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions, the request containing an identity of the object of interest; determine a camera within the plurality of cameras that wholly contains the area of interest using the physical location of the object of interest; transform the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and extract the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
  • FIG. 1 depicts schematically a moving object being tracked by a tracking device attached to the object and a transmitter attached to the object transmitting tracking information, according to an embodiment of the present invention
  • FIG. 2 depicts schematically the transmitter attached to the object transmitting the tracking information as a data stream to a receiver device, according to an embodiment of the present invention
  • FIG. 3 depicts schematically the receiver device connected to a computer system that receives the data stream and stores it on a storage device, according to an embodiment of the present invention
  • FIG. 4 depicts schematically a video camera, according to an embodiment of the present invention
  • FIG. 5 depicts schematically the video camera collecting an image at an instant in time, according to an embodiment of the present invention
  • FIG. 6 depicts schematically the video camera collecting an image at an instant in time where the camera is static (fixed in position) and collecting the image at an incident angle, and the image having a perspective view of object(s) being captured in the image, according to an embodiment of the present invention
  • FIG. 7 depicts schematically the video camera collecting an image at an instant in time where the camera is static and collecting the image looking down on the object(s), and the image is a top-view of the field of view, according to an embodiment of the present invention
  • FIG. 8 depicts schematically the video camera having sensors that provide a position of the camera, for example, GPS sensor, IMU sensor and/or any other type of sensors such as visual sensor, according to an embodiment of the present invention
  • FIG. 9 depicts schematically an object being tracked and captured in an image at a location at an instant in time, in the field of view of the camera with a known location and orientation; the pixel position of the object in the image at that instant in time can be computed from the physical location of the object, according to an embodiment of the present invention
  • FIG. 10 depicts a conversion from the physical location of the object, within the field of view of the camera, being tracked or captured, to the pixel location of the object within an image captured by the camera at substantially the same instant in time, according to an embodiment of the present invention
  • FIG. 11 depicts schematically a video camera with various sensors that determine the position of the camera including, a GPS sensor, an IMU sensor or other types of sensors, according to an embodiment of the present invention
  • FIG. 12 depicts schematically a video camera arranged in a matrix of a plurality of cameras such that the field of view of one camera substantially overlaps the field of view of an adjacent camera by a known amount, according to an embodiment of the present invention
  • FIG. 13 depicts an image I Ca captured by the camera, the image having a pixel width W and a pixel height H, according to an embodiment of the present invention
  • FIG. 14 depicts an image I D selected for display on a display device D, where the pixel width of image I D is D W pixels and the pixel height of image I D is D H pixels, where I D is a portion of image I Ca captured by the camera such that pixel width W>D W and pixel height H>D H , according to an embodiment of the present invention
  • FIG. 15 depicts schematically an example of image I D and image I Ca such that W/2>D W and H/2>D H , that is the pixel width and pixel height of the displayed area of interest from the camera image are no more than half the pixel width and pixel height of the image captured by the camera, according to an embodiment of the present invention
  • FIG. 16 depicts schematically an example of two adjacent cameras, each camera generating an image of pixel width W and pixel height H, with a horizontal overlap of O W pixels where O W ≥ D W , according to an embodiment of the present invention
  • FIG. 17 depicts schematically an example of two adjacent cameras, each camera generating an image of pixel width W and pixel height H, with a horizontal overlap of O W pixels where O W ≥ D W , and four examples of areas of interest D 1 , D 2 , D 3 and D 4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention;
  • FIG. 18 depicts schematically an example of two adjacent cameras, each generating an image of pixel width W and pixel height H, with a vertical overlap of O H pixels where O H ≥ D H , according to an embodiment of the present invention
  • FIG. 19 depicts an example of two adjacent cameras, each generating an image of pixel width W and pixel height H, with a vertical overlap of O H pixels where O H ≥ D H , and depicts four examples of areas of interest D 1 , D 2 , D 3 and D 4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention
  • FIG. 20 depicts schematically images generated by a matrix of a plurality of cameras where the cameras are configured such that the pixel data in one image generated by one camera is substantially identical to the pixel data of one image generated by an adjacent camera due to overlapping fields of view of one camera with the adjacent camera, according to an embodiment of the present invention
  • FIG. 21 depicts schematically how a pixel in an aggregated image generated from the matrix of cameras mathematically maps to a pixel within one of the constituent images from one or more of the cameras, according to an embodiment of the present invention
  • FIG. 22 depicts schematically one image captured at one instant in time by a camera, resulting in the capture of a plurality of temporally sequential images captured over time at a known frame rate, according to an embodiment of the present invention
  • FIG. 23 depicts a conversion of a plurality of temporally sequential images I Ca captured by the camera into a video stream V Ca in a known format and encoding, according to an embodiment of the present invention
  • FIG. 24 depicts schematically a video stream V Ca in a known format and encoding being decoded into a plurality of temporally sequential images I Ca(decoded) that are substantially identical to the images originally captured by the camera and converted into the video stream V Ca , according to an embodiment of the present invention
  • FIG. 25 depicts schematically a desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images I Ca(decoded) captured over time T, according to an embodiment of the present invention
  • FIG. 26 depicts schematically a plurality of images representing the desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images I Ca(decoded) captured over time T being converted into a video codestream V ID of a known format and encoding, according to an embodiment of the present invention
  • FIG. 27 depicts schematically a plurality of cameras arranged in a matrix such that the field of view of one camera substantially overlaps the field of view of an adjacent camera, and each camera generates a video containing some pixels in the video from one camera that are substantially identical to some pixels in the video from an adjacent camera because of overlapping fields of view, according to an embodiment of the present invention
  • FIG. 28 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be represented by one or more pixels within the video generated by the at least one camera, according to an embodiment of the present invention
  • FIG. 29 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be accurately represented by one or more pixels within the video generated by the camera, and depicts a pixel width and a pixel height of an overlapping region greater than or equal to the pixel width and pixel height of a desired area of interest centered around the one or more pixels representing the object being tracked, according to an embodiment of the present invention;
  • FIG. 30 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representation of the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view, according to an embodiment of the present invention
  • FIG. 31 depicts that a plurality of images comprising the desired area of interest centered around the pixel representation of the object being tracked are extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and then converted into a video codestream in a known format and encoding, according to an embodiment of the present invention
  • FIG. 32 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representing the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and converted into a video codestream in a known format and encoding and transmitted over a LAN, WAN or Internet to a computing device for decoding, viewing or image processing and analysis or display on a display device, according to an embodiment of the present invention;
  • FIG. 33 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked live, according to an embodiment of the present invention
  • FIG. 34 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked from an existing archive for a requested time window, according to an embodiment of the present invention
  • FIG. 35 depicts an example of a computer system for implementing one or more of the methods and systems of delivering or viewing a video of an area of interest centered on a physical location of an object being tracked, according to an embodiment of the present invention
  • FIGS. 36A-36D depict an example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to an embodiment of the present invention
  • FIG. 37 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention.
  • FIG. 38 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention.
  • FIGS. 39A-39D depict an example of a display device (e.g., a smart device) for viewing a custom on-demand video of an object of interest, according to an embodiment of the present invention
  • FIGS. 40A-40D depict an example of interaction of a user on a graphical user interface (GUI) providing functionality to access a custom video, according to an embodiment of the present invention.
  • FIG. 41 depicts another configuration of the GUI that is displayed on a display device (e.g., a smart device) of a user, according to an embodiment of the present invention.
  • FIG. 1 depicts schematically a moving object being tracked by a tracking device attached to the object and a transmitter attached to the object transmitting tracking information, according to an embodiment of the present invention.
  • an object O m , whether static or in motion, can be tracked using a tracking device or a locator beacon L b .
  • a locator beacon L b is an electronic device that uses one or more sources of electromagnetic frequency.
  • locator beacon L b can be a detectable pattern or visual reference.
  • An example of a locator beacon L b can be a Global Positioning System (GPS) unit, an Inertial Measurement Unit (IMU), a BLUETOOTH device, a laser, an optical sensor, an image pattern or visual reference (a discernable color), a marking, or any combination thereof.
  • the locator beacon L b generates or helps generate or compute the absolute position OP i of object O m in three dimensions (3D) at a given instant in time Ti across time T.
  • the physical location or position of object O m in 3D space can be written as OP i (X i ,Y i ,Z i ).
  • X i can be longitude
  • Y i can be latitude
  • Z i can be elevation.
  • a reference point OP R (X R , Y R , Z R ) can be provided for the location OP i .
  • the reference point OP R has coordinates (X R , Y R , Z R ) in the 3D space.
  • a reference point for longitude, latitude and elevation can be (0, 0, sea-level).
  • the locator beacon L b can generate or assist in the generation of a stream of ‘n’ position points OP i (X i ,Y i ,Z i ) over time T.
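  • A minimal sketch of how such a stream of position points might be represented follows; the record layout and field names are assumptions for illustration, not a format defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class PositionSample:
    """One locator-beacon reading OP_i = (X_i, Y_i, Z_i) at time T_i.

    X/Y/Z are interpreted here as longitude, latitude and elevation
    relative to a reference point OP_R, per the description above.
    """
    t: float  # capture time T_i in seconds
    x: float  # longitude (or X relative to OP_R)
    y: float  # latitude  (or Y relative to OP_R)
    z: float  # elevation (or Z relative to OP_R)

# A stream of 'n' samples over time T is then simply a time-ordered list:
track: list[PositionSample] = [
    PositionSample(t=0.00, x=-77.20, y=38.95, z=120.0),
    PositionSample(t=0.04, x=-77.20, y=38.95, z=120.1),
]
```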
  • Object (O m ) can be static (immobile) or can be in motion.
  • the object O m can be a shoe, a car, a projectile, a baseball, an American football, a soccer ball, a helmet, a horse, a ski or ski-boot, a skating shoe, etc.
  • the object can be an aquatic, a land-based, or an airborne lifeform.
  • the object can also be a manmade item that is tagged with the locator beacon.
  • the Object O m may also be provided with a wired or wireless transmission device D xmit .
  • the device D xmit can be configured to transmit a data stream from locator device L b to a receiver using some known protocol, e.g. HTTP, TCP or HTTPS.
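  • The sketch below illustrates one possible way such a reading could be transmitted over HTTP or HTTPS using only the Python standard library; the endpoint URL and the JSON payload schema are hypothetical.

```python
import json
import urllib.request

def send_sample(endpoint: str, beacon_id: str, sample: dict) -> int:
    """POST one locator reading as JSON to a receiver endpoint (hypothetical URL and schema)."""
    body = json.dumps({"beacon": beacon_id, **sample}).encode("utf-8")
    req = urllib.request.Request(endpoint, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example call (assumed endpoint):
# send_sample("https://receiver.example.com/track", "Lb-01",
#             {"t": 0.04, "x": -77.20, "y": 38.95, "z": 120.1})
```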
  • FIG. 2 depicts schematically the transmitter attached to the object transmitting the tracking information from locator device L b as a data stream to a receiver device, according to an embodiment of the present invention.
  • the data stream or collection of readings captured by locator beacon L b is transmitted by transmission device (transmitter) D xmit and received by receiver device (receiver) D rcv .
  • the receiver device (receiver) D rcv can be a wired device, a wireless device, or a cellular device.
  • FIG. 3 depicts schematically the receiver device D rcv connected to a computer system C o that receives the data stream and stores it on a storage device S, according to an embodiment of the present invention.
  • the receiver D rcv sends the data to a computer system C o which in turn stores the received data on to a storage device S (e.g., a hard-drive, a network attached storage (NAS) device, a storage area network (SAN)).
  • the data includes the data from locator device L b , identifier of locator device L b , metadata, the physical location or position of object O m in 3D space OP i (X i ,Y i ,Z i ), derived data, or any related data.
  • FIG. 4 depicts schematically a video camera, according to an embodiment of the present invention.
  • FIG. 5 depicts schematically the video camera collecting an image at an instant in time, according to an embodiment of the present invention.
  • the camera C a , e.g., a conventional video camera as depicted in FIG. 4 , captures a plurality of temporally sequential images I Ca .
  • the camera image comprises a plurality of pixels C Pi (X, Y), where i is an integer referring to a specific pixel and X and Y refer to the position or coordinates of pixel i.
  • the camera C a captures a video with a relatively large image size such as, for example, 4K, 8K, or greater.
  • FIG. 6 depicts schematically the video camera collecting an image at an instant in time where the camera is static (fixed in position) and collecting the image at an incident angle, and the image having a perspective view of object(s) being captured in the image, according to an embodiment of the present invention.
  • the resulting image coverage is trapezoidal where the objects closer to the camera appear larger and those farther from the camera appear smaller.
  • FIG. 7 depicts schematically the video camera collecting an image at an instant in time.
  • the camera is static and collects the image looking down on the object(s), and the image is a top-view of the field of view, according to an embodiment of the present invention.
  • the position of the camera is looking down from an elevated position onto a field of view containing objects.
  • FIG. 8 depicts schematically the video camera having sensors that provide a position of the camera, for example, a GPS sensor, an IMU sensor and/or any other type of sensors such as visual sensor, according to an embodiment of the present invention.
  • camera C a can also be equipped with an optional GPS, IMU or other positioning sensors.
  • the orientation, pan, tilt and zoom values of camera C a remain constant.
  • the absolute position of the camera can be used to link the position of each pixel in the image captured by the camera to the physical space or field of view being captured by the camera C a .
  • FIG. 9 depicts schematically an object being tracked and captured in an image at a location at an instant in time, in the field of view of the camera C a with a known location and orientation; the pixel position of the object in the image at that instant in time can be computed from the physical location of the object, according to an embodiment of the present invention.
  • As shown in FIG. 9 , a tracked point OP i (X i ,Y i ,Z i ) in 3D space at time T i , that falls within the field of view of the camera C a , has a corresponding projected point C Pi (X,Y) within an image captured by the camera C a at about the same time, OP i (X i ,Y i ,Z i ) → C Pi (X, Y).
  • FIG. 10 depicts a conversion from the physical location of the object, within the field of view of the camera, being tracked or captured, to the pixel location of the object within an image captured by the camera at substantially the same instant in time, according to an embodiment of the present invention.
  • As shown in FIG. 10 , the computation for image distortion and subsequent orthographic adjustment can also be performed if desired by the user.
  • This data is computed by computer C o and stored on storage S. In an embodiment, conventional processing and image tracking methods can be used to accomplish this conversion.
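  • The disclosure does not prescribe a particular projection model; the sketch below uses a simple pinhole-camera transform, with assumed extrinsics (R, t) and intrinsics (fx, fy, cx, cy), as one way the physical point OP i could be converted to the pixel position C Pi ; lens-distortion correction is omitted.

```python
import numpy as np

def world_to_pixel(op_world: np.ndarray, R: np.ndarray, t: np.ndarray,
                   fx: float, fy: float, cx: float, cy: float) -> tuple[int, int]:
    """Project a physical point OP_i into pixel coordinates C_Pi for one camera.

    R (3x3) and t (3,) express the camera pose; fx, fy, cx, cy are intrinsics.
    A basic pinhole model stands in for whatever calibrated transform a real
    deployment would use.
    """
    p_cam = R @ op_world + t            # world -> camera coordinates
    x = fx * p_cam[0] / p_cam[2] + cx   # perspective divide plus intrinsics
    y = fy * p_cam[1] / p_cam[2] + cy
    return int(round(x)), int(round(y))
```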
  • FIG. 11 depicts schematically a video camera with sensors that determine the position of the camera such as, a GPS sensor, an IMU sensor or other types of sensors, according to an embodiment of the present invention.
  • FIG. 11 depicts a camera C a that may be identical to the one shown in FIG. 8 .
  • FIG. 12 depicts schematically a matrix of a plurality of cameras such that the field of view of one camera substantially overlaps the field of view of an adjacent camera by a known amount, according to an embodiment of the present invention.
  • An array of such cameras can be arranged in a matrix configuration comprising rows 1 to r and columns 1 to c where C a(i,j) is one camera in the matrix.
  • cameras C a(i+1,j) and C a(i,j+1) are adjacent cameras to C a(i,j) .
  • the indices i and j represent the row and column numbers in the matrix of cameras, respectively.
  • the cameras can be mounted in such a way that the field of view of one camera C a(i,j) overlaps (has at least some overlap) with the field of view of an adjacent camera.
  • the quantity of overlap can be configured and selected as needed.
  • the cameras can have an overlap such that at least 50% of the pixels in the horizontal direction for one camera are substantially identical to at least 50% of the pixels in the horizontal direction for an adjacent camera to the left or right of the one camera.
  • the cameras can have an overlap such that at least 50% of the pixels in the vertical direction for one camera are substantially identical to at least 50% of the pixels in the vertical direction for an adjacent camera to the top or bottom of the one camera.
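  • Under these overlap assumptions, a simple check like the following (illustrative only) verifies that a requested area of interest of D W by D H pixels can always be wholly contained within the image of at least one camera.

```python
def overlap_supports_aoi(W: int, H: int, O_W: int, O_H: int, D_W: int, D_H: int) -> bool:
    """True if the configured overlaps can wholly contain a D_W x D_H area of interest.

    Mirrors the constraints discussed later in the text: O_W >= W/2 and O_H >= H/2
    for 50% overlap, and D_W <= O_W and D_H <= O_H so the AOI always fits in one image.
    """
    return O_W >= W // 2 and O_H >= H // 2 and D_W <= O_W and D_H <= O_H

# Example with the 4K UHD numbers used later in the text:
print(overlap_supports_aoi(W=3840, H=2160, O_W=1920, O_H=1080, D_W=1920, D_H=1080))  # True
```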
  • FIG. 13 depicts an image I Ca captured by one camera in the matrix of cameras, the image having a pixel width W and a pixel height H, according to an embodiment of the present invention.
  • the image I Ca as captured by such camera C a has pixel width W and pixel height H.
  • the pixel width W and pixel height H of the image captured by the camera can be selected as desired by the user, for example, using one of the settings of the camera.
  • FIG. 14 depicts an image I D selected for display on a display device D, where the pixel width of displayed image I D is D W pixels and the pixel height of image I D is D H pixels, according to an embodiment of the present invention.
  • the displayed image I D is a portion of captured image I Ca captured by the camera such that pixel width W of captured image I Ca is greater than the pixel width D W of displayed image I D (W>D W ) and pixel height H of captured image I Ca is greater than pixel height D H of displayed image I D (H>D H ).
  • the term “displayed” is used in this example to indicate an image displayed on a display device such as a computer screen, although it must be appreciated that the term “displayed image” can also encompass an image that is transformed or transmitted or otherwise processed and is not limited to only displaying the image on a display device.
  • FIG. 15 depicts schematically an example of image I D and image I Ca such that W/2 ≥ D W and H/2 ≥ D H .
  • that is, the pixel width and pixel height of the displayed area of interest from the camera image are no more than half the pixel width and pixel height of the image captured by the camera, according to an embodiment of the present invention.
  • the image I D is a sub-image of I Ca where I D is an area of interest within I Ca .
  • W is 3840
  • H is 2160
  • D W is 1920 or smaller
  • D H is 1080 or smaller.
  • a display device D does not need to be a computer monitor or computer screen.
  • Display D can be a window that has the following dimensions: D W is 960 pixels and D H is 540 pixels.
  • when using a matrix of cameras as shown in FIG. 12 , the cameras are synchronized with each other.
  • a first camera covers a first field of view and a second camera adjacent to the first camera covers a second field of view to the left or to the right of the first field of view.
  • the first and second cameras are temporally synchronized, for example by using a ‘genlock’ signal. This implies that the first camera captures a first image at a first instant in time, and the second camera captures a second image at a second instant in time and the first instant in time and the second instant in time are substantially the same.
  • FIG. 16 depicts schematically an example of images captured by two adjacent cameras (e.g., the first camera and the second camera), each camera generating an image of pixel width W and pixel height H, with a horizontal overlap between the images of O W pixels where O W ≥ D W , according to an embodiment of the present invention.
  • the first image captured by the first camera and the second image captured by the second camera overlap by a certain amount O W .
  • the first image has pixels in the horizontal direction, along the width of the first image that are substantially identical to the pixels in the horizontal direction, along the width of the second image. This is referred to as a pixel overlap or image overlap or overlapping image.
  • the overlap is O W pixels wide.
  • the overlap O W is greater than or equal to the width D W of the display device or of the window displayed on the display device (i.e., O W ≥ D W ).
  • FIG. 17 depicts schematically an example of two images generated by two adjacent cameras, each camera generating an image of pixel width W and pixel height H, with a horizontal overlap between the two images of O W pixels where O W ≥ D W , according to an embodiment of the present invention.
  • FIG. 17 further depicts four examples of an area of interest D 1 , D 2 , D 3 and D 4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention.
  • As shown in FIG. 17 , area of interest (AOI) D 1 is contained within a first half of the first image I Ca(0,0)
  • AOI D 2 is contained in both the second half of the first image as well as the first half of the second image in the overlap area O W
  • AOI D 3 is contained within the second half of the second image, I Ca(0,1) .
  • a portion of AOI D 4 is contained within the second half of the second image and another portion of AOI D 4 is contained within the first half of the second image and the second half of the first image, in the overlap area O W .
  • a first camera may cover a first field of view and a second camera may cover a second field of view above or below the first field of view of the first camera.
  • the first and second cameras are temporally synchronized, for example by using a ‘genlock’ signal. This implies that the first camera captures a first image at a first instant in time, and the second camera captures a second image at a second instant in time and the first instant in time and the second instant in time are substantially the same.
  • FIG. 18 depicts schematically an example of two images from two adjacent cameras (e.g., the first camera and the second camera), each camera generating an image of pixel width W and pixel height H, with a vertical overlap of O H pixels where O H ≥ D H , according to an embodiment of the present invention.
  • a first image captured by the first camera and a second image captured by the second camera overlap by a certain amount O H .
  • the first image has pixels in the vertical direction, along the height of the first image that are substantially identical to the pixels in the vertical direction, along the height of the second image. This is referred to as a pixel overlap or image overlap or overlapping image.
  • the overlap is O H pixels tall.
  • the overlap O H is greater than or equal to the height D H of the display device or window (O H ≥ D H ).
  • FIG. 19 depicts an example of two adjacent first and second images from first and second cameras, each generating an image of pixel width W and pixel height H, with a vertical overlap of O H pixels where O H ≥ D H , and depicts four examples of areas of interest D 1 , D 2 , D 3 and D 4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention.
  • the area of interest D 1 , D 2 , D 3 or D 4 is such that O H ≥ D H
  • the area of interest D 1 , D 2 , D 3 or D 4 is wholly contained within at least the image I Ca of one camera.
  • area of interest (AOI) D 1 is contained within a first half of the first image I Ca(0,0)
  • AOI D 2 is contained in both the second half of the first image as well as the first half of the second image in the overlap area O H
  • AOI D 3 is contained within the second half of the second image, I Ca(1,0) .
  • a portion of AOI D 4 is contained within the second half of the second image and another portion of AOI D 4 is contained within the first half of the second image and the second half of the first image, in the overlap area O H .
  • FIG. 20 depicts schematically images generated by a matrix of a plurality of cameras where the cameras are configured such that the pixel data in one image generated by one camera is substantially identical to the pixel data of one image generated by an adjacent camera due to overlapping fields of view of one camera with the adjacent camera, according to an embodiment of the present invention.
  • FIG. 20 depicts the matrix of cameras of r-rows and c-columns, arranged to be adjacent to each other with overlapping fields of view, such that the images generated by one camera overlaps with the images generated by an adjacent camera in the horizontal plane and/or in the vertical plane.
  • the images generated by the plurality of adjacent cameras (or matrix of cameras) form a larger aggregated image having a width W full and a height H full .
  • the notional aggregated image is simply logical in nature as it is not physically generated by stitching images from each camera together to form a larger image.
  • the row and column values or indices in the matrix of images from the plurality of cameras can be used for further computation.
  • FIG. 21 depicts schematically how a pixel in an aggregated image generated from the matrix of cameras mathematically maps to a pixel within one of the constituent images from one or more of the cameras, according to an embodiment of the present invention.
  • pixel C Pi (X full ,Y full ) in the resulting notional aggregated image maps to an actual pixel C Pi (X,Y) which belongs to an image within a specific camera. The position of this pixel can be derived mathematically.
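  • A sketch of this mapping, assuming a uniform overlap of O W by O H pixels between adjacent cameras (with O W less than W and O H less than H), is given below; since pixels in an overlap region exist in more than one camera image, this version simply returns the largest-index camera whose image contains the pixel.

```python
def full_to_camera_pixel(x_full: int, y_full: int, W: int, H: int,
                         O_W: int, O_H: int, rows: int, cols: int) -> tuple[int, int, int, int]:
    """Map a pixel of the notional aggregated image to (row, col, x, y) in one camera image.

    Assumes a uniform overlap of O_W x O_H pixels, so each additional camera
    contributes (W - O_W) new columns and (H - O_H) new rows. Pixels inside an
    overlap region exist in two cameras; the largest-index containing camera is returned.
    """
    step_x, step_y = W - O_W, H - O_H
    col = min(x_full // step_x, cols - 1)
    row = min(y_full // step_y, rows - 1)
    return row, col, x_full - col * step_x, y_full - row * step_y
```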
  • FIG. 22 depicts schematically one image captured at one instant in time by a camera, resulting in the capture of a plurality of temporally sequential images captured over time at a known frame rate, according to an embodiment of the present invention.
  • a single image I Ca is captured by a camera at a given instant in time T i .
  • the plurality of temporally sequential images are captured by one camera at a known frame rate Hz (frame/second). If all cameras are synchronized, all cameras generate such data at substantially the same rate Hz and substantially same instants in time.
  • FIG. 23 depicts a conversion of a plurality of temporally sequential images I Ca captured by one camera into a video stream V Ca in a known format and encoding, according to an embodiment of the present invention.
  • each image I Ca captured by one camera C a can be encoded inside or outside the camera into a video stream V Ca . Therefore, video stream V Ca can be delivered from the camera at a known frame rate (Hz) and in a known compressed or uncompressed format.
  • the sequence of images, each image being I Ca is converted to video stream V Ca .
  • the video stream V Ca can be stored in primary or secondary computer memory or stored in a storage device for further processing or transmission.
  • the video stream V Ca can, for example, be displayed on a display device.
  • FIG. 24 depicts schematically a video stream V Ca in a known format and encoding being decoded into a plurality of temporally sequential images I Ca(decoded) that are substantially identical to the images originally captured by the camera and converted into the video stream V Ca , according to an embodiment of the present invention.
  • video V Ca can be decoded by another device into a plurality of temporally sequential images, each image I Ca(decoded) being substantially similar to the corresponding original image I Ca used to create V Ca .
  • Video V Ca is delivered from a camera at a known delivery or capture frame rate (in Hz) and in a known compressed or uncompressed format.
  • the sequence of images in the video is decoded from the video at a known decoding frame rate.
  • the decoding frame rate can be the same or different from the delivery or capture rate of the video.
  • FIG. 25 depicts schematically a desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images I Ca(decoded) captured over time T, according to an embodiment of the present invention.
  • FIG. 25 depicts that a desired area of interest I D can be extracted from an image I Ca(decoded) .
  • a plurality of areas of interest similar to I D can be extracted from a plurality of images similar to I Ca(decoded) .
  • FIG. 26 depicts schematically a plurality of images representing the desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images I Ca(decoded) captured over time T being converted into a video codestream V ID of a known format and encoding, according to an embodiment of the present invention.
  • the extracted plurality of areas of interest images, each image being an area of interest image I D from I Ca(decoded) can then be encoded to a video codestream V ID of known format.
  • FIG. 27 depicts schematically a plurality of cameras arranged in a matrix configuration such that the field of view of one camera has at least some overlap with the field of view of an adjacent camera, and each camera generates a video containing some pixels in the video from one camera that are substantially identical to some pixels in the video from an adjacent camera because of overlapping fields of view, according to an embodiment of the present invention.
  • the plurality of cameras can be configured to overlap by a width of O W pixels and a height of O H pixels, where, for example, O W is at least W/2 (i.e., greater than or equal to half the width W of an image captured by one camera) and O H is at least H/2 (i.e., greater than or equal to half the height H of an image captured by one camera).
  • the plurality of cameras can be synchronized with each other such that the instant in time when an image is captured by one camera is substantially the same as the instant in time when an image is captured by all other cameras.
  • the cameras can also be color calibrated to substantially match each other's color space.
  • all cameras are configured to generate video streams, depicted as V Ca1 , V Ca2 , V Ca3 , and so on.
  • a video matrix of overlapping video streams can be obtained from the matrix of cameras which capture overlapping images/videos.
  • FIG. 28 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be represented by one or more pixels within the video generated by the at least one camera, according to an embodiment of the present invention.
  • any point OP i (X i ,Y i ,Z i ) in the physical space, that is within the field of view of a camera at any given instant in time T i can be transformed into pixel space C Pi (X,Y) for a specific camera. Since the camera can be identified, the video that is generated by the camera, and the instant in time within that video can be found using any conventional method.
  • if point OP i (X i ,Y i ,Z i ) is the location in the physical space of an object O m being tracked, then the pixel location of object O m can be projected to a pixel C Pi (X,Y) within a video stream being captured by a camera that is in turn part of the camera matrix, provided OP i (X i ,Y i ,Z i ) is within the field of view of at least one camera within the camera matrix.
  • a point OP i (X i , Y i , Z i ) that depicts the approximate location of the object O m being tracked can be generated at each instant in time T i when an image was captured by the camera.
  • a moving object with coordinates OP i (X i , Y i , Z i ) may traverse one or more overlapping cameras.
  • Each dot shown in FIG. 28 represents a position of the object at a specific instant in time.
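  • Because the beacon readings and the camera frames are generally not captured at identical instants, the position at a given frame time may need to be interpolated between readings (the description of FIG. 29 below refers to measured or interpolated points); the following is a minimal linear-interpolation sketch, with the tuple layout chosen for illustration.

```python
def interpolate_position(track, t_frame: float):
    """Linearly interpolate the tracked position OP_i at a frame capture time T_i.

    `track` is a time-ordered list of (t, x, y, z) locator readings; the position
    at a frame time is estimated between the two readings that bracket it.
    """
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(track, track[1:]):
        if t0 <= t_frame <= t1:
            a = (t_frame - t0) / (t1 - t0) if t1 > t0 else 0.0
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0), z0 + a * (z1 - z0))
    return None  # frame time falls outside the recorded track
```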
  • FIG. 29 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be accurately represented by one or more pixels within the video generated by the camera, and depicts a pixel width and a pixel height of an overlapping region greater than or equal to the pixel width and pixel height of a desired area of interest centered around the one or more pixels representing the object being tracked, according to an embodiment of the present invention.
  • time T moves in the forward direction
  • a moving object may traverse one or more overlapping cameras.
  • Each dot in FIG. 29 represents a position of the object captured by at least one camera at a certain point in time.
  • As shown in FIG. 29 , each tracked point OP i in the physical space, measured or interpolated, can be mapped to a pixel position CP i of that point from an image frame I Ca(decoded) from video V Ca captured by a camera C a .
  • An area of interest image I D that is substantially centered on pixel position C Pi can then be selected (i.e., around the location of the object of interest).
  • the constraint on the pixel width D W and pixel height D H of area of interest I D is that D W ≤ O W and D H ≤ O H , where O W is the amount of overlap in camera video pixels in the horizontal direction and O H is the amount of overlap in camera video pixels in the vertical direction.
  • Each area of interest I D for each tracked point in the plurality of tracked points can be computed to exist in its entirety within the video generated by at least one camera. If, for example, the area of interest is located within the overlap region, then the area of interest is within both an image captured by a first camera in the plurality of cameras and another image captured by a second camera in the plurality of cameras.
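  • A small sketch of this containment test follows: it computes the bounds of a D W by D H area of interest centered on pixel position C Pi and reports whether it fits wholly within a W by H camera image; the names are illustrative.

```python
def aoi_bounds(cx: int, cy: int, D_W: int, D_H: int, W: int, H: int):
    """Return (x0, y0, x1, y1) of a D_W x D_H area of interest centered on pixel (cx, cy).

    Returns None when the AOI is not wholly contained in this camera's W x H image;
    the caller would then pick the adjacent camera whose overlap region contains it.
    """
    x0, y0 = cx - D_W // 2, cy - D_H // 2
    x1, y1 = x0 + D_W, y0 + D_H
    if x0 < 0 or y0 < 0 or x1 > W or y1 > H:
        return None
    return x0, y0, x1, y1
```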
  • FIG. 30 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representation of the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of views, according to an embodiment of the present invention.
  • a plurality of area of interest images I D can be extracted from videos captured by one or more cameras.
  • the images I D corresponding to the area of interest that are extracted from the videos V Ca1 , V Ca2 , etc. can be arranged in sequence or encoded to form a video (e.g., a time-lapse video), as described in the following paragraph.
  • FIG. 31 depicts a plurality of images comprising the desired area of interest centered around the pixel representation of the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and then converted into a video codestream in a known format and encoding, according to an embodiment of the present invention.
  • a plurality of area of interest images each image being similar to I D , can be encoded into a video codestream V ID of a known format and encoding.
  • FIG. 32 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representing the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and converted into a video codestream in a known format and encoding and transmitted over a LAN, WAN or Internet to a computing device for decoding, viewing or image processing and analysis or display on a display device, according to an embodiment of the present invention.
  • As shown in FIG. 32 , a first video codestream V ID of a known format and encoding can be transmitted to a first consumer application such as an APPLE iPad® application or an ANDROID application or a desktop computer software application or a client software program, where the video V ID is generated to follow a first object O m and the first object O m is selected for being followed by the first consumer application.
  • a second video codestream V ID of a known format and encoding can be transmitted to a second consumer application such as an APPLE iPad® application or ANDROID application or a desktop computer software application or a client software program, where the video V ID is generated to follow a second object O m and the second object O m is selected for being followed by the second consumer application.
  • the second object can be the same or different from the first object.
  • FIG. 33 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked, according to an embodiment of the present invention.
  • a client requests a live video that follows an object O m where the requested pixel dimensions of the video are D W pixels wide by D H pixels high.
  • a user selects an object O m to be followed and viewed for display or processing D, a user being a human or a computer software program or other interface element or device.
  • client software sends the identity of O m and the specification of D, for example width D W and height D H (e.g., display window dimensions).
  • the user may send the dimensions of a tablet screen, a smartphone screen, a window on a screen of a personal computer, a television (TV) screen, or a window or portion of the TV screen, etc.
  • the client sends this request over LAN, WAN or the Internet and may use the HTTP or secure HTTP (HTTPS) or any other computer network protocol such as TCP, ATM, etc.
  • server software receives the request.
  • the server may choose to exit and report an exception with an error message if an error occurs.
  • the server validates the request from the client software.
  • the validation includes determining that object O m is indeed within the field of view of at least one camera within the matrix of cameras and that the area of interest D W and D H dimensions (for example, corresponding to the display dimensions) follow the constraints on the overlapping pixel width O W and overlapping pixel height O H for each video, for each camera. That is, D W ≤ O W and D H ≤ O H , where O W is the amount of overlap in camera video pixels in the horizontal direction and O H is the amount of overlap in camera video pixels in the vertical direction.
  • the cameras are static and do not pan, tilt or zoom.
  • the cameras are color-matched and synchronized.
  • the cameras are arranged in a matrix such that there is sufficient overlap of the field of view of the cameras so that the overlap in pixels of the captured images or videos is at least O W pixels wide and O H pixels tall.
  • the server finds data, at S 1180 , from the locator beacon L b for object O m .
  • the server searches for and finds the most recent data from locator L b .
  • the server finds or computes physical object location OP i at S 1200 .
  • the server uses data from the locator to compute the camera Ca within the matrix of cameras, that wholly contains an area of interest that is D W pixels wide by D H pixels tall and centered substantially at pixel coordinate or position C Pi .
  • CP i is the pixel coordinate in an image, at a specific instant in time, from the video captured by the camera Ca corresponding to physical object location OP i of object O m captured by camera Ca.
  • the server transforms the physical position OP i (X i , Y i , Z i ) obtained from the locator beacon data L b into pixel position C Pi (X i ,Y i ) within an image for a specific camera such that an area of an image of size D W × D H which is substantially centered around position CP i is wholly contained within the camera image.
  • an image at the specific instant in time is decoded from the selected video from the selected camera.
  • the video is live.
  • the area of interest that is D W pixels wide by D H pixels tall is extracted from the decoded image.
  • the object O m is substantially at the center of the decoded image. If further image processing is required to correct the image for distortions, it is performed at S 1280 .
  • the area of interest image that is D W pixels wide by D H pixels tall is then encoded into the video codestream V ID .
  • the video codestream is for example stored in a computer memory buffer, at S 1300 .
  • when the computer memory buffer has sufficient data for transmission, the computer transmits the data to the client that made the request.
  • the method repeats the search of the storage for the physical object location OP i at S 1200 to get the next physical or interpolated location OP i from locator beacon data L b that was computed by C o and stored on storage device S, and repeats the next steps at S 1220 , S 1240 , S 1260 , S 1280 and S 1300 , as needed, until the client cancels the request, ends the processing or closes the connection. In that case, the method ends at S 1340 .
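  • The live loop described above can be pictured with the condensed sketch below; it is not the patent's code. Cameras are modeled as dictionaries, frames as NumPy arrays, and the world-to-pixel mapping is reduced to a per-camera pixel offset purely for illustration.

        import numpy as np

        def crop_centered(frame, cp, dw, dh):
            # extract the DW x DH area of interest centered at CPi (cf. S 1260)
            cx, cy = int(cp[0]), int(cp[1])
            return frame[cy - dh // 2:cy + dh // 2, cx - dw // 2:cx + dw // 2]

        def pick_camera(cameras, op, dw, dh):
            # choose a camera whose image wholly contains the window at CPi (cf. S 1220)
            for cam in cameras:
                cx, cy = op[0] - cam["origin"][0], op[1] - cam["origin"][1]
                h, w = cam["frame"].shape[:2]
                if dw / 2 <= cx <= w - dw / 2 and dh / 2 <= cy <= h - dh / 2:
                    return cam, (cx, cy)
            return None, None

        # two 3840 x 2160 cameras with OW = 640 pixels of horizontal overlap,
        # so a 640 x 360 area of interest always fits in one camera (DW <= OW)
        cameras = [{"origin": (0, 0), "frame": np.zeros((2160, 3840, 3), np.uint8)},
                   {"origin": (3200, 0), "frame": np.zeros((2160, 3840, 3), np.uint8)}]

        for op in [(1000, 500), (3600, 500), (5200, 900)]:  # simulated beacon positions
            cam, cp = pick_camera(cameras, op, 640, 360)
            if cam is not None:
                aoi = crop_centered(cam["frame"], cp, 640, 360)
                print("AOI shape:", aoi.shape)  # would be corrected, encoded into V ID
                                                # and buffered for transmission here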
  • FIG. 34 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked from an existing archive for a requested time window, according to an embodiment of the present invention.
  • the method starts when a client requests an existing archived video that follows an object O m , where the requested pixel dimensions of the video are D W pixels wide by D H pixels tall.
  • a user selects an object O m to be followed and viewed for display or processing D.
  • the user can be a human or a computer software program or other interface element.
  • the user also selects a time window T in to T out between which the user wishes to follow the object, where T in ⁇ T out .
  • client software sends the identity of O m and the specification of D, for example D W and D H and the time window T in to T out .
  • the client may send this request over a local area network (LAN), a wide area network (WAN) or the Internet and may use HTTP, secure HTTP (HTTPS) or any other computer network protocol such as TCP, ATM, etc.
  • the server software receives the request. The server may choose to exit if an error occurs and report an exception with an error message.
  • the server validates the request from the client software.
  • Validation includes verifying that there is content for the specific time window, and/or determining whether O m is indeed within the field of view of at least one camera within the plurality of cameras, and determining whether the area of interest dimensions D W and D H are contained within the overlapping pixel width O W and the overlapping pixel height O H for each video from each camera.
  • the cameras are static and do not pan, tilt or zoom.
  • the cameras are color-matched and/or synchronized. The cameras are arranged in a matrix such that there is sufficient overlap between the field of view of the cameras so that the overlap in pixels of the resulting video is at least O W pixels wide and O H pixels tall.
  • the server finds data from the locator beacon L b for object O m .
  • the server enters a loop for each time instant T i starting from T in to T out in steps of dT where dT is 1/Hz (Hz is the video camera capture frame rate).
  • the server searches storage device S for and finds the data from locator beacon L b at time T i .
  • the server finds or computes the physical location OP i of object O m .
  • the server computes the camera C a within the matrix of cameras that wholly contains an area of interest that is D W pixels wide by D H pixels tall and centered substantially around pixel position C Pi in the image of the video containing the captured object O m , where C Pi corresponds to the physical location OP i of object O m .
  • CP i is the pixel coordinate in an image, at a specific instant in time, from the video captured by the camera C a corresponding to physical object location OP i of object O m captured by camera C a .
  • an image at the specific instant in time is decoded from the selected video from the selected camera.
  • the video is archived on a storage device S from a prior capture.
  • the area of interest that is D W pixels wide by D H pixels tall is extracted from the decoded image.
  • CP i is substantially at the center of this image.
  • object O m is substantially captured at the center of the image. If further image processing is needed to correct the image for distortions, it is performed at S 2300 .
  • the area of interest image that is D W pixels wide by D H pixels tall is then encoded into the video codestream V ID and stored, for example, in a computer memory buffer, at S 2320 .
  • the server transmits the data containing the video codestream V ID in the memory buffer to the client that made the request.
  • the method goes to S 2200 and repeats the procedures S 2220 , S 2240 and S 2260 , as needed, to get the next physical or interpolated location OP i from locator beacon data L b at the next time T i , which is computed by server C o and stored on storage S. If and/or when T i reaches time T out or the client closes the connection or cancels the request, the method ends at S 2360 .
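  • A minimal sketch of stepping through the requested window at the frame period dT = 1/Hz is shown below; the storage lookup is stubbed with a dictionary and a nearest-sample fallback stands in for interpolation, so every name here is an assumption rather than the patent's implementation.

        def archived_positions(t_in, t_out, hz, samples):
            # yield (Ti, OPi) for every frame time in [T_in, T_out]
            dt = 1.0 / hz                        # dT = 1/Hz, the capture frame period
            t = t_in
            while t <= t_out:
                if t in samples:
                    yield t, samples[t]
                else:
                    # no stored sample at this instant: fall back to the nearest one
                    nearest = min(samples, key=lambda s: abs(s - t))
                    yield t, samples[nearest]
                t += dt

        samples = {0.0: (100, 50), 0.5: (130, 52), 1.0: (160, 55)}   # stored OPi values
        for t, pos in archived_positions(0.0, 1.0, 4, samples):
            print(f"t={t:.2f}s OPi={pos}")       # each OPi would drive camera selection,
                                                 # cropping and encoding as in the live case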
  • FIGS. 36A-36D depict an example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to an embodiment of the present invention.
  • a plurality of cameras can be distributed around the race track (e.g., NASCAR race track) 12 shown in FIG. 36A .
  • the cameras are static (do not tilt, do not pan and do not zoom) and thus each camera has one orientation and captures a specific field of view.
  • the plurality of cameras can be arranged so as to cover the entire track 12 .
  • adjacent cameras 20 A and 20 B have overlapping fields of view (FOV) 21 A and 21 B so as to capture images or videos containing objects of interest in overlapping region 21 C, for example.
  • Each camera 20 A and 20 B will capture a series of sequential images or frames within its specific field of view 21 A and 21 B, respectively.
  • the overlap region 21 C can provide a smooth transition from the sequence of frames captured by the camera 20 A to the sequence of images or frames captured by the camera 20 B during the transition of the race car 14 from the field of view 21 A of camera 20 A to the field of view of camera 20 B in the race track 12 .
  • a user can employ a viewing device such as a tablet 16 or a smartphone 18 , etc.
  • a first user may be following and viewing selected race car 14 A on display device (e.g., the tablet) 16 while a second user may be following and viewing a different selected race car 14 B on display device (e.g., the smartphone) 18 .
  • Each user can select a car of interest as desired by activating or pointing on the car of interest using a pointer device or the like (or a finger touch on a touch sensitive device such as a tablet, for example).
  • the users can view completely distinct sets of images or videos. While a first user with display device 16 is viewing race car 14 A racing in the race track 12 , a second user has just witnessed a collision between race car 14 B and other race cars and a track barrier in the race track 12 .
  • although each user is viewing a different set of images or videos, other users may select the same race car and thus be able to follow the same race car (object) and view the same set of images.
  • a race car may be within the same field of view as another car on the race track (for example, two cars close to each other or passing each other), and thus a user will be able to view, for at least a period of time, images or videos containing both the race car of interest to one user and the race car of interest to another user.
  • although a car race (e.g., NASCAR, FORMULA 1) is used as an example, the same can be applied to a derby or horse race, a boat race, a motorbike race, a bicycle race, a marathon or other track run, a ski race, a skating race, etc.
  • FIG. 37 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention.
  • video tracking of a suspect car by a law enforcement agency can be accomplished by using the method described herein.
  • upon selection by a law enforcement officer of a car of interest or person of interest, the system is able to deliver a video showing and following the car or person of interest on a display device of the law enforcement officer, without the officer switching between a plurality of display devices or manually switching between cameras as is performed by conventional methods and systems.
  • FIG. 37 depicts a geographical map 30 of a city showing a path 31 taken by a car of interest 32 tracked by the law enforcement agency.
  • the law enforcement officer can track the car of interest 32 on his display device (e.g., a tablet, a computer screen or the like) along the path 31 taken by the car of interest 32 .
  • the system and method described herein are able to determine a camera within the plurality of cameras that wholly contains the car of interest (tracked by the law enforcement officer) using the physical location of the car (acquired from a locator beacon such as a GPS device on the car) and to extract the area of interest corresponding to the physical location of the car of interest, as shown in images 34 A-E.
  • the system and the method can deliver images or a video of the car of interest 32 while the car 32 is driving in the city.
  • the system is able to “switch” from images captured by one camera to an adjacent camera seamlessly to provide a continuous video showing the car 32 driving in the streets of the city and thus allowing the law enforcement officer to assess the position of the car 32 on the map 30 . If a camera is not operating properly or there is a “blind area” or a field of view not captured by any of the cameras in the city, the system can switch or revert back to a map mode to display a portion 30 A of the map 30 or the position of the car of interest on the portion 30 B of the map 30 , depending on the location of the car 32 , as shown in images 35 A-B.
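  • The camera-or-map decision just described can be illustrated with the short sketch below; next_view, the tile indexing and the camera records are hypothetical and only demonstrate the fallback logic.

        def next_view(position, cameras, tile_size=100):
            # return ('video', camera_id) if some working camera covers the position,
            # otherwise ('map', tile_index) so the client falls back to the map view
            x, y = position
            for cam in cameras:
                x0, y0, x1, y1 = cam["fov"]
                if cam["online"] and x0 <= x <= x1 and y0 <= y <= y1:
                    return "video", cam["id"]
            return "map", (int(x // tile_size), int(y // tile_size))

        cameras = [{"id": "cam-7", "fov": (0, 0, 500, 500), "online": True}]
        print(next_view((120, 80), cameras))   # covered -> ('video', 'cam-7')
        print(next_view((900, 40), cameras))   # blind area -> ('map', (9, 0))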
  • FIG. 38 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention.
  • FIG. 38 shows a virtual football field 40 for fantasy team gameplay.
  • a plurality of cameras (not shown) are arranged in a plurality of football stadiums (e.g., NFL stadium). Each camera in the plurality of cameras captures images within a specific field of view in a specific football stadium.
  • the fields of view of adjacent cameras in any football stadium can be arranged so as to overlap to provide an overlap area or zone.
  • the various videos from the different cameras in each stadium can be stored in a storage device and, as described above, the position of a player on the football stadium is linked to an image of the football player stored in the video captured by one or more of the plurality of cameras. This is performed for each player and for each game in each of the plurality of football stadiums.
  • a user or gamer is able to select a fantasy team by assembling players from real teams. When a fantasy player in a gamer's fantasy team scores a “down” (e.g., a touchdown, first down, etc.), as shown in images 41 and 42 , a video snippet can be played to the gamer showing the fantasy player scoring a touchdown or at a first down.
  • the video snippet is extracted from the images in the videos stored in the storage device in accordance with the position of the fantasy player on the fantasy field.
  • the video snippet is a clip of video of finite duration.
  • This video snippet can be extracted using the methods described in the above paragraphs because the plurality of cameras capturing the video have a known overlap of each of their FOV and the cameras are arranged to ensure an effective matrix configuration of the resulting content captured by the cameras. Therefore, as it can be appreciated from the above, various football games (NFL, or school games, etc.) can be made accessible at all times within a single application.
  • Various videos can be delivered on-demand to fans with the ability for each fan to select a specific player in order to view a video of the player in action.
  • the term “virtual camera” is used herein to indicate that the camera is not real and the user is able to control the virtual camera as if it is a real moving camera (a camera that is able to zoom in/out, tilt and/or pan) to follow an object of interest to the user.
  • This also provides an alternative streaming solution for a plurality (millions to hundreds of millions) of smart devices (tablets and smart phones) in the market. In addition, this allows casting alternative video streaming from the smart devices to televisions independent of traditional cable television subscriptions.
  • This system also enables "speedwatch" functionality that allows fans to watch only their subjects of interest, on demand.
  • FIGS. 39A-39D depict an example of a display device (e.g., a smart device) for viewing a custom on-demand video of an object of interest, according to an embodiment of the present invention.
  • credential information may be included with the merchandize to enable the fan to access a “StreamPlay” service to allow the fan to watch or view a custom video of a player of the licensed team.
  • in order to access the "StreamPlay" service, the fan first downloads or opens a software application (APP) on a smart device, for example, as shown in FIG. 39A .
  • the fan validates the team that is associated with the merchandize that the fan purchased and loads the APP on the smart device, as shown in FIG. 39B .
  • team specific highlights can be played to the fan to attract or retain the attention of the fan while the APP loads.
  • Various games played by the team associated with the purchased merchandize and validated by the fan are listed to the fan on the smart device.
  • the fan selects a specific game to watch, as shown in FIG. 39C .
  • the user or fan is also provided with a graphical user interface (GUI) and functionality to control or customize his viewing experience, as shown in FIG. 39D .
  • the game begins in portrait mode while providing the option to switch to landscape mode if desired by pressing on a button in the GUI.
  • the switching of the viewing mode from portrait to landscape can simply be accomplished by turning the smart device by 90 deg., as is generally performed in any smart device.
  • FIGS. 40A-40D depict an example of interaction of a user on graphical user interface (GUI) providing functionality to access a custom video, according to an embodiment of the present invention.
  • a plurality of fixed-point cameras are mounted in various stadiums to capture videos from their specific fields of view.
  • the captured videos are transmitted to a video server.
  • the videos can be stitched together or provided with some overlap to allow a smooth transition from a view captured by one camera to a view captured by an adjacent camera.
  • digital audio of play-by-play of the game (home team and away team) is also transmitted to an audio server, which can be the same as or different from the video server.
  • Game metadata is also generated during the live transmission of the video and audio data to the video server and audio server.
  • the game metadata is detailed information about the game after completion of the game.
  • individual player statistics and game statistics are included in the game metadata.
  • additional information can also be added by subject matter experts such as commentators.
  • trivia information and relationships of a specific aspect of the specific portion of a specific game in relation to similar situations in past games may also be included in the game metadata.
  • play-by-play information is added by the game administrators or game service providers (for example, NFL). This information is added in the form of game metadata.
  • Such game metadata is presented during the recorded broadcast of the games and is made available to subscribers over the Internet.
  • the videos and the audios along with the metadata are processed and exported or transmitted to individual user or fan devices (e.g., smart devices) 50 to display custom videos based on input selections from the user or fan, in accordance with the method described in the above paragraphs, as shown in FIG. 40A .
  • the user or fan can, for example, make appropriate selections and take appropriate actions using a graphical user interface (GUI) 60 incorporated within the received custom video 62 .
  • GUI 60 can include semi-transparent navigation controls 61 with functionality that minimizes viewing distraction and reduces clutter, as shown in FIG. 40A .
  • the functionality provided through semi-transparent controls 61 of the GUI 60 allows a user or fan to control various inputs including stop, playback and rewind to first downs, big plays, scoring plays, etc., as shown in FIG. 40B .
  • the control in the GUI 60 also allows a user to quickly navigate to home team or visitor team cameras, as well as listen to home team, visitor team or media (e.g., ESPN) radio broadcasts.
  • the semi-transparent controls 61 of the GUI 60 are displayed when a user touches the display of the smart device 50 and fade out if no input or action is received by the GUI 60 after a certain period of time (a few seconds to a few minutes) has elapsed. The period of time can be set by the user or fan as desired.
  • the system can select the best viewing angle for the user depending on the selection of the player made by the user. However, the user is also able to control the viewing angle by selecting an appropriate camera icon 63 , as shown in FIG. 40C .
  • the GUI 60 of the system also provides the functionality to send the game or the custom video being viewed by the user on the screen of the smart device 50 to a remote television by pressing on a button “Activate Mobile2TV” 64 , as shown in FIG. 40D .
  • FIG. 41 depicts another configuration of the GUI that is displayed on a display device (e.g., a smart device) 50 of a user, according to an embodiment of the present invention.
  • the video streamed on the smart device 50 is sent to a television 70 , as shown in FIG. 41 .
  • the GUI 60 on the smart device changes its configuration to instead display a football field layout with a plurality of icons 65 representing the positions of the plurality of cameras around the football stadium. Therefore, in an embodiment, the custom video is played on the television but the control functionality provided by the GUI stays on the smart device.
  • the user is able, for example, to select a player to display on the screen.
  • the user is also able to select the camera by selecting (clicking or touching) the corresponding icon 66 on GUI 60 and thus be able to watch the video from that camera point of view.
  • the user has enhanced camera controls and has complete control over virtual movement (tilt, zoom, pan, etc.) of a virtual camera.
  • the user has greater control over forward and backward play via speedwatch functionality.
  • the user or fan plays the role of “a movie director” that has control over which view or camera to select.
  • the GUI 60 can further provide to the user or fan the functionality of taking screenshots or storing video clips and posting the video clips and/or the screen shots to social media.
  • the custom video is still played on the television screen 70 .
  • advertisement images, scoreboards and other graphics or writings may be inserted by the system (computer server) into the custom video automatically or as controlled by the computer server to provide additional revenue generation.
  • the advertisement or other graphics may be tailored depending on the player or team selected by the user.
  • the insertion of the advertisement is not controlled by the user but is tailored by the computer server according to the team selected by the user. For example, if the home team is the Washington Redskins®, an advertisement showing a particular restaurant in Washington, D.C. can be displayed in addition to the custom video of the football game selected by the user.
  • dynamic video streams (DVS) are generated from multiple geo-referenced fixed video cameras.
  • the video streams are custom generated according to inputs of the user such as a selection of a specific object or target of interest to be watched or followed.
  • the DVS dynamically tracks the targets of interest from camera to camera based on target locations.
  • the DVS provides at least one video stream for each object of interest using video(s) or images captured by one or more of the plurality of cameras. Multiple targets of interest can be tracked simultaneously.
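  • The per-target behavior can be sketched as follows; the dynamic_streams function and the camera records are hypothetical and serve only to show that each requested target gets its own independently computed crop from the same set of fixed cameras.

        def dynamic_streams(targets, cameras, dw, dh):
            # targets: {target_id: (x, y)}; returns {target_id: (camera_id, crop_box)}
            streams = {}
            for target_id, (x, y) in targets.items():
                for cam in cameras:
                    x0, y0, x1, y1 = cam["fov"]
                    if x0 + dw / 2 <= x <= x1 - dw / 2 and y0 + dh / 2 <= y <= y1 - dh / 2:
                        cx, cy = x - x0, y - y0   # crop box in this camera's pixels
                        streams[target_id] = (cam["id"], (cx - dw / 2, cy - dh / 2,
                                                          cx + dw / 2, cy + dh / 2))
                        break
            return streams

        cameras = [{"id": "A", "fov": (0, 0, 3840, 2160)},
                   {"id": "B", "fov": (3200, 0, 7040, 2160)}]
        print(dynamic_streams({"car-14A": (1000, 600), "car-14B": (4200, 700)},
                              cameras, 640, 360))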
  • the method or methods described above can be implemented as a series of instructions which can be executed by a computer, the computer having one or more processors or computer processor units (CPUs).
  • the term “computer” is used herein to encompass any type of computing system or device including a personal computer (e.g., a desktop computer, a laptop computer, a tablet, a smartphone, or any other handheld computing device), or a mainframe computer (e.g., an IBM mainframe), or a supercomputer (e.g., a CRAY computer), or a plurality of networked computers in a distributed computing environment.
  • the method(s) may be implemented as a software program application which can be stored in a computer readable medium such as hard disks, CDROMs, optical disks, DVDs, magnetic optical disks, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash cards (e.g., a USB flash card), PCMCIA memory cards, smart cards, or other media.
  • a portion or the whole software program product can be downloaded from a remote computer or server via a network such as the internet, an ATM network, a wide area network (WAN) or a local area network.
  • the method can be implemented as hardware in which for example an application specific integrated circuit (ASIC) can be designed to implement the method.
  • databases can be used which may be, include, or interface to, for example, an Oracle™ relational database sold commercially by Oracle Corporation.
  • Other databases such as Informix™, DB2 (Database 2) or other data storage, including file-based, or query formats, platforms, or resources such as OLAP (On Line Analytical Processing), SQL (Standard Query Language), a SAN (storage area network), Microsoft Access™ or others may also be used, incorporated, or accessed.
  • the database may comprise one or more such databases that reside in one or more physical devices and in one or more physical locations.
  • the database may store a plurality of types of data and/or files (e.g., images or video(s)) and associated data or file descriptions, administrative information, or any other data (e.g., metadata of the images or video(s)).
  • FIG. 35 depicts an example of a computer system (e.g., a computer server) for implementing one or more of the methods and systems of delivering or viewing a video of an area of interest centered on a physical location of an object being tracked, according to an embodiment of the present invention.
  • FIG. 35 is a schematic diagram representing a computer system 100 for implementing the methods, according to an embodiment of the present invention.
  • computer system 100 comprises a computer processor unit (e.g., one or more computer processor units) 102 and a memory 104 in communication with the processor 102 .
  • the computer system 100 may further include an input device 106 for inputting data (such as keyboard, a mouse, a joystick, a game controller, a touchscreen, etc.) and an output device 108 such as a display device for displaying results of the computation (e.g., computer monitor, tablet, smartphone, head mounted device HMD, etc.).
  • the computer system 100 may further include or be in communication with a storage device 110 for storing data such as, but not limited to, a hard-drive, a network attached storage (NAS) device, a storage area network (SAN), etc.

Abstract

A system and method of selecting an area of interest from a plurality of images captured by a plurality of cameras are described. The method includes receiving a request for an area of interest from a plurality of images, the area of interest containing an object of interest selected by a user, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view; determining a camera within the plurality of cameras that wholly contains the area of interest; transforming the physical location of the object of interest into a pixel position such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest; and extracting the area of interest.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 15/142,720, filed on Apr. 29, 2016, which claims the benefit of priority of U.S. Provisional Patent Application No. 62/155,005 filed on Apr. 30, 2015, the entire content of the foregoing applications is incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • Field of the Invention
  • The present invention pertains to image data management and in particular to a method and system of selecting a view or an area of interest (AOI) from a plurality of images captured by a plurality of cameras.
  • Discussion of Related Art
  • A relatively large image generally contains a plurality of pixels, e.g., millions of pixels. Each pixel has one, two or more bands. Each band has a certain color depth or bit depth. For example, an RGB color-based image has 3 bands, the red band (R), the green band (G) and the blue band (B). Each of the R, G and B bands can have a depth of 8 bits or more. Hence, in this example, each pixel can have a total bit depth of 24 bits or more.
  • A video or film camera captures a plurality of images or frames at a certain rate, i.e., a number of times per second (frames per second). Each captured image or frame, in a digital form, has a certain size defined by its pixel height and its pixel width. Pixel height corresponds to a number of rows of pixels in the image and pixel width corresponds to the number of columns in the image. Each image pixel has one or more bands. Each band can represent a quantized signal from a range of frequencies from the electromagnetic spectrum. For example, in the visible spectrum, the bands can represent different colors such as red, blue, and green (RGB).
  • An image sensor or camera can be used to capture a series of images or frames. Each image or frame in the series of images or frames can have thousands of pixels. Each image may have a relatively high resolution, such as 4K, 6K, 8K or more. As understood in the art, a 4K resolution refers to content or image(s) having horizontal resolution on the order of 4,000 pixels. Several 4K resolutions exist in the fields of digital television and digital cinematography. In the movie projection industry, DIGITAL CINEMA INITIATIVES (DCI) is the dominant 4K standard. A 4K resolution, as defined by DCI, is 4096 pixels×2160 pixels (approximately a 1.9:1 aspect ratio). An image of 4096 pixels by 2160 pixels has about 9 Megapixels (MP). As specified in standards for Ultra High Definition television, 4K resolution is also defined as 3840 pixels×2160 pixels (approximately a 1.78:1 aspect ratio). The following TABLE 1 provides examples of known standards for relatively high resolution images captured by industry standard camera sensors.
  • TABLE 1
    Format                                      Resolution (W pixels wide × H pixels tall)   Aspect Ratio
    Ultra high definition television (UHDTV)    3840 × 2160                                  1.78:1
    Ultra wide television                       5120 × 2160                                  2.33:1
    WHXGA                                       5120 × 3200                                  1.6:1
    DCI 4K (native)                             4096 × 2160                                  1.90:1
    DCI 4K (Cinemascope)                        4096 × 1716                                  2.39:1
    DCI 4K (flat cropped)                       3996 × 2160                                  1.85:1
    8K-UHD                                      7680 × 3200                                  2.4:1
  • The images may be captured in sequence, for example at a reasonably constant frequency (e.g., 24 images or frames per second (fps), 48 fps, 60 fps, etc.). Each image (i.e., each still image or frame) in the sequence or series of images may have one or more distinct bands and may cover any part of the electromagnetic spectrum that can be captured by the image sensor or camera, including, the visible (VIS) spectrum, the infrared (IR) spectrum or the ultraviolet (UV) spectrum. The image sensor or camera may be a single sensor or a combination or a matrix of multiple smaller sensors or cameras that can be arranged to generate a larger image. Each smaller sensor or camera can be configured to capture a plurality of smaller images. The smaller images captured by the smaller sensors or cameras can then be combined (or stitched) to form a plurality of larger images.
  • In the media and entertainment industry, on network television, on cable television, on broadcast television, and on digitally distributed video content, some of the highest image pixel resolutions are known as 4K and 8K. For film, 4K is an image frame that is 4096 pixels wide by 2160 pixels tall. For ultra-high definition TV, 4K is an image frame that is 3840 pixels wide by 2160 pixels tall. In addition, 8K image frame sizes are also gaining ground. However, the most popular distribution formats are still 720P and 1080P which have image frame sizes of 1280 pixels wide by 720 pixels tall and 1920 pixels wide by 1080 pixels tall, respectively. Recently, 4K media playback devices have reached the mainstream market. These devices can also interpolate and display 1080P content.
  • Those skilled in the media and entertainment industry are aware of various existing and published standards in the industry. For example, in sport broadcasts, such as NASCAR®, NFL®, NHL®, NBA®, and FIFA®, with the advent of live and archive media streaming, the "second-screen" experience has gained popularity. For example, some sports venues have employed custom camera configurations that capture a video of a relatively high resolution that is larger than 1920 pixels wide and 1080 pixels tall. A video of image size 1920×1080 captured at 30 fps or 60 fps is also referred to as 1080P. For example, 3D-4U based in Seattle, Wash., USA, creates a very wide strip of video that is approximately 18000 pixels wide by 720 pixels tall or 1080 pixels tall. Using custom software and computers, they extract a smaller area of interest of a size substantially smaller in pixel width and height than the captured strip content. This smaller area is optimized for display on a virtual reality environment or an APPLE iPad®.
  • However, none of the known image technologies or systems are able to provide or deliver a plurality of images or a video where an object in physical space captured in one or more images of the plurality of images can be mapped to pixels of the one or more images.
  • BRIEF SUMMARY OF THE INVENTION
  • An aspect of the present invention is to provide a method of selecting an area of interest from a plurality of images captured by a plurality of cameras, the method being implemented by a computer system that includes one or more processor units configured to execute computer program modules. The method includes receiving, by the one or more processor units, a request for an area of interest from a plurality of images, the area of interest containing an object of interest selected by a user, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions, the request containing an identity of the object of interest; determining, by the one or more processor units, a camera within the plurality of cameras that wholly contains the area of interest using the physical location of the object of interest; transforming, by the one or more processor units, the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest and is wholly contained within the image captured by the specific camera; and extracting, by the one or more processor units, the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
  • Another aspect of the present invention is to provide a method of extracting an area of interest containing an object of interest from a plurality of images captured using a plurality of cameras, the method being implemented by a computer system that includes one or more processor units configured to execute computer program modules. The method includes determining, by the one or more processor units, using the data of the locator or the physical location of the object of interest, a camera within the plurality of cameras that wholly contains the area of interest, the area of interest containing the object of interest, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions; transforming, by the one or more processor units, the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and extracting, by the one or more processor units, the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
  • A further aspect of the present invention is to provide a computer system for selecting an area of interest from a plurality of images captured by a plurality of cameras. The system includes one or more processor units configured to: receive a request for an area of interest from a plurality of images, the area of interest containing an object of interest selected by a user, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions, the request containing an identity of the object of interest; determine a camera within the plurality of cameras that wholly contains the area of interest using the physical location of the object of interest; transform the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest and is wholly contained within the image captured by the specific camera; and extract the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
  • Various examples of implementations of the above methods and systems are provided in the following paragraphs including, but not limited to, implementation in a sports environment, gaming environment, law enforcement environment, etc.
  • Although the various steps of the method are described in the above paragraphs as occurring in a certain order, the present application is not bound by the order in which the various steps occur. In fact, in alternative embodiments, the various steps can be executed in an order different from the order described above or otherwise herein.
  • These and other objects, features, and characteristics of the present invention, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. In an embodiment of the invention, the structural components illustrated herein are drawn to scale. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the accompanying drawings:
  • FIG. 1 depicts schematically a moving object being tracked by a tracking device attached to the object and a transmitter attached to the object transmitting tracking information, according to an embodiment of the present invention;
  • FIG. 2 depicts schematically the transmitter attached to the object transmitting the tracking information as a data stream to a receiver device, according to an embodiment of the present invention;
  • FIG. 3 depicts schematically the receiver device connected to a computer system that receives the data stream and stores it on a storage device, according to an embodiment of the present invention;
  • FIG. 4 depicts schematically a video camera, according to an embodiment of the present invention;
  • FIG. 5 depicts schematically the video camera collecting an image at an instant in time, according to an embodiment of the present invention;
  • FIG. 6 depicts schematically the video camera collecting an image at an instant in time where the camera is static (fixed in position) and collecting the image at an incident angle, and the image having a perspective view of object(s) being captured in the image, according to an embodiment of the present invention;
  • FIG. 7 depicts schematically the video camera collecting an image at an instant in time where the camera is static and collecting the image looking down on the object(s), and the image is a top-view of the field of view, according to an embodiment of the present invention;
  • FIG. 8 depicts schematically the video camera having sensors that provide a position of the camera, for example, GPS sensor, IMU sensor and/or any other type of sensors such as visual sensor, according to an embodiment of the present invention;
  • FIG. 9 depicts schematically an object being tracked and captured in an image at a location at an instant in time, in the field of view of the camera with a known location and orientation, the pixel position of the object in the image at that instant in time can be computed from the physical location of the object, according to an embodiment of the present invention;
  • FIG. 10 depicts a conversion from the physical location of the object, within the field of view of the camera, being tracked or captured, to the pixel location of the object within an image captured by the camera at substantially the same instant in time, according to an embodiment of the present invention;
  • FIG. 11 depicts schematically a video camera with various sensors that determine the position of the camera including, a GPS sensor, an IMU sensor or other types of sensors, according to an embodiment of the present invention;
  • FIG. 12 depicts schematically a video camera arranged in a matrix of a plurality of cameras such that the field of view of one camera substantially overlaps the field of view of an adjacent camera by a known amount, according to an embodiment of the present invention;
  • FIG. 13 depicts an image ICa captured by the camera, the image having a pixel width W and a pixel height H, according to an embodiment of the present invention;
  • FIG. 14 depicts an image ID selected for display on a display device D, where the pixel width of image ID is DW pixels and the pixel height of image ID is DH pixels, where ID is a portion of image ICa captured by the camera such that pixel width W>DW and pixel height H>DH, according to an embodiment of the present invention;
  • FIG. 15 depicts schematically an example of image ID and image ICa such that W/2>DW and H/2>DH, that is the pixel width and pixel height of the displayed area of interest from the camera image is at least half the pixel width and pixel height of the image captured by the camera, according to an embodiment of the present invention;
  • FIG. 16 depicts schematically an example of two adjacent cameras, each camera generating an image of pixel width W and pixel height H, with a horizontal overlap of OW pixels where OW≥DW, according to an embodiment of the present invention;
  • FIG. 17 depicts schematically an example of two adjacent cameras, each camera generating an image of pixel width W and pixel height H, with a horizontal overlap of OW pixels where OW≥DW, and four examples of an area of interest D1, D2, D3 and D4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention;
  • FIG. 18 depicts schematically an example of two adjacent cameras, each generating an image of pixel width W and pixel height H, with a vertical overlap of OH pixels where OH≥DH, according to an embodiment of the present invention;
  • FIG. 19 depicts an example of two adjacent cameras, each generating an image of pixel width W and pixel height H, with a vertical overlap of OH pixels where OH≥DH, and depicts four examples of areas of interest D1, D2, D3 and D4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention;
  • FIG. 20 depicts schematically images generated by a matrix of a plurality of cameras where the cameras are configured such that the pixel data in one image generated by one camera is substantially identical to the pixel data of one image generated by an adjacent camera due to overlapping fields of view of one camera with the adjacent camera, according to an embodiment of the present invention;
  • FIG. 21 depicts schematically a pixel in an aggregated image generated from the matrix of cameras mathematically maps to a pixel within one of the constituent images from one or more of the cameras, according to an embodiment of the present invention;
  • FIG. 22 depicts schematically one image captured at one instant in time by a camera, resulting in the capture of a plurality of temporally sequential images captured over time at a known frame rate, according to an embodiment of the present invention;
  • FIG. 23 depicts a conversion of a plurality of temporally sequential images ICa captured by the camera into a video stream VCa in a known format and encoding, according to an embodiment of the present invention;
  • FIG. 24 depicts schematically a video stream VCa in a known format and encoding being decoded into a plurality of temporally sequential images ICa(decoded) that are substantially identical to the images originally captured by the camera and converted into the video stream VCa, according to an embodiment of the present invention;
  • FIG. 25 depicts schematically a desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images ICa(decoded) captured over time T, according to an embodiment of the present invention;
  • FIG. 26 depicts schematically a plurality of images representing the desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images ICa(decoded) captured over time T being converted into a video codestream VID of a known format and encoding, according to an embodiment of the present invention;
  • FIG. 27 depicts schematically a plurality of cameras arranged in a matrix such that the field of view of one camera substantially overlaps the field of view of an adjacent camera, and each camera generates a video containing some pixels in the video from one camera that are substantially identical to some pixels in the video from an adjacent camera because of overlapping fields of view, according to an embodiment of the present invention;
  • FIG. 28 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be represented by one or more pixels within the video generated by the at least one camera, according to an embodiment of the present invention;
  • FIG. 29 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be accurately represented by one or more pixels within the video generated by the camera, and depicts a pixel width and a pixel height of an overlapping region greater than or equal to the pixel width and pixel height of a desired area of interest centered around the one or more pixels representing the object being tracked, according to an embodiment of the present invention;
  • FIG. 30 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representation of the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view, according to an embodiment of the present invention;
  • FIG. 31 depicts that a plurality of images comprising of the desired area of interest centered around the pixel representation of the object being tracked are extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and then converted into a video codestream in a known format and encoding, according to an embodiment of the present invention;
  • FIG. 32 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representing the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and converted into a video codestream in a known format and encoding and transmitted over a LAN, WAN or Internet to a computing device for decoding, viewing or image processing and analysis or display on a display device, according to an embodiment of the present invention;
  • FIG. 33 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked live, according to an embodiment of the present invention;
  • FIG. 34 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked from an existing archive for a requested time window, according to an embodiment of the present invention;
  • FIG. 35 depicts an example of a computer system for implementing one or more of the methods and systems of delivering or viewing a video of an area of interest centered on a physical location of an object being tracked, according to an embodiment of the present invention;
  • FIGS. 36A-36D depict an example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to an embodiment of the present invention;
  • FIG. 37 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention;
  • FIG. 38 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention;
  • FIGS. 39A-39D depict an example of a display device (e.g., a smart device) for viewing a custom on-demand video of an object of interest, according to an embodiment of the present invention;
  • FIGS. 40A-40D depict an example of interaction of a user on graphical user interface (GUI) providing functionality to access a custom video, according to an embodiment of the present invention; and
  • FIG. 41 depicts another configuration of the GUI that is displayed on a display device (e.g., a smart device) of a user, according to an embodiment of the present invention.
  • DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
  • FIG. 1 depicts schematically a moving object being tracked by a tracking device attached to the object and a transmitter attached to the object transmitting tracking information, according to an embodiment of the present invention. As shown in FIG. 1, an object Om static or in movement (in motion) can be tracked using a tracking device or a locator beacon Lb. In an embodiment, a locator beacon Lb is an electronic device that uses one or more sources of electromagnetic frequency. In another embodiment, locator beacon Lb can be a detectable pattern or visual reference. An example of a locator beacon Lb can be a Global Positioning System (GPS) unit, an Inertial Measurement Unit (IMU), a BLUETOOTH device, a laser, an optical sensor, an image pattern or visual reference (a discernable color), a marking, or any combination thereof. The locator beacon Lb generates or helps generate or compute the absolute position OPi of object Om in three dimensions (3D) at a given instant in time Ti across time T.
  • The physical location or position of object Om in 3D space can be written as OPi(Xi,Yi,Zi). For example, Xi can be longitude, Yi can be latitude, Zi can be elevation. A reference point OPR(XR, YR, ZR) can be provided for the location OPi. The reference point OPR has coordinates (XR, YR, ZR) in the 3D space. For example, a reference point for longitude, latitude and elevation can be (0, 0, sea-level). The locator beacon Lb can generate or assist in the generation of a stream of ‘n’ position points OPi (Xi,Yi,Zi) over time T. Object (Om) can be static (immobile) or can be in motion. The object Om can be a shoe, a car, a projectile, a baseball, an American football, a soccer ball, a helmet, a horse, a ski or ski-boot, a skating shoe, etc. The object can be an aquatic, a land-based, or an airborne lifeform. The object can also be a manmade item that is tagged with the locator beacon. The Object Om may also be provided with a wired or wireless transmission device Dxmit. For example, the device Dxmit can be configured to transmit a data stream from locator device Lb to a receiver using some known protocol, e.g. HTTP, TCP or HTTPS.
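  • For illustration only, one locator-beacon reading and the resulting stream of positions over time T might be represented as in the following sketch; the field names are assumptions and are not specified by the patent.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class BeaconReading:
            beacon_id: str   # identifier of locator beacon Lb
            t: float         # capture time Ti (seconds)
            x: float         # Xi, e.g. longitude
            y: float         # Yi, e.g. latitude
            z: float         # Zi, e.g. elevation relative to the reference point

        # a short track of 'n' readings OPi produced for object Om
        track: List[BeaconReading] = [
            BeaconReading("Lb-42", 0.00, -77.0365, 38.8977, 20.0),
            BeaconReading("Lb-42", 0.04, -77.0364, 38.8978, 20.0),
        ]
        print(len(track), "readings for", track[0].beacon_id)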
  • FIG. 2 depicts schematically the transmitter attached to the object transmitting the tracking information from locator device Lb as a data stream to a receiver device, according to an embodiment of the present invention. As shown in FIG. 2, the data stream or collection of readings captured by locator beacon Lb is transmitted by transmission device (transmitter) Dxmit and received by receiver device (receiver) Drcv. The receiver device (receiver) Drcv can be a wired device, a wireless device, or a cellular device.
  • FIG. 3 depicts schematically the receiver device Drcv connected to a computer system Co that receives the data stream and stores it on a storage device S, according to an embodiment of the present invention. As shown in FIG. 3, the receiver Drcv sends the data to a computer system Co which in turn stores the received data on to a storage device S (e.g., a hard-drive, a network attached storage (NAS) device, a storage area network (SAN)). The data includes the data from locator device Lb, identifier of locator device Lb, metadata, the physical location or position of object Om in 3D space OPi(Xi,Yi,Zi), derived data, or any related data.
  • FIG. 4 depicts schematically a video camera, according to an embodiment of the present invention. FIG. 5 depicts schematically the video camera collecting an image at an instant in time, according to an embodiment of the present invention. As shown in FIG. 5, the camera Ca (e.g., a conventional video camera) depicted in FIG. 4 captures a plurality of temporally sequential images ICa. As further shown in FIG. 5, the camera image comprises a plurality of pixels CPi(X, Y), where i is an integer number referring to a specific pixel i and x and y referring to the position or coordinates of the pixel i. In an embodiment, the camera Ca captures a video with relatively large image size such as, for example, a 4K or 8K, or greater.
  • FIG. 6 depicts schematically the video camera collecting an image at an instant in time where the camera is static (fixed in position) and collecting the image at an incident angle, and the image having a perspective view of object(s) being captured in the image, according to an embodiment of the present invention. As illustrated in FIG. 6, if camera Ca is locked into a position such that it is at an angle, the resulting image coverage is trapezoidal where the objects closer to the camera appear larger and those farther from the camera appear smaller.
  • FIG. 7 depicts schematically the video camera collecting an image at an instant in time. The camera is static and collects the image looking down on the object(s), and the image is a top-view of the field of view, according to an embodiment of the present invention. In FIG. 7, the position of the camera is looking down from an elevated position onto a field of view containing objects.
  • FIG. 8 depicts schematically the video camera having sensors that provide a position of the camera, for example, a GPS sensor, an IMU sensor and/or any other type of sensors such as visual sensor, according to an embodiment of the present invention. As shown in FIG. 8, camera Ca can also be equipped with an optional GPS, IMU or other positioning sensors. In an embodiment, the orientation, pan, tilt and zoom values of camera Ca remain constant. In this case, the absolute position of the camera can be used to link the position of each pixel in the image captured by the camera to the physical space or field of view being captured by the camera Ca.
  • FIG. 9 depicts schematically an object being tracked and captured in an image at a location at an instant in time, in the field of view of the camera Ca with a known location and orientation, the pixel position of the object in the image at that instant in time can be computed from the physical location of the object, according to an embodiment of the present invention. As shown in FIG. 9, if the location and orientation of the camera is known, a tracked point OPi(Xi,Yi,Zi) in 3D space at time Ti, that falls within the field of view of the camera Ca has a corresponding projected point CPi(X,Y) within an image captured by the camera Ca at about the same time, OPi(Xi,Yi,Zi)→CPi(X, Y).
  • FIG. 10 depicts a conversion from the physical location of the object, within the field of view of the camera, being tracked or captured, to the pixel location of the object within an image captured by the camera at substantially the same instant in time, according to an embodiment of the present invention. As shown in FIG. 10, image processing, image tracking, and other image-rectification methods (such as ortho-rectification) can be used to accurately convert from the physical position OPi of object Om at any time Ti to an absolute pixel location CPi(X, Y), such that OPi(Xi,Yi,Zi)→CPi(X, Y), within a frame of video that is captured at time Tj, where Ti=Tj or Ti≈Tj. It is assumed that physical position OPi(Xi,Yi,Zi) is within the field of view of camera Ca. The computation for image distortion and subsequent orthographic adjustment can also be performed if desired by the user. This data is computed by computer Co and stored on storage S. In an embodiment, conventional processing and image tracking methods can be used to accomplish this conversion.
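  • The physical-to-pixel conversion described above can be illustrated with a short sketch. The following Python example is a minimal sketch, not the specific conversion of the present invention: it assumes a simple pinhole-camera model with a known intrinsic matrix K and a known camera rotation R and translation t, and the function name and calibration values are hypothetical, chosen only for illustration.

    import numpy as np

    def world_to_pixel(op_world, K, R, t):
        """Project a tracked 3D point OPi(Xi, Yi, Zi) to a pixel CPi(X, Y).

        K : 3x3 intrinsic matrix (focal lengths and principal point)
        R : 3x3 rotation from world coordinates to camera coordinates
        t : translation from world coordinates to camera coordinates
        """
        p_cam = R @ np.asarray(op_world, dtype=float) + t   # world frame -> camera frame
        if p_cam[2] <= 0:
            return None                                     # point is behind the camera
        p_img = K @ p_cam                                   # perspective projection
        return p_img[0] / p_img[2], p_img[1] / p_img[2]     # normalize by depth

    # Hypothetical calibration for a 3840x2160 camera
    K = np.array([[1500.0, 0.0, 1920.0],
                  [0.0, 1500.0, 1080.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.array([0.0, 0.0, 10.0])
    print(world_to_pixel((2.0, 1.0, 0.0), K, R, t))         # approximately (2220.0, 1230.0)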
  • FIG. 11 depicts schematically a video camera with sensors that determine the position of the camera such as, a GPS sensor, an IMU sensor or other types of sensors, according to an embodiment of the present invention. FIG. 11 depicts a camera Ca that may be identical to the one shown in FIG. 8.
  • FIG. 12 depicts schematically a matrix of a plurality of cameras such that the field of view of one camera substantially overlaps the field of view of an adjacent camera by a known amount, according to an embodiment of the present invention. An array of such cameras can be arranged in a matrix configuration comprising rows 1 to r and columns 1 to c, where Ca(i,j) is one camera in the matrix. Hence, cameras Ca(i+1,j) and Ca(i,j+1) are adjacent cameras to Ca(i,j). The indices i and j represent the row and column numbers in the matrix of cameras, respectively. In an embodiment, the cameras can be mounted in such a way that the field of view of one camera Ca(i,j) overlaps (has at least some overlap) with the field of view of an adjacent camera. The quantity of overlap can be configured and selected as needed. For example, the cameras can have an overlap such that at least 50% of the pixels in the horizontal direction for one camera are substantially identical to at least 50% of the pixels in the horizontal direction for an adjacent camera to the left or right of the one camera. In another example, the cameras can have an overlap such that at least 50% of the pixels in the vertical direction for one camera are substantially identical to at least 50% of the pixels in the vertical direction for an adjacent camera to the top or bottom of the one camera.
  • FIG. 13 depicts an image ICa captured by one camera in the matrix of cameras, the image having a pixel width W and a pixel height H, according to an embodiment of the present invention. As shown in FIG. 13, the image ICa as captured by such camera Ca has pixel width W and pixel height H. The pixel width W and pixel height H of the image captured by the camera can be selected as desired by the user, for example, using one of the settings of the camera.
  • FIG. 14 depicts an image ID selected for display on a display device D, where the pixel width of displayed image ID is DW pixels and the pixel height of image ID is DH pixels, according to an embodiment of the present invention. The displayed image ID is a portion of captured image ICa captured by the camera such that pixel width W of captured image ICa is greater than the pixel width DW of displayed image ID (W>DW) and pixel height H of captured image ICa is greater than pixel height DH of displayed image ID (H>DH). Although the term "displayed" is used in this example to indicate an image displayed on a display device such as a computer screen, it must be appreciated that the term "displayed image" can also encompass an image that is transformed or transmitted or otherwise processed and is not limited to only displaying the image on a display device.
  • FIG. 15 depicts schematically an example of image ID and image ICa such that W/2≥DW and H/2≥DH. In other words, the pixel width and pixel height of the displayed area of interest from the camera image are at most half the pixel width and pixel height of the image captured by the camera, according to an embodiment of the present invention. As shown in FIG. 15, the image ID is a sub-image of ICa where ID is an area of interest within ICa. For example, if W is 3840 and H is 2160, then DW is 1920 or smaller and DH is 1080 or smaller. Note that a display device D does not need to be a computer monitor or computer screen. It can be a window on a screen of a computer running the Microsoft WINDOWS operating system or Apple Mac OS-X operating system. This window may display an image of any pixel width and height. For example, display D can be a window that has the following dimensions: DW is 960 pixels and DH is 540 pixels.
  • In an embodiment, when using a matrix of cameras as shown in FIG. 12, the cameras are synchronized with each other. A first camera covers a first field of view and a second camera adjacent to the first camera covers a second field of view to the left or to the right of the first field of view. The first and second cameras are temporally synchronized, for example by using a ‘genlock’ signal. This implies that the first camera captures a first image at a first instant in time, and the second camera captures a second image at a second instant in time and the first instant in time and the second instant in time are substantially the same.
  • FIG. 16 depicts schematically an example of images captured by two adjacent cameras (e.g., the first camera and the second camera), each camera generating an image of pixel width W and pixel height H, with a horizontal overlap between the images of OW pixels where OW≥DW, according to an embodiment of the present invention. In other words, the first image captured by the first camera and the second image captured by the second camera overlap by a certain amount OW. In an embodiment, as shown in FIG. 16, the first image has pixels in the horizontal direction, along the width of the first image, that are substantially identical to the pixels in the horizontal direction, along the width of the second image. This is referred to as a pixel overlap or image overlap or overlapping image. The overlap is OW pixels wide. In an embodiment, the overlap OW is greater than or equal to the width DW of the display device or window displayed on the display device (i.e., OW≥DW).
  • FIG. 17 depicts schematically an example of two images generated by two adjacent cameras, each camera generating an image of pixel width W and pixel height H, with a horizontal overlap between the two images of OW pixels where OW≥DW, according to an embodiment of the present invention. FIG. 17 further depicts four examples of an area of interest D1, D2, D3 and D4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention. As shown in FIG. 17, if an area of interest D1, D2, D3 or D4 is such that OW≥DW, then that area of interest is wholly contained within the image ICa of at least one camera. As shown in FIG. 17, area of interest (AOI) D1 is contained within a first half of the first image ICa(0,0), AOI D2 is contained in both the second half of the first image as well as the first half of the second image in the overlap area OW, ICa(0,0) and ICa(0,1), and AOI D3 is contained within the second half of the second image, ICa(0,1). A portion of AOI D4 is contained within the second half of the second image and another portion of AOI D4 is contained within the first half of the second image and the second half of the first image, in the overlap area OW.
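  • The containment property illustrated in FIG. 17 can be expressed compactly. The following Python sketch is illustrative only; the function name and the example numbers are assumptions (using W=3840 and OW=DW=1920 as in the earlier example). It checks, for two horizontally adjacent images of width W with an overlap of OW pixels, which image or images wholly contain an area of interest of width DW whose left edge sits at aggregated pixel column x0.

    def containing_cameras(x0, DW, W, OW):
        """Return which of two horizontally overlapping images wholly contain the AOI.

        Image 0 spans aggregated columns [0, W); image 1 spans [W - OW, 2*W - OW).
        """
        assert DW <= OW, "the AOI must be no wider than the overlap (DW <= OW)"
        cameras = []
        if 0 <= x0 and x0 + DW <= W:                  # wholly inside image 0
            cameras.append(0)
        if W - OW <= x0 and x0 + DW <= 2 * W - OW:    # wholly inside image 1
            cameras.append(1)
        return cameras

    print(containing_cameras(500, 1920, 3840, 1920))    # [0]    (left of the overlap)
    print(containing_cameras(1920, 1920, 3840, 1920))   # [0, 1] (inside the overlap)
    print(containing_cameras(3000, 1920, 3840, 1920))   # [1]    (right of the overlap)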
  • In a similar fashion, a first camera may cover a first field of view and a second camera may cover a second field of view above or below the first field of view of the first camera. The first and second cameras are temporally synchronized, for example by using a ‘genlock’ signal. This implies that the first camera captures a first image at a first instant in time, and the second camera captures a second image at a second instant in time and the first instant in time and the second instant in time are substantially the same.
  • FIG. 18 depicts schematically an example of two images from two adjacent cameras (e.g., the first camera and the second camera), each camera generating an image of pixel width W and pixel height H, with a vertical overlap of OH pixels where OH≥DH, according to an embodiment of the present invention. In other words, a first image captured by the first camera and a second image captured by the second camera overlap by a certain amount OH. In an embodiment, as shown in FIG. 18, the first image has pixels in the vertical direction, along the height of the first image, that are substantially identical to the pixels in the vertical direction, along the height of the second image. This is referred to as a pixel overlap or image overlap or overlapping image. The overlap is OH pixels tall. In an embodiment, the overlap OH is greater than or equal to the height DH of the display device or window (OH≥DH).
  • FIG. 19 depicts an example of two adjacent first and second images from first and second cameras, each generating an image of pixel width W and pixel height H, with a vertical overlap of OH pixels where OH≥DH, and depicts four examples of areas of interest D1, D2, D3 and D4 such that each area of interest is wholly contained within the image captured by at least one camera, as a result of the overlap, according to an embodiment of the present invention. As shown in FIG. 19, if an area of interest D1, D2, D3 or D4 is such that OH≥DH, then that area of interest is wholly contained within the image ICa of at least one camera. As shown in FIG. 19, area of interest (AOI) D1 is contained within a first half of the first image ICa(0,0), AOI D2 is contained in both the second half of the first image as well as the first half of the second image in the overlap area OH, ICa(0,0) and ICa(1,0), and AOI D3 is contained within the second half of the second image, ICa(1,0). A portion of AOI D4 is contained within the second half of the second image and another portion of AOI D4 is contained within the first half of the second image and the second half of the first image, in the overlap area OH.
  • FIG. 20 depicts schematically images generated by a matrix of a plurality of cameras where the cameras are configured such that the pixel data in one image generated by one camera is substantially identical to the pixel data of one image generated by an adjacent camera due to overlapping fields of view of one camera with the adjacent camera, according to an embodiment of the present invention. FIG. 20 depicts the matrix of cameras of r-rows and c-columns, arranged to be adjacent to each other with overlapping fields of view, such that the images generated by one camera overlaps with the images generated by an adjacent camera in the horizontal plane and/or in the vertical plane. The images generated by the plurality of adjacent cameras (or matrix of cameras) form a larger aggregated image having a width Wfull and a height Hfull. The notional aggregated image is simply logical in nature as it is not physically generated by stitching images from each camera together to form a larger image. The row and column values or indices in the matrix of images from the plurality of cameras can be used for further computation.
  • FIG. 21 depicts schematically how a pixel in an aggregated image generated from the matrix of cameras mathematically maps to a pixel within one of the constituent images from one or more of the cameras, according to an embodiment of the present invention. As illustrated in FIG. 21, pixel CPi(Xfull,Yfull) in the resulting notional aggregated image maps to an actual pixel CPi(X,Y) which belongs to an image within a specific camera. The position of this pixel can be derived mathematically.
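  • One possible form of that derivation is sketched below in Python; it is an illustrative assumption, not the specific mapping of the present invention. If adjacent cameras are offset by (W - OW) columns and (H - OH) rows in the notional aggregated image, an aggregated pixel (Xfull, Yfull) can be mapped to a camera row and column and to a local pixel within that camera's image. When the pixel falls in an overlap region it belongs to more than one camera; this sketch simply picks the left/upper one.

    def aggregated_to_camera_pixel(x_full, y_full, W, H, OW, OH, rows, cols):
        """Map a pixel of the notional aggregated image to (row, col, x, y) of one camera."""
        stride_x, stride_y = W - OW, H - OH            # offset between adjacent camera origins
        col = min(x_full // stride_x, cols - 1)
        row = min(y_full // stride_y, rows - 1)
        x_local = x_full - col * stride_x              # pixel position within that camera's image
        y_local = y_full - row * stride_y
        return row, col, x_local, y_local

    # Example: 2 rows x 3 columns of 3840x2160 images with OW=1920 and OH=1080,
    # so Wfull = 3*3840 - 2*1920 = 7680 and Hfull = 2*2160 - 1080 = 3240.
    print(aggregated_to_camera_pixel(5000, 900, 3840, 2160, 1920, 1080, rows=2, cols=3))
    # (0, 2, 1160, 900)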
  • FIG. 22 depicts schematically one image captured at one instant in time by a camera, resulting in the capture of a plurality of temporally sequential images captured over time at a known frame rate, according to an embodiment of the present invention. A single image ICa is captured by a camera at a given instant in time Ti. The plurality of temporally sequential images are captured by one camera at a known frame rate Hz (frames/second). If all cameras are synchronized, all cameras generate such data at substantially the same rate Hz and at substantially the same instants in time.
  • FIG. 23 depicts a conversion of a plurality of temporally sequential images ICa captured by one camera into a video stream VCa in a known format and encoding, according to an embodiment of the present invention. As shown in FIG. 23, each image ICa captured by one camera Ca can be encoded inside or outside the camera into a video stream VCa. Therefore, video stream VCa can be delivered from the camera at a known frame rate (Hz) and in a known compressed or uncompressed format. As illustrated in FIG. 23, the sequence of images, each image being ICa, is converted to video stream VCa. The video stream VCa can be stored in primary or secondary computer memory or stored in a storage device for further processing or transmission. The video stream VCa can, for example, be displayed on a display device.
  • FIG. 24 depicts schematically a video stream VCa in a known format and encoding being decoded into a plurality of temporally sequential images ICa(decoded) that are substantially identical to the images originally captured by the camera and converted into the video stream VCa, according to an embodiment of the present invention. As shown in FIG. 24, video VCa can be decoded by another device into a plurality of temporally sequential images, each image ICa(decoded) being substantially similar to the corresponding original image ICa used to create VCa. Video VCa is delivered from a camera at a known delivery or capture frame rate (in Hz) and in a known compressed or uncompressed format. The sequence of images in the video is decoded from the video at a known decoding frame rate. The decoding frame rate can be the same as or different from the delivery or capture rate of the video.
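  • As one illustration of the encode/decode round trip of FIGS. 23 and 24, the following Python sketch uses the OpenCV library; the file name, codec, frame rate, and the use of synthetic black frames are illustrative assumptions rather than part of the described system.

    import cv2
    import numpy as np

    W, H, HZ = 3840, 2160, 30                          # illustrative width, height, frame rate
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")           # a known compressed format
    writer = cv2.VideoWriter("VCa.mp4", fourcc, HZ, (W, H))

    # Encode a plurality of temporally sequential images ICa into video stream VCa
    for _ in range(60):
        frame = np.zeros((H, W, 3), dtype=np.uint8)    # placeholder for a captured image ICa
        writer.write(frame)
    writer.release()

    # Decode the video stream back into images ICa(decoded)
    cap = cv2.VideoCapture("VCa.mp4")
    while True:
        ok, decoded = cap.read()                       # one ICa(decoded) frame per call
        if not ok:
            break
    cap.release()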
  • FIG. 25 depicts schematically a desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images ICa(decoded) captured over time T, according to an embodiment of the present invention. FIG. 25 depicts that a desired area of interest ID can be extracted from an image ICa(decoded). A plurality of areas of interest similar to ID can be extracted from a plurality of images similar to ICa(decoded).
  • FIG. 26 depicts schematically a plurality of images representing the desired area of interest or desired region D extracted from a plurality of decoded temporally sequential images ICa(decoded) captured over time T being converted into a video codestream VID of a known format and encoding, according to an embodiment of the present invention. The extracted plurality of areas of interest images, each image being an area of interest image ID from ICa(decoded), can then be encoded to a video codestream VID of known format.
  • FIG. 27 depicts schematically a plurality of cameras arranged in a matrix configuration such that the field of view of one camera has at least some overlap with the field of view of an adjacent camera, and each camera generates a video containing some pixels in the video from one camera that are substantially identical to some pixels in the video from an adjacent camera because of overlapping fields of view, according to an embodiment of the present invention. For example, in an embodiment, the plurality of cameras can be configured to overlap by a width of OW pixels and a height of OH pixels, where, for example, OW is at least W/2 (i.e., greater than or equal to half the width W of an image captured by one camera) and OH is at least H/2 (i.e., greater than or equal to half the height H of an image captured by one camera). The plurality of cameras can be synchronized with each other such that the instant in time when an image is captured by one camera is substantially the same as the instant in time when an image is captured by all other cameras. The cameras can also be color calibrated to substantially match each other's color space. In an embodiment, all cameras are configured to generate video streams, depicted as VCa1, VCa2, VCa3, and so on. As a result, a video matrix of overlapping video streams can be obtained from the matrix of cameras which capture overlapping images/videos.
  • FIG. 28 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be represented by one or more pixels within the video generated by the at least one camera, according to an embodiment of the present invention. As shown in FIG. 28 and previous figures, any point OPi(Xi,Yi,Zi) in the physical space, that is within the field of view of a camera at any given instant in time Ti, can be transformed into pixel space CPi(X,Y) for a specific camera. Since the camera can be identified, the video that is generated by the camera, and the instant in time within that video, can be found using any conventional method. If point OPi(Xi,Yi,Zi) is the location in the physical space of an object Om being tracked, then the pixel location of object Om can be projected to any pixel CPi(X,Y) within a video stream being captured by a camera that is in turn part of the camera matrix, provided OPi(Xi,Yi,Zi) is within the field of view of at least one camera within the camera matrix.
  • It is well known to those skilled in the art that the values of X, Y, Z and T of a plurality of points collected over time can be interpolated using conventional interpolation methods such as spline interpolation. A point OPi(Xi, Yi, Zi) that depicts the approximate location of the object Om being tracked can be generated at each instant in time Ti when an image was captured by the camera. As time T moves forward, a moving object with coordinates OPi(Xi, Yi, Zi) may traverse one or more overlapping cameras. Each dot shown in FIG. 28 represents a position of the object at a specific instant in time.
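  • As one possible illustration of such interpolation (a minimal sketch using SciPy, with made-up sample values; it is not the specific interpolation of the present invention), sparse locator samples (Ti, OPi) can be fit with a cubic spline and evaluated at every camera frame time:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Sparse locator samples: times Ti and positions OPi(Xi, Yi, Zi)
    T = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    OP = np.array([[0.0, 0.0, 0.0],
                   [4.8, 0.1, 0.0],
                   [9.9, 0.5, 0.0],
                   [15.2, 1.1, 0.0],
                   [20.5, 2.0, 0.0]])

    spline = CubicSpline(T, OP)                  # one spline per coordinate (X, Y, Z)

    HZ = 30.0                                    # camera capture frame rate
    frame_times = np.arange(T[0], T[-1], 1.0 / HZ)
    interpolated = spline(frame_times)           # estimated OPi at every camera frame time
    print(interpolated.shape)                    # (60, 3)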
  • FIG. 29 depicts schematically an object being tracked within the field of view of at least one camera, at any given instant in time, and the physical location of the object can be accurately represented by one or more pixels within the video generated by the camera, and depicts a pixel width and a pixel height of an overlapping region greater than or equal to the pixel width and pixel height of a desired area of interest centered around the one or more pixels representing the object being tracked, according to an embodiment of the present invention. As time T moves in the forward direction, a moving object may traverse one or more overlapping cameras. Each dot in FIG. 29 represents a position of the object captured by at least one camera at a certain point in time. As shown in FIG. 29, in an embodiment, each tracked point OPi in the physical space, measured or interpolated, can be mapped to a pixel position CPi of that point from an image frame ICa(decoded) from video VCa captured by a camera Ca. An area of interest image ID that is substantially centered on pixel position CPi can then be selected (i.e., around the location of the object of interest). The constraint on the pixel width DW and pixel height DH of area of interest ID is that DW≤OW and DH≤OH, where OW is the amount of overlap in camera video pixels in the horizontal direction and OH is the amount of overlap in camera video pixels in the vertical direction. Each area of interest ID for each tracked point in the plurality of tracked points can be computed to exist in its entirety within the video generated by at least one camera. If, for example, the area of interest is located within the overlap region, then the area of interest is within both an image captured by a first camera in the plurality of cameras and another image captured by a second camera in the plurality of cameras.
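  • The extraction of an area of interest that is substantially centered on CPi can be sketched as follows (illustrative Python; the function name and image size are assumptions). The crop window is clamped so that it stays inside the image; the constraint DW≤OW and DH≤OH stated above is what guarantees that at least one camera image wholly contains the unclamped window.

    import numpy as np

    def extract_area_of_interest(frame, cpi_x, cpi_y, DW, DH):
        """Crop a DW x DH area of interest substantially centered on pixel CPi(X, Y)."""
        H, W = frame.shape[:2]
        x0 = int(round(cpi_x - DW / 2))
        y0 = int(round(cpi_y - DH / 2))
        x0 = max(0, min(x0, W - DW))             # clamp so the window stays inside the image
        y0 = max(0, min(y0, H - DH))
        return frame[y0:y0 + DH, x0:x0 + DW]

    frame = np.zeros((2160, 3840, 3), dtype=np.uint8)    # a decoded ICa frame (illustrative size)
    aoi = extract_area_of_interest(frame, cpi_x=3700, cpi_y=100, DW=1920, DH=1080)
    print(aoi.shape)                                     # (1080, 1920, 3)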
  • FIG. 30 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representation of the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view, according to an embodiment of the present invention. As shown in FIG. 30, in an embodiment, a plurality of area of interest images ID can be extracted from videos captured by one or more cameras. The images ID corresponding to the area of interest that are extracted from the videos VCa1, VCa2, etc. can be arranged in sequence or encoded to form a video (e.g., a time-lapse video), as described in the following paragraph.
  • FIG. 31 depicts a plurality of images comprising the desired area of interest centered around the pixel representation of the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and then converted into a video codestream in a known format and encoding, according to an embodiment of the present invention. As shown in FIG. 31, in an embodiment, a plurality of area of interest images, each image being similar to ID, can be encoded into a video codestream VID of a known format and encoding.
  • FIG. 32 depicts schematically a plurality of images comprising the desired area of interest centered around the pixel representing the object being tracked being extracted from a plurality of video codestreams from a plurality of cameras arranged in a matrix of overlapping fields of view and converted into a video codestream in a known format and encoding and transmitted over a LAN, WAN or the Internet to a computing device for decoding, viewing or image processing and analysis or display on a display device, according to an embodiment of the present invention. As shown in FIG. 32, in an embodiment, a first video codestream VID of a known format and encoding can be transmitted to a first consumer application such as an APPLE iPad® application or an ANDROID application or a desktop computer software application or a client software program, where the video VID is generated to follow a first object Om and the first object Om is selected for being followed by the first consumer application. In another embodiment, a second video codestream VID of a known format and encoding can be transmitted to a second consumer application such as an APPLE iPad® application or ANDROID application or a desktop computer software application or a client software program, where the video VID is generated to follow a second object Om and the second object Om is selected for being followed by the second consumer application. The second object can be the same as or different from the first object.
  • FIG. 33 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked, according to an embodiment of the present invention. In an embodiment, a client requests a live video that follows an object Om, where the requested pixel dimensions of the video are DW pixels wide by DH pixels high. At S1100, a user selects an object Om to be followed and viewed for display or processing D, a user being a human or a computer software program or other interface element or device. At S1120, client software sends the identity of Om and the specification of D, for example width DW and height DH (e.g., display window dimensions). For example, the user may send the dimensions of a tablet screen, a smartphone screen, a window on a screen of a personal computer, a television (TV) screen, or a window or portion of the TV screen, etc. The client sends this request over a LAN, WAN or the Internet and may use HTTP or secure HTTP (HTTPS) or any other computer network protocol such as TCP, ATM, etc. At S1140, server software receives the request. In an embodiment, for all subsequent operations by the server, the server may choose to exit upon error and report an exception with an error message, if an error occurs.
  • At S1160, the server validates the request from the client software. The validation includes determining that object Om is indeed within the field of view of at least one camera within the matrix of cameras and that the area of interest DW and DH dimensions (for example, corresponding to the display dimensions) follow the constraints on the overlapping pixel width OW and overlapping pixel height OH for each video, for each camera. That is, DW≤OW and DH≤OH, where OW is the amount of overlap in camera video pixels in the horizontal direction and OH is the amount of overlap in camera video pixels in the vertical direction. In an embodiment, the cameras are static and do not pan, tilt or zoom. In an embodiment, the cameras are color-matched and synchronized. The cameras are arranged in a matrix such that there is sufficient overlap of the field of view of the cameras so that the overlap in pixels of the captured images or videos is at least OW pixels wide and OH pixels tall.
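  • A minimal sketch of this validation step (S1160) is shown below in Python; the camera object and its contains_point method are hypothetical placeholders for however a given deployment represents camera fields of view.

    def validate_request(opi, DW, DH, OW, OH, cameras):
        """Reject the request if the object is out of view or the AOI does not fit the overlap."""
        if not any(camera.contains_point(opi) for camera in cameras):
            raise ValueError("object Om is not within the field of view of any camera in the matrix")
        if DW > OW or DH > OH:
            raise ValueError("area of interest does not fit the overlap: requires DW <= OW and DH <= OH")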
  • In an embodiment, after performing the validation at S1160, the server finds data, at S1180, from the locator beacon Lb for object Om. The server searches for and finds the most recent data from locator Lb. Within the data from locator Lb, the server finds or computes physical object location OPi at S1200. At S1220, the server uses data from the locator to compute the camera Ca, within the matrix of cameras, that wholly contains an area of interest that is DW pixels wide by DH pixels tall and centered substantially at pixel coordinate or position CPi. CPi is the pixel coordinate in an image, at a specific instant in time, from the video captured by the camera Ca corresponding to physical object location OPi of object Om captured by camera Ca. The server transforms the physical position OPi(Xi, Yi, Zi) obtained from the locator beacon data Lb into pixel position CPi(Xi,Yi) within an image for a specific camera such that an area of an image of size DW×DH which is substantially centered around position CPi is wholly contained within the camera image.
  • At S1240, an image at the specific instant in time is decoded from the selected video from the selected camera. The video is live. At S1260, the area of interest that is DW pixels wide by DH pixels tall is extracted from the decoded image. The object Om is substantially at the center of the extracted area of interest. If further image processing is required to correct the image for distortions, it is performed at S1280. The area of interest image that is DW pixels wide by DH pixels tall is then encoded into the video codestream VID. The video codestream is, for example, stored in a computer memory buffer, at S1300. At S1320, when the computer memory buffer has sufficient data for transmission, the computer transmits the data to the client that made the request. At S1340, the method repeats the search of the storage for the physical object location OPi at S1200 to get the next physical or interpolated location of OPi from locator beacon data Lb that was computed by Co and stored on storage device S, and repeats the next steps at S1220, S1240, S1260, S1280, S1300 and S1320, as needed, until the client cancels the request or ends the processing or closes the connection, at S1340. In case the client closes the connection or cancels the request or ends the processing, the method ends at S1340.
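  • The live-follow loop of FIG. 33 can be summarized by the following control-flow sketch in Python. Every helper function below (find_latest_locator_data, compute_physical_location, select_camera, decode_live_frame, extract_area_of_interest, correct_distortion, append_to_codestream) and the client object are hypothetical placeholders for the operations described above, and the 64 KiB transmission threshold is an arbitrary illustrative value, not part of the described method.

    # Hypothetical placeholders for the operations described at S1180-S1320
    def find_latest_locator_data(storage, om_id): ...
    def compute_physical_location(locator_data): ...
    def select_camera(opi, DW, DH): ...
    def decode_live_frame(camera, time): ...
    def extract_area_of_interest(frame, cpi, DW, DH): ...
    def correct_distortion(aoi): ...
    def append_to_codestream(aoi, buffer): ...

    def follow_object_live(om_id, DW, DH, client, storage):
        """Control-flow sketch of the live-follow loop of FIG. 33 (S1180 through S1340)."""
        buffer = bytearray()
        while client.is_connected():                          # S1340: repeat until the client cancels
            lb = find_latest_locator_data(storage, om_id)     # S1180: most recent locator beacon data
            opi = compute_physical_location(lb)               # S1200: physical location OPi(Xi, Yi, Zi)
            camera, cpi = select_camera(opi, DW, DH)          # S1220: camera wholly containing the AOI
            frame = decode_live_frame(camera, lb.time)        # S1240: decode the image at that instant
            aoi = extract_area_of_interest(frame, cpi, DW, DH)  # S1260: DW x DH area centered on CPi
            aoi = correct_distortion(aoi)                     # S1280: optional distortion correction
            append_to_codestream(aoi, buffer)                 # S1300: encode into video codestream VID
            if len(buffer) >= 64 * 1024:                      # S1320: transmit once enough is buffered
                client.send(bytes(buffer))
                buffer.clear()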
  • FIG. 34 depicts a flowchart of a method of delivering a video of an area of interest centered on the physical position of an object being tracked from an existing archive for a requested time window, according to an embodiment of the present invention. The method starts when a client requests an existing archived video that follows an object Om, where the requested pixel dimensions of the video are DW pixels wide by DH pixels tall. At S2100, a user selects an object Om to be followed and viewed for display or processing D. The user can be a human or a computer software program or other interface element. The user also selects a time window Tin to Tout between which the user wishes to follow the object, where Tin<Tout. This can be, for example, as a result of a request to replay a portion of a video after the event has occurred. At S2120, client software sends the identity of Om and the specification of D, for example DW and DH, and the time window Tin to Tout. In one embodiment, the client may send this request over a local area network (LAN), a wide area network (WAN) or the Internet and may use HTTP or secure HTTP (HTTPS) or any other computer network protocol such as TCP, ATM, etc. At S2140, the server software receives the request. The server may choose to exit if an error occurs and report an exception with an error message. At S2160, the server validates the request from the client software. Validation includes verifying that there is content for the specific time window, and/or determining whether Om is indeed within the field of view of at least one camera within the plurality of cameras and determining whether the area of interest dimensions DW and DH are contained within the overlapping pixel width OW and within the overlapping pixel height OH for each video from each camera. In an embodiment, the cameras are static and do not pan, tilt or zoom. In an embodiment, the cameras are color-matched and/or synchronized. The cameras are arranged in a matrix such that there is sufficient overlap between the fields of view of the cameras so that the overlap in pixels of the resulting video is at least OW pixels wide and OH pixels tall.
  • After validation, at S2180, the server finds data from the locator beacon Lb for object Om. At S2200, the server enters a loop for each time instant Ti starting from Tin to Tout in steps of dT where dT is 1/Hz (Hz is the video camera capture frame rate). At S2220, the server searches storage device S for and finds the data from locator beacon Lb at time Ti. At S2220, within that data for Lb, the server finds or computes the physical location OPi of object Om. At S2240, the server computes the camera Ca, within the matrix of cameras, that wholly contains an area of interest that is DW pixels wide by DH pixels tall and centered substantially around pixel position CPi in the image of the video containing the captured object Om that corresponds to the physical location OPi of object Om. In other words, CPi is the pixel coordinate in an image, at a specific instant in time, from the video captured by the camera Ca corresponding to physical object location OPi of object Om captured by camera Ca.
  • At S2260, an image at the specific instant in time is decoded from the selected video from the selected camera. The video is archived on a storage device S from a prior capture. At S2280, the area of interest that is DW pixels wide by DH pixels tall is extracted from the decoded image. CPi is substantially at the center of this image. Thus object Om is substantially captured at the center of the image. If further image processing is needed to correct the image for distortions, it is performed at S2300. The area of interest image that is DW pixels wide by DH pixels tall is then encoded into the video codestream VID and stored, for example, in a computer memory buffer, at S2320. At S2340, when the computer memory buffer has sufficient data for transmission, for example, the server transmits the data containing the video codestream VID in the memory buffer to the client that made the request. At S2360, the method goes to S2200 and repeats the procedures S2220, S2240 and S2260, as needed, to get the next physical or interpolated location of OPi from locator Lb at the next time Ti, which is computed by server Co and stored on storage S. If and/or when Ti reaches time Tout, or the client closes the connection or cancels the request, the method ends at S2360.
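  • The archived-playback loop of FIG. 34 differs from the live case mainly in stepping through the requested time window at the capture frame rate. A sketch, reusing the hypothetical placeholder helpers from the live-follow sketch above plus a few archive-specific placeholders, might look like this:

    def find_locator_data_at(storage, om_id, ti): ...        # hypothetical archive lookups
    def decode_archived_frame(storage, camera, ti): ...
    def send_encoded(client, aoi): ...

    def follow_object_archived(om_id, DW, DH, t_in, t_out, hz, client, storage):
        """Control-flow sketch of the archived-playback loop of FIG. 34 (S2200 through S2360)."""
        dt = 1.0 / hz                                          # dT = 1/Hz, the capture frame rate
        ti = t_in
        while ti < t_out and client.is_connected():            # S2200/S2360: loop over the time window
            lb = find_locator_data_at(storage, om_id, ti)       # S2220: locator beacon data at time Ti
            opi = compute_physical_location(lb)                 # S2220: physical location OPi of Om
            camera, cpi = select_camera(opi, DW, DH)            # S2240: camera wholly containing the AOI
            frame = decode_archived_frame(storage, camera, ti)  # S2260: decode the archived image at Ti
            aoi = extract_area_of_interest(frame, cpi, DW, DH)  # S2280: extract the area of interest
            send_encoded(client, correct_distortion(aoi))       # S2300-S2340: correct, encode, transmit
            ti += dt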
  • In the following paragraphs, some examples of implementation of the present invention will be described. FIGS. 36A-36D depict an example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to an embodiment of the present invention. For example, a plurality of cameras (not shown in FIG. 36A) can be distributed around the race track (e.g., NASCAR race track) 12 shown in FIG. 36A. The cameras are static (do not tilt, do not pan and do not zoom) and thus each camera has one orientation and captures a specific field of view. For example, the plurality of cameras can be arranged so as to cover the entire track 12. As shown in FIG. 36D, adjacent cameras 20A and 20B have overlapping fields of view (FOV) 21A and 21B so as to capture images or videos containing objects of interest in overlapping region 21C, for example. An object (in this case a race car) 14 having a location tracking device or locator beacon such as a geo positioning system (GPS) location device, upon selection by a user, can be followed by the user along the race track 12. Each camera 20A and 20B will capture a series of sequential images or frames within its specific field of view 21A and 21B, respectively. As it can be appreciated, the overlap region 21C can provide a smooth transition from the sequence of frames captured by the camera 20A to the sequence of images or frames captured by the camera 20B during the transition of the race car 14 from the field of view 21A of camera 20A to the field of view of camera 20B in the race track 12. As depicted in FIGS. 36B and 36C, a user can employ a viewing device such as a tablet 16 or a smartphone 18, etc. As illustrated in FIGS. 36B and 36C, a first user may be following and viewing selected race car 14A on display device (e.g., the tablet) 16 while a second user may be following and viewing a different selected race car 14B on display device (e.g., the smartphone) 18. Each user can select a car of interest as desired by activating or pointing at the car of interest using a pointer device or the like (or a finger touch on a touch sensitive device such as a tablet, for example). As shown in FIGS. 36B and 36C, the users can view completely distinct sets of images or videos. While a first user with display device 16 is viewing race car 14A racing on the race track 12, a second user has just witnessed a collision between race car 14B and other race cars and a track barrier in the race track 12. Although, as illustrated in this example, each user is viewing a different set of images or videos, other users may select the same race car and thus be able to follow the same race car (object) and thus view the same set of images. In addition, other users may select a race car that will be within the same field of view as another car in the race track (for example, two cars close to each other or passing each other) and thus the user will be able to view, for at least a period of time, images or videos containing both the race car of interest to one user and the race car of interest to another user. Although an example of implementation is provided herein using a car race (e.g., NASCAR, FORMULA 1), the same can be applied to a derby or horse race, a boat race, a motorbike race, a bicycle race, a marathon race or other track run, a ski race, a skating race, etc.
  • FIG. 37 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention. For example, video tracking of a suspect car by a law enforcement agency can be accomplished by using the method described herein. For example, upon selection by a law enforcement officer of a car of interest or person of interest, the system is able to deliver a video showing and following the car or person of interest on a display device of the law enforcement officer without the law enforcement officer switching between a plurality of display devices or manually switching between cameras as is performed by conventional methods and systems. FIG. 37 depicts a geographical map 30 of a city showing a path 31 taken by a car of interest 32 tracked by the law enforcement agency. The law enforcement officer can track the car of interest 32 on his display device (e.g., a tablet, a computer screen or the like) along the path 31 taken by the car of interest 32. There are many cameras placed in various locations along streets within the city. For example, the cameras can be placed on specific buildings or structures while pointing in specific directions to capture specific fields of view. The system and method described herein are able to determine a camera within the plurality of cameras that wholly contains the car of interest (tracked by the law enforcement officer) using the physical location of the car (acquired from the locator beacon such as a GPS device on the car) and extract the area of interest corresponding to the physical location of the car of interest, as shown in images 34A-E. The system and the method can deliver images or a video of the car of interest 32 while the car 32 is driving in the city. The system is able to "switch" from images captured by one camera to an adjacent camera seamlessly to provide a continuous video showing the car 32 driving in the streets of the city and thus allowing the law enforcement officer to assess the position of the car 32 on the map 30. If a camera is not operating properly or there is a "blind area" or a field of view not captured by any of the cameras in the city, the system can switch or revert back to a map mode to display a portion 30A of the map 30 or the position of the car of interest on the portion 30B of the map 30, depending on the location of the car 32, as shown in images 35A-B.
  • FIG. 38 depicts another example of providing a video of an object of interest on-demand based on a selection of the object by a user, according to another embodiment of the present invention. FIG. 38 shows a virtual football field 40 for fantasy team gameplay. A plurality of cameras (not shown) are arranged in a plurality of football stadiums (e.g., NFL stadiums). Each camera in the plurality of cameras captures images within a specific field of view in a specific football stadium. In an embodiment, the fields of view of adjacent cameras in any football stadium can be arranged so as to overlap to provide an overlap area or zone. The various videos from the different cameras in each stadium can be stored in a storage device and, as described above, the position of a player on the football field is linked to an image of the football player stored in the video captured by one or more of the plurality of cameras. This is performed for each player and for each game in each of the plurality of football stadiums. A user or gamer is able to select a fantasy team by assembling players from real teams. When a fantasy player in a gamer's fantasy team scores a "down" (e.g., a touchdown, first down, etc.), as shown in images 41 and 42, a video snippet can be played to the gamer showing the fantasy player scoring a touchdown or at a first down. The video snippet is extracted from the images in the videos stored in the storage device in accordance with the position of the fantasy player on the fantasy field. The video snippet is a clip of video of finite duration. This video snippet can be extracted using the methods described in the above paragraphs because the plurality of cameras capturing the video have a known overlap of each of their FOV and the cameras are arranged to ensure an effective matrix configuration of the resulting content captured by the cameras. Therefore, as it can be appreciated from the above, various football games (NFL, or school games, etc.) can be made accessible at all times within a single application. Various videos can be delivered on-demand to fans with the ability for each fan to select a specific player in order to view a video of the player in action. This can appear to each fan as being in the director's seat and controlling a virtual camera while having the freedom to virtually "tilt", "pan" and/or "zoom in/out" the virtual camera to "point" the camera at a player of interest without the fan physically controlling any real camera in a football stadium. The term "virtual camera" is used herein to indicate that the camera is not real and the user is able to control the virtual camera as if it were a real moving camera (a camera that is able to zoom in/out, tilt and/or pan) to follow an object of interest to the user. This also provides an alternative streaming solution for a plurality (millions to hundreds of millions) of smart devices (tablets and smart phones) in the market. In addition, this allows casting alternative video streaming from the smart devices to televisions independent of traditional cable television subscriptions. This system also enables "speedwatch" functionality to allow fans to watch only their subjects of interest, on-demand.
  • FIGS. 39A-39D depict an example of a display device (e.g., a smart device) for viewing a custom on-demand video of an object of interest, according to an embodiment of the present invention. For example, when a fan purchases officially licensed team merchandise, credential information may be included with the merchandise to enable the fan to access a "StreamPlay" service to allow the fan to watch or view a custom video of a player of the licensed team. In order to access the "StreamPlay" service, the fan first downloads or opens a software application (APP) on a smart device, for example, as shown in FIG. 39A. The fan then validates the team that is associated with the merchandise that the fan purchased and loads the APP on the smart device, as shown in FIG. 39B. In an embodiment, during the loading of the APP on the smart device, team specific highlights can be played to the fan to attract or retain the attention of the fan while the APP loads. Various games played by the team associated with the purchased merchandise and validated by the fan are listed to the fan on the smart device. The fan then selects a specific game to watch, as shown in FIG. 39C. The user or fan is also provided with a graphical user interface (GUI) and functionality to control or customize his viewing experience, as shown in FIG. 39D. In an embodiment, the game begins in portrait mode while providing the option to switch to landscape mode if desired by pressing a button in the GUI. In another embodiment, the switching of the viewing mode from portrait to landscape can simply be accomplished by turning the smart device by 90 deg., as is generally performed in any smart device.
  • FIGS. 40A-40D depict an example of interaction of a user with a graphical user interface (GUI) providing functionality to access a custom video, according to an embodiment of the present invention. As discussed in the above paragraphs, a plurality of fixed-point cameras are mounted in various stadiums to capture videos from their specific fields of view. The captured videos are transmitted to a video server. In an embodiment, the videos can be stitched together or provided with some overlap to allow a smooth transition from a view captured by one camera to a view captured by an adjacent camera. In an embodiment, digital audio of play-by-play of the game (home team and away team) is also captured and transmitted to an audio server (which can be the same as or different from the video server). Game metadata is also generated during the live transmission of the video and audio data to the video server and audio server. The game metadata is detailed information about the game after completion of the game. For example, in an embodiment, individual player statistics and game statistics are included in the game metadata. For example, additional information can also be added by subject matter experts such as commentators. For example, trivia information and relationships of a specific aspect of a specific portion of a specific game in relation to similar situations in past games may also be included in the game metadata. For example, during the game, play-by-play information is added by the game administrators or game service providers (for example, the NFL). This information is added in the form of game metadata. Such game metadata is presented during the recorded broadcast of the games and is made available to subscribers over the Internet. The videos and the audio along with the metadata are processed and exported or transmitted to individual user or fan devices (e.g., smart devices) 50 to display custom videos based on input selections from the user or fan, in accordance with the method described in the above paragraphs, as shown in FIG. 40A. In an embodiment, the user or fan can, for example, make appropriate selections and take appropriate actions using a graphical user interface (GUI) 60 incorporated within the received custom video 62. For example, the GUI 60 can include semi-transparent navigation controls 61 with functionality that minimizes viewing distraction and reduces clutter, as shown in FIG. 40A.
  • The functionality provided through the semi-transparent controls 61 of the GUI 60 allows a user or fan to control various inputs including stop and playback and rewind to first downs, big plays, scoring plays, etc., as shown in FIG. 40B. The controls in the GUI 60 also allow a user to quickly navigate to home team or visitor team cameras, as well as listen to home team, visitor team or media (e.g., ESPN) radio broadcasts. The semi-transparent controls 61 of the GUI 60 are displayed when a user touches the display of the smart device 50 and fade out if no input or action is received by the GUI 60 after a certain period of time (a few seconds to a few minutes) has elapsed. The period of time can be set by the user or fan as desired. The system can select the best viewing angle for the user depending on the selection of the player made by the user. However, the user is also able to control the viewing angle by selecting an appropriate camera icon 63, as shown in FIG. 40C. The GUI 60 of the system also provides the functionality to send the game or the custom video being viewed by the user on the screen of the smart device 50 to a remote television by pressing a button "Activate Mobile2TV" 64, as shown in FIG. 40D.
  • FIG. 41 depicts another configuration of the GUI that is displayed on a display device (e.g., a smart device) 50 of a user, according to an embodiment of the present invention. In an embodiment, once the user or fan presses the button "Activate Mobile2TV" 64, as shown in FIG. 40D, the video streamed on the smart device 50 is sent to a television 70, as shown in FIG. 41. In an embodiment, as a result of sending the custom video from the smart device 50 to the television 70, the GUI 60 on the smart device changes its configuration to instead display a football field configuration with a plurality of icons 65 representing the positions of the plurality of cameras around the football stadium. Therefore, in an embodiment, the custom video is played on the television but the control functionality provided by the GUI stays on the smart device. This allows a remote interaction of the fan or user with the game as a whole. The user is able, for example, to select a player to display on the screen. The user is also able to select the camera by selecting (clicking or touching) the corresponding icon 66 on GUI 60 and thus be able to watch the video from that camera point of view. The user has enhanced camera controls and has complete control over virtual movement (tilt, zoom, pan, etc.) of a virtual camera. Furthermore, the user has greater control over forward and backward play via the speedwatch functionality. As it can be appreciated, the user or fan plays the role of "a movie director" who has control over which view or camera to select. In an embodiment, the GUI 60 can further provide to the user or fan the functionality of taking screenshots or storing video clips and posting the video clips and/or the screenshots to social media. In an embodiment, while the user is interacting with the GUI 60 to control various features of the custom video of the game, the custom video is still played on the television screen 70. In an embodiment, advertisement images, scoreboards and other graphics or writings (such as logos, badges, etc.) may be inserted by the system (computer server) into the custom video automatically or as controlled by the computer server to provide additional revenue generation. In an embodiment, the advertisement or other graphics may be tailored depending on the player or team selected by the user. In an embodiment, the insertion of the advertisement is not controlled by the user but tailored by the computer server according to the team selected by the user. For example, if the home team is the Washington Redskins®, an advertisement showing a particular restaurant in Washington, D.C. can be displayed in addition to the custom video of the football game selected by the user.
  • As it can be appreciated from the above paragraphs, dynamic video streams (DVS), referred to in the above paragraphs as videos or images, are generated from multiple geo-referenced fixed video cameras. The video streams are custom generated according to inputs of the user such as a selection of a specific object or target of interest to be watched or followed. The DVS dynamically tracks the targets of interest from camera to camera based on target locations. The DVS provides at least one video stream for each object of interest using video(s) or images captured by one or more of the plurality of cameras. Multiple targets of interest can be tracked simultaneously.
  • In an embodiment, the method or methods described above can be implemented as a series of instructions which can be executed by a computer, the computer having one or more processors or computer processor units (CPUs). As it can be appreciated, the term “computer” is used herein to encompass any type of computing system or device including a personal computer (e.g., a desktop computer, a laptop computer, a tablet, a smartphone, or any other handheld computing device), or a mainframe computer (e.g., an IBM mainframe), or a supercomputer (e.g., a CRAY computer), or a plurality of networked computers in a distributed computing environment.
  • For example, the method(s) may be implemented as a software program application which can be stored in a computer readable medium such as hard disks, CDROMs, optical disks, DVDs, magnetic optical disks, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash cards (e.g., a USB flash card), PCMCIA memory cards, smart cards, or other media.
  • Alternatively, a portion or the whole software program product can be downloaded from a remote computer or server via a network such as the internet, an ATM network, a wide area network (WAN) or a local area network.
  • Alternatively, instead of or in addition to implementing the method as computer program product(s) (e.g., as software products) embodied in a computer, the method can be implemented as hardware in which, for example, an application specific integrated circuit (ASIC) can be designed to implement the method.
  • Various databases can be used which may be, include, or interface to, for example, an Oracle™ relational database sold commercially by Oracle Corporation. Other databases, such as Informix™, DB2 (Database 2) or other data storage, including file-based, or query formats, platforms, or resources such as OLAP (On Line Analytical Processing), SQL (Standard Query Language), a SAN (storage area network), Microsoft Access™ or others may also be used, incorporated, or accessed. The database may comprise one or more such databases that reside in one or more physical devices and in one or more physical locations. The database may store a plurality of types of data and/or files (e.g., images or video(s)) and associated data or file descriptions, administrative information, or any other data (e.g., metadata of the images or video(s)).
  • FIG. 35 depicts an example of a computer system (e.g., a computer server) for implementing one or more of the methods and systems of delivering or viewing a video of an area of interest centered on a physical location of an object being tracked, according to an embodiment of the present invention. FIG. 35 is a schematic diagram representing a computer system 100 for implementing the methods, according to an embodiment of the present invention. As shown in FIG. 35, computer system 100 comprises a computer processor unit (e.g., one or more computer processor units) 102 and a memory 104 in communication with the processor 102. The computer system 100 may further include an input device 106 for inputting data (such as a keyboard, a mouse, a joystick, a game controller, a touchscreen, etc.) and an output device 108 such as a display device for displaying results of the computation (e.g., a computer monitor, tablet, smartphone, head mounted device (HMD), etc.). The computer system 100 may further include or be in communication with a storage device 110 for storing data such as, but not limited to, a hard-drive, a network attached storage (NAS) device, a storage area network (SAN), etc. It must be appreciated that the term computer processor unit or processor is used herein to encompass one or more computer processor units. Where reference is made to a processor or computer processor unit, that term should be understood to encompass any of these computing arrangements.
  • Although the various steps of the method(s) are described in the above paragraphs as occurring in a certain order, the present application is not bound by the order in which the various steps occur. In fact, in alternative embodiments, the various steps can be executed in an order different from the order described above.
  • Although the invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood that such detail is solely for that purpose and that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present invention contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.
  • Furthermore, since numerous modifications and changes will readily occur to those of skill in the art, it is not desired to limit the invention to the exact construction and operation described herein. Accordingly, all suitable modifications and equivalents should be considered as falling within the spirit and scope of the invention.

Claims (52)

What is claimed:
1. A method of selecting an area of interest from a plurality of images captured by a plurality of cameras, the method being implemented by a computer system that includes one or more processor units configured to execute computer program modules, the method comprising:
receiving, by the one or more processor units, a request for an area of interest from a plurality of images, the area of interest containing an object of interest selected by a user, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions, the request containing an identity of the object of interest;
determining, by the one or more processor units, a camera within the plurality of cameras that wholly contains the area of interest using the physical location of the object of interest;
transforming, by the one or more processor units, the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest and is wholly contained within the image captured by the specific camera; and
extracting, by the one or more processor units, the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
2. The method according to claim 1, wherein the request further comprises dimensions of an output device.
3. The method according to claim 1, wherein the request is received through a network using a network protocol.
4. The method according to claim 1, wherein the plurality of cameras are static.
5. The method according to claim 1, wherein the plurality of cameras are substantially color-matched.
6. The method according to claim 1, wherein the plurality of cameras are substantially synchronized in time.
7. The method according to claim 1, wherein the plurality of cameras are arranged in a matrix configuration.
8. The method according to claim 1, further comprising processing the extracted area of interest to correct for distortion.
9. The method according to claim 1, further comprising generating a video codestream containing the area of interest by encoding the extracted area of interest into a video codestream.
10. The method according to claim 9, further comprising storing the video codestream in memory.
11. The method according to claim 10, further comprising transmitting the video codestream to the client that made the request.
12. The method according to claim 9, further comprising storing the video codestream in a storage device.
13. The method according to claim 9, further comprising displaying the video codestream containing the area of interest on a display device.
14. The method according to claim 1, further comprising repeating determining a camera within the plurality of cameras that wholly contains the area of interest using the physical location of the object of interest; transforming the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and extracting the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest until the request is cancelled or the method is terminated.
15. The method according to claim 1, further comprising extracting a plurality of areas of interest from a plurality of images captured by the plurality of cameras.
16. The method according to claim 1, further comprising determining whether the object of interest is within a field of view of at least one camera in the plurality of cameras.
17. The method according to claim 1, further comprising determining whether the area of interest is contained within at least one overlapping region in the plurality of overlapping regions.
18. The method according to claim 1, wherein the object of interest moves from a field of view of one camera in the plurality of cameras to a field of view of another camera in the plurality of cameras.
19. The method according to claim 1, wherein each camera in the plurality of cameras captures a plurality of sequential images.
20. The method according to claim 19, wherein at least one of the plurality of sequential images contains the object of interest.
21. The method according to claim 1, further comprising computing, by the one or more processor units, a physical location of the object of interest using data from a locator beacon associated with the object.
22. The method according to claim 21, wherein the locator beacon comprises a Global Positioning System (GPS) device, an Inertial Measurement Unit (IMU), an optical sensor, an image pattern, or a marking, or any combination of two or more thereof.
23. The method according to claim 1, wherein the object of interest is a football player, a soccer player, a baseball player, a car, a horse, or a projectile.
24. A method of extracting an area of interest containing an object of interest from a plurality of images captured using a plurality of cameras, the method being implemented by a computer system that includes one or more processor units configured to execute computer program modules, the method comprising:
determining, by the one or more processor units, using data from a locator or a physical location of the object of interest, a camera within the plurality of cameras that wholly contains the area of interest, the area of interest containing the object of interest, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions;
transforming, by the one or more processor units, the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and
extracting, by the one or more processor units, the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
25. The method according to claim 24, further comprising computing, by the one or more processor units, a physical location of the object of interest using data from a locator beacon associated with the object.
26. The method according to claim 25, wherein the locator beacon comprises a Global Positioning System (GPS) device, an Inertial Measurement Unit (IMU), an optical sensor, an image pattern, or a marking, or any combination of two or more thereof.
27. A computer system for selecting an area of interest from a plurality of images captured by a plurality of cameras, the system comprising one or more processor units configured to:
receive a request for an area of interest from a plurality of images, the area of interest containing an object of interest selected by a user, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions, the request containing an identity of the object of interest;
determine, using a physical location of the object of interest, a camera within the plurality of cameras that wholly contains the area of interest;
transform the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and
extract the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
28. The system according to claim 27, wherein the request further comprises dimensions of an output device.
29. The system according to claim 27, wherein the one or more processor units are configured to receive the request through a network using a network protocol.
30. The system according to claim 27, wherein the plurality of cameras are static.
31. The system according to claim 27, wherein the plurality of cameras are substantially color-matched.
32. The system according to claim 27, wherein the plurality of cameras are substantially synchronized in time.
33. The system according to claim 27, wherein the plurality of cameras are arranged in a matrix configuration.
34. The system according to claim 27, wherein the one or more processor units are configured to process the extracted area of interest to correct for distortion.
35. The system according to claim 27, wherein the one or more processor units are configured to generate a video codestream containing the area of interest by encoding the extracted area of interest into a video codestream.
36. The system according to claim 35, further comprising a memory configured to store the video codestream.
37. The system according to claim 36, wherein the one or more processor units are configured to transmit the video codestream to the client that made the request.
38. The system according to claim 35, further comprising a storage device configured to store the video codestream.
39. The system according to claim 35, wherein the one or more processor units are configured to display the video codestream containing the area of interest on a display device.
40. The system according to claim 27, wherein the one or more processor units are configured to repeat determining a camera within the plurality of cameras that wholly contains the area of interest using the physical location of the object of interest; transforming the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and extracting the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest until the request is cancelled or processing is terminated.
41. The system according to claim 27, wherein the one or more processor units are configured to extract a plurality of areas of interest from a plurality of images captured by the plurality of cameras.
42. The system according to claim 27, wherein the one or more processor units are configured to determine whether the object of interest is within a field of view of at least one camera in the plurality of cameras.
43. The system according to claim 27, wherein the one or more processor units are configured to determine whether the area of interest is contained within at least one overlapping region in the plurality of overlapping regions.
44. The system according to claim 27, wherein the object of interest moves from a field of view of one camera in the plurality of cameras to a field of view of another camera in the plurality of cameras.
45. The system according to claim 27, wherein each camera in the plurality of cameras captures a plurality of sequential images.
46. The system according to claim 45, wherein at least one of the plurality of sequential images contains the object of interest.
47. The system according to claim 27, wherein the one or more processor units are configured to compute a physical location of the object of interest using data from a locator beacon associated with the object.
48. The system according to claim 47, wherein the locator beacon comprises a Global Positioning System (GPS) device, an Inertial Measurement Unit (IMU), an optical sensor, an image pattern, or a marking, or any combination of two or more thereof.
49. The system according to claim 27, wherein the object of interest is a football player, a soccer player, a baseball player, a car, a horse, or a projectile.
50. A computer system for extracting an area of interest containing an object of interest from a plurality of images captured using a plurality of cameras, the computer system comprising one or more processor units configured to:
determine, using data from a locator or a physical location of the object of interest, a camera within the plurality of cameras that wholly contains the area of interest, the area of interest containing the object of interest, the object of interest being captured in one or more images captured by one or more of the plurality of cameras, the plurality of cameras having overlapping fields of view such that images captured by the plurality of cameras have a plurality of overlapping regions;
transform the physical location of the object of interest into a pixel position within an image of a specific camera within the plurality of cameras such that the area of interest is essentially centered around the pixel position corresponding to the physical location of the object of interest, and is wholly contained within the image captured by the specific camera; and
extract the area of interest substantially centered around the pixel position corresponding to the physical location of the object of interest.
51. The system according to claim 50, wherein the one or more processor units are configured to compute a physical location of the object of interest using data from a locator beacon associated with the object.
52. The system according to claim 51, wherein the locator beacon comprises a Global Positioning System (GPS) device, an Inertial Measurement Unit (IMU), an optical sensor, an image pattern, or a marking, or any combination of two or more thereof.
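As a further non-limiting illustration of claims 14, 16-18, and 40 above, the sketch below repeats the determining, transforming, and extracting steps as a tracked object of interest moves between overlapping fields of view; it assumes the hypothetical Camera and extract_aoi helpers from the earlier sketch, plus frames_at and location_at callables (also hypothetical) that supply synchronized camera images and the object's physical location, e.g., from a locator beacon, at each time step.

def follow_object(cameras, frames_at, location_at, aoi_w, aoi_h, num_frames):
    # Yield one area-of-interest crop per time step, re-selecting the source
    # camera whenever the object moves into another camera's field of view.
    for t in range(num_frames):
        frames = frames_at(t)        # one synchronized image per camera
        object_xy = location_at(t)   # physical location, e.g., from a GPS/IMU beacon
        crop = extract_aoi(cameras, frames, object_xy, aoi_w, aoi_h)
        if crop is not None:
            yield crop               # crops may then be encoded into a video codestream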
US16/903,224 2015-04-30 2020-06-16 Method and system of selecting a view from a plurality of cameras Abandoned US20200310630A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/903,224 US20200310630A1 (en) 2015-04-30 2020-06-16 Method and system of selecting a view from a plurality of cameras

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562155005P 2015-04-30 2015-04-30
US15/142,720 US10725631B2 (en) 2015-04-30 2016-04-29 Systems and methods of selecting a view from a plurality of cameras
US16/903,224 US20200310630A1 (en) 2015-04-30 2020-06-16 Method and system of selecting a view from a plurality of cameras

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/142,720 Continuation US10725631B2 (en) 2015-04-30 2016-04-29 Systems and methods of selecting a view from a plurality of cameras

Publications (1)

Publication Number Publication Date
US20200310630A1 true US20200310630A1 (en) 2020-10-01

Family

ID=57204928

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/142,720 Active 2036-09-12 US10725631B2 (en) 2015-04-30 2016-04-29 Systems and methods of selecting a view from a plurality of cameras
US16/903,224 Abandoned US20200310630A1 (en) 2015-04-30 2020-06-16 Method and system of selecting a view from a plurality of cameras

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/142,720 Active 2036-09-12 US10725631B2 (en) 2015-04-30 2016-04-29 Systems and methods of selecting a view from a plurality of cameras

Country Status (2)

Country Link
US (2) US10725631B2 (en)
EP (1) EP3101625A3 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015069124A1 (en) * 2013-11-08 2015-05-14 Performance Lab Technologies Limited Automated prescription of activity based on physical activity data
US10600245B1 (en) 2014-05-28 2020-03-24 Lucasfilm Entertainment Company Ltd. Navigating a virtual environment of a media content item
KR20160024143A (en) * 2014-08-25 2016-03-04 삼성전자주식회사 Method and Electronic Device for image processing
WO2017024358A1 (en) * 2015-08-13 2017-02-16 Propeller Aerobotics Pty Ltd Integrated visual geo-referencing target unit and method of operation
WO2017038541A1 (en) * 2015-09-03 2017-03-09 ソニー株式会社 Video processing device, video processing method, and program
US10051263B2 (en) * 2015-11-23 2018-08-14 Center For Integrated Smart Sensors Foundation Multi-aperture camera system using scan line processing
US10248290B2 (en) * 2016-06-14 2019-04-02 Philip Galfond Fantasy sports simulation game system and method
US10795560B2 (en) * 2016-09-30 2020-10-06 Disney Enterprises, Inc. System and method for detection and visualization of anomalous media events
JP7086522B2 (en) * 2017-02-28 2022-06-20 キヤノン株式会社 Image processing equipment, information processing methods and programs
US10698068B2 (en) 2017-03-24 2020-06-30 Samsung Electronics Co., Ltd. System and method for synchronizing tracking points
WO2018217193A1 (en) 2017-05-24 2018-11-29 Google Llc Bayesian methodology for geospatial object/characteristic detection
US10586377B2 (en) 2017-05-31 2020-03-10 Verizon Patent And Licensing Inc. Methods and systems for generating virtual reality data that accounts for level of detail
US10311630B2 (en) * 2017-05-31 2019-06-04 Verizon Patent And Licensing Inc. Methods and systems for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
WO2019040944A2 (en) 2017-08-25 2019-02-28 Agnetix, Inc. Fluid-cooled led-based lighting methods and apparatus for controlled environment agriculture
US10529074B2 (en) 2017-09-28 2020-01-07 Samsung Electronics Co., Ltd. Camera pose and plane estimation using active markers and a dynamic vision sensor
US10839547B2 (en) 2017-09-28 2020-11-17 Samsung Electronics Co., Ltd. Camera pose determination and tracking
US10417500B2 (en) 2017-12-28 2019-09-17 Disney Enterprises, Inc. System and method for automatic generation of sports media highlights
US10574975B1 (en) 2018-08-08 2020-02-25 At&T Intellectual Property I, L.P. Method and apparatus for navigating through panoramic content
CN111091025B (en) * 2018-10-23 2023-04-18 阿里巴巴集团控股有限公司 Image processing method, device and equipment
US10818077B2 (en) * 2018-12-14 2020-10-27 Canon Kabushiki Kaisha Method, system and apparatus for controlling a virtual camera
US10918945B2 (en) * 2019-03-15 2021-02-16 Sony Interactive Entertainment Inc. Methods and systems for spectating characters in follow-mode for virtual reality views
US11058950B2 (en) * 2019-03-15 2021-07-13 Sony Interactive Entertainment Inc. Methods and systems for spectating characters in virtual reality views
CN110263650B (en) * 2019-05-22 2022-02-22 北京奇艺世纪科技有限公司 Behavior class detection method and device, electronic equipment and computer readable medium
US11567788B1 (en) 2019-10-18 2023-01-31 Meta Platforms, Inc. Generating proactive reminders for assistant systems
US11861674B1 (en) 2019-10-18 2024-01-02 Meta Platforms Technologies, Llc Method, one or more computer-readable non-transitory storage media, and a system for generating comprehensive information for products of interest by assistant systems
KR20220132531A (en) * 2019-12-10 2022-09-30 아그네틱스, 인크. Multi-sensor imaging method and apparatus for controlled environment horticulture using irradiator and camera and/or sensor
EP4070009A1 (en) 2019-12-12 2022-10-12 Agnetix, Inc. Fluid-cooled led-based lighting fixture in close proximity grow systems for controlled environment horticulture
US11593951B2 (en) * 2020-02-25 2023-02-28 Qualcomm Incorporated Multi-device object tracking and localization
WO2024177746A1 (en) * 2023-01-14 2024-08-29 Radiusai, Inc. Point of sale station for assisted checkout system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8896671B2 (en) 2010-04-09 2014-11-25 3D-4U, Inc. Apparatus and method for capturing images
US9615064B2 (en) * 2010-12-30 2017-04-04 Pelco, Inc. Tracking moving objects using a camera network
GB2489675A (en) * 2011-03-29 2012-10-10 Sony Corp Generating and viewing video highlights with field of view (FOV) information
FI20125280L (en) * 2012-03-14 2013-09-15 Mirasys Business Analytics Oy METHOD AND SYSTEM FOR DETERMINING THE BEHAVIOR OF A MOVING OBJECT
US9241103B2 (en) 2013-03-15 2016-01-19 Voke Inc. Apparatus and method for playback of multiple panoramic videos with control codes
KR102105189B1 (en) * 2013-10-31 2020-05-29 한국전자통신연구원 Apparatus and Method for Selecting Multi-Camera Dynamically to Track Interested Object
US9829001B2 (en) 2014-10-23 2017-11-28 Summit Esp, Llc Electric submersible pump assembly bearing

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220295139A1 (en) * 2021-03-11 2022-09-15 Quintar, Inc. Augmented reality system for viewing an event with multiple coordinate systems and automatically generated model
US11527047B2 (en) 2021-03-11 2022-12-13 Quintar, Inc. Augmented reality system for viewing an event with distributed computing
US11645819B2 (en) 2021-03-11 2023-05-09 Quintar, Inc. Augmented reality system for viewing an event with mode based on crowd sourced images
US11657578B2 (en) 2021-03-11 2023-05-23 Quintar, Inc. Registration for augmented reality system for viewing an event
US11880953B2 (en) 2021-03-11 2024-01-23 Quintar, Inc. Augmented reality system for viewing an event with distributed computing
US12003806B2 (en) * 2021-03-11 2024-06-04 Quintar, Inc. Augmented reality system for viewing an event with multiple coordinate systems and automatically generated model
US12028507B2 (en) 2021-03-11 2024-07-02 Quintar, Inc. Augmented reality system with remote presentation including 3D graphics extending beyond frame

Also Published As

Publication number Publication date
US20160320951A1 (en) 2016-11-03
EP3101625A3 (en) 2016-12-14
EP3101625A2 (en) 2016-12-07
US10725631B2 (en) 2020-07-28

Similar Documents

Publication Publication Date Title
US20200310630A1 (en) Method and system of selecting a view from a plurality of cameras
US11283983B2 (en) System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network
US11496814B2 (en) Method, system and computer program product for obtaining and displaying supplemental data about a displayed movie, show, event or video game
US10880522B2 (en) Hybrid media viewing application including a region of interest within a wide field of view
US10805592B2 (en) Apparatus and method for gaze tracking
US7782363B2 (en) Providing multiple video perspectives of activities through a data network to a remote multimedia server for selective display by remote viewing audiences
US9591336B2 (en) Camera feed distribution from event venue virtual seat cameras
US8970666B2 (en) Low scale production system and method
JP2019159950A (en) Information processing device and information processing method
US20160205341A1 (en) System and method for real-time processing of ultra-high resolution digital video
US20190253747A1 (en) Systems and methods for integrating and delivering objects of interest in video
JP2019505111A (en) Processing multiple media streams
EP3429706B1 (en) Shared experiences in panoramic video
US9930416B1 (en) Playback device that requests and receives a user tailored video stream using content of a primary stream and an enhanced stream
Hayes Olympic games broadcasting: Immerse yourself in the olympics this summer
Hayes Immerse yourself in the Olympics this summer [Olympic Games-broadcasting]

Legal Events

Date Code Title Description
AS Assignment

Owner name: PIXIA CORP., VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ERNST, RUDOLF O.;WHITE, JAMES;REEL/FRAME:052956/0301

Effective date: 20171227

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: BARCLAYS BANK PLC, NEW YORK

Free format text: FIRST LIEN SECURITY AGREEMENT;ASSIGNORS:CUBIC CORPORATION;PIXIA CORP.;NUVOTRONICS, INC.;REEL/FRAME:056393/0281

Effective date: 20210525

Owner name: ALTER DOMUS (US) LLC, ILLINOIS

Free format text: SECOND LIEN SECURITY AGREEMENT;ASSIGNORS:CUBIC CORPORATION;PIXIA CORP.;NUVOTRONICS, INC.;REEL/FRAME:056393/0314

Effective date: 20210525

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION