WO2020247646A1 - System and method for capturing and editing video from a plurality of cameras - Google Patents


Info

Publication number
WO2020247646A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
configurations
disclosure
endpoint
event
Prior art date
Application number
PCT/US2020/036141
Other languages
French (fr)
Inventor
Michael Van Steenburg
Original Assignee
Michael Van Steenburg
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Michael Van Steenburg filed Critical Michael Van Steenburg
Publication of WO2020247646A1 publication Critical patent/WO2020247646A1/en


Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02: Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031: Electronic editing of digitised analogue information signals, e.g. audio or video signals

Definitions

  • FIGURE 2 is a simplified block diagram illustrative of a communication system 200 that can be utilized to facilitate communication between endpoint(s) 210 and endpoint(s) 220 through a communication network 230, according to particular embodiments of the disclosure.
  • any of such communication may occur in the manner described below or other manners.
  • the endpoints may generally correspond to any particular component described herein (or combination of components) communicating with another component or combination of components.
  • endpoint may generally refer to any object, device, software, or any combination of the preceding that is generally operable to communicate with and/or send information to another endpoint.
  • the endpoint(s) may represent a user, which in turn may refer to a user profile representing a person.
  • the user profile may comprise, for example, a string of characters, a user name, a passcode, other user information, or any combination of the preceding.
  • the endpoint(s) may represent a device that comprises any hardware, software, firmware, or combination thereof operable to communicate through the communication network 230.
  • Examples of endpoint(s) include, but are not necessarily limited to, those devices described herein: a computer or computers (including servers, application servers, enterprise servers, desktop computers, laptops, netbooks, and tablet computers (e.g., IPAD)), a switch, mobile phones (e.g., including IPHONE and Android-based phones), networked televisions, networked watches, networked glasses, networked disc players, components in a cloud-computing network, or any other device or component of such device suitable for communicating information to and from the communication network 230.
  • Endpoints may support Internet Protocol (IP) or other suitable communication protocols.
  • endpoints may additionally include a medium access control (MAC) and a physical layer (PHY) interface that conforms to IEEE 802.11.
  • the device may have a device identifier such as the MAC address and may have a device profile that describes the device.
  • the endpoint may have a variety of applications or“apps” that can selectively communicate with certain other endpoints upon being activated.
  • the communication network 230 and links 215, 225 to the communication network 230 may include, but is not limited to, a public or private data network; a local area network (LAN); a metropolitan area network (MAN); a wide area network (WAN); a wireline or wireless network (WIFI, GSM, CDMA, LTE,WIMAX, BLUETOOTH or the like); a local, regional, or global communication network; portions of a cloud-computing network; a communication bus for components in a system; an optical network; a satellite network; an enterprise intranet; other suitable communication links; or any combination of the preceding. Yet additional methods of communications will become apparent to one of ordinary skill in the art after having read this specification.
  • information communicated between one endpoint and another may be communicated through a heterogeneous path using different types of communications. Additionally, certain information may travel from one endpoint to one or more intermediate endpoint before being relayed to a final endpoint. During such routing, select portions of the information may not be further routed. Additionally, an intermediate endpoint may add additional information.
  • although an endpoint generally appears as being in a single location, the endpoint(s) may be geographically dispersed, for example, in cloud computing scenarios. In such cloud computing scenarios, an endpoint may shift hardware during back-up.
  • endpoint may refer to each member of a set or each member of a subset of a set.
  • when endpoint(s) 210, 220 communicate with one another, any of a variety of security schemes may be utilized.
  • endpoint(s) 220 may represent a client and endpoint(s) 210 may represent a server in a client-server architecture.
  • the server and/or servers may host a website.
  • the website may have a registration process whereby the user establishes a username and password to authenticate or log in to the website.
  • the website may additionally utilize a web application for any particular application or feature that may need to be served up to the website for use by the user.
  • FIGURE 3 is an embodiment of a general-purpose computer 310 that may be used in connection with other embodiments of the disclosure to carry out any of the above-referenced functions and/or serve as a computing device for endpoint(s) 210 and endpoint(s) 220.
  • in executing the functions described above with reference to FIGURE 1, the computer is able to do things it previously could not do.
  • General purpose computer 310 may generally be adapted to execute any of the known OS2, UNIX, Mac-OS, Linux, Android and/or Windows Operating Systems or other operating systems.
  • the general-purpose computer 310 in this embodiment includes a processor 312, random access memory (RAM) 314, a read only memory (ROM) 316, a mouse 318, a keyboard 320 and input/output devices such as a printer 324, disk drives 322, a display 326 and a communications link 328.
  • the general-purpose computer 310 may include more, less, or other component parts.
  • Embodiments of the present disclosure may include programs that may be stored in the RAM 314, the ROM 316 or the disk drives 322 and may be executed by the processor 312 in order to carry out functions described herein.
  • the communications link 328 may be connected to a computer network or a variety of other communicative platforms including, but not limited to, a public or private data network; a local area network (LAN); a metropolitan area network (MAN); a wide area network (WAN); a wireline or wireless network; a local, regional, or global communication network; an optical network; a satellite network; an enterprise intranet; other suitable communication links; or any combination of the preceding.
  • Disk drives 322 may include a variety of types of storage media such as, for example, floppy disk drives, hard disk drives, CD ROM drives, DVD ROM drives, magnetic tape drives or other suitable storage media. Although this embodiment employs a plurality of disk drives 322, a single disk drive 322 may be used without departing from the scope of the disclosure.
  • while FIGURE 3 provides one embodiment of a computer that may be utilized with other embodiments of the disclosure, such other embodiments may additionally utilize computers other than general-purpose computers, as well as general-purpose computers without conventional operating systems. Additionally, embodiments of the disclosure may also employ multiple general-purpose computers 310 or other computers networked together in a computer network.
  • the computers 310 may be servers or other types of computing devices. Most commonly, multiple general-purpose computers 310 or other computers may be networked through the Internet and/or in a client server network. Embodiments of the disclosure may also be used with a combination of separate computer networks each linked together by a private or a public network.
  • the logic includes computer software executable on the general-purpose computer 310.
  • the medium may include the RAM 314, the ROM 316, the disk drives 322, or other mediums.
  • the logic may be contained within hardware configuration or a combination of software and hardware configurations.
  • the logic may also be embedded within any other suitable medium without departing from the scope of the disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure provides a system and method for capturing and editing video from a plurality of cameras.

Description

SYSTEM AND METHOD FOR CAPTURING AND EDITING
VIDEO FROM A PLURALITY OF CAMERAS
TECHNICAL FIELD
[0001] This disclosure is generally directed to video capture systems. More specifically, this disclosure is directed to a system and method for capturing and editing video from a plurality of cameras.
BACKGROUND
[0002] Video cameras are becoming more and more pervasive at events. iPhones and Android phones are providing better and better quality video. At events, hundreds - if not thousands - of individual video cameras can be seen capturing footage. Each individual generally has no incentive to share his or her footage with others at the event - especially amongst others the individual does not know.
SUMMARY OF THE DISCLOSURE
[0003] Given the shortcomings described herein, the disclosure provides a system and method for capturing and editing video from a plurality of cameras.
[0004] Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation; the term “or” is inclusive, meaning and/or; the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A; B; C; A and B; A and C; B and C; and A and B and C. Definitions for certain words and phrases are provided throughout this patent document; those of ordinary skill in the art should understand that in many if not most instances, such definitions apply to prior, as well as future, uses of such defined words and phrases.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] For a more complete understanding of this disclosure and its features, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
[0006] FIGURE 1 illustrates a high-level block diagram, according to an embodiment of the disclosure;
[0007] FIGURE 2 shows a simplified block diagram illustrative of a communication system that can be utilized to facilitate communication between endpoint(s) through a communication network, according to particular embodiments of the disclosure; and
[0008] FIGURE 3 is an embodiment of a general- purpose computer that may be used in connection with other embodiments of the disclosure to carry out any of the above-referenced functions and/or serve as a computing device for endpoint(s).
DETAILED DESCRIPTION
[0009] The FIGURES described below, and the various embodiments used to describe the principles of the present disclosure in this patent document, are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any type of suitably arranged device or system. Additionally, the drawings are not necessarily drawn to scale.
[0010] With reference to FIGURE 1, at a public event, the video footage shot from the multiple different devices/users is uploaded to a remote computer system (e.g., through a mobile app or a web interface) that utilizes software to intelligently compile and edit the video together from the different device/user perspectives to form a single cohesive composite video of the event. Audio, such as the common soundtrack recorded along with the video on all devices, is used as the video encoding/timing track (e.g., to synchronize the video). One or more composite videos are then generated that can be shared back to the original contributors, other app users, event performers/hosts, and the general public through an ad-supported platform or a paid subscription ad-free platform.
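The audio-based synchronization above can be illustrated with a minimal sketch. This is not the disclosure's implementation: it assumes raw waveform cross-correlation of each clip against a common soundtrack (a production system would more likely correlate normalized spectral fingerprints), and the sample counts are illustrative only:

```python
import numpy as np

def estimate_offset(reference: np.ndarray, clip: np.ndarray) -> int:
    """Estimate where `clip` begins inside `reference` by cross-correlating
    the two audio tracks; the lag with the highest correlation is taken
    as the alignment point for the contributed video."""
    corr = np.correlate(reference, clip, mode="valid")
    return int(np.argmax(corr))

# Illustrative data: a contributor's clip starting 4,800 samples
# (0.1 s at 48 kHz) into the event's common soundtrack.
rng = np.random.default_rng(0)
reference = rng.standard_normal(48_000)  # stand-in for the shared soundtrack
clip = reference[4_800:24_000]           # one device's shorter recording
offset = estimate_offset(reference, clip)
```

The recovered offset (in samples) converts to a timestamp by dividing by the sample rate, which is what lets footage from unrelated devices be cut onto a common timeline.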
[0011] Each particular contribution may have an associated unique identifier tied to the contributing device and/or user, which is secured and stored accordingly using blockchain technology methods.
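The disclosure does not specify a particular blockchain scheme; as a hedged sketch, one minimal realization ties each contribution's identifier and content digest to the previous record in a hash chain, so altering any earlier segment invalidates every later link (the identifiers and payloads below are hypothetical):

```python
import hashlib
import json

def chain_contribution(prev_hash: str, contributor_id: str, video_bytes: bytes) -> str:
    """Append one contribution to a simple hash chain by hashing the
    previous link together with the contributor's unique identifier
    and the SHA-256 digest of the video itself."""
    record = {
        "prev": prev_hash,
        "contributor": contributor_id,
        "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Illustrative chain of two contributions.
genesis = "0" * 64
h1 = chain_contribution(genesis, "device-A", b"clip-one-bytes")
h2 = chain_contribution(h1, "device-B", b"clip-two-bytes")
```

Verifying a composite video then reduces to recomputing the chain: any tampered segment changes its digest and breaks every hash after it.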
[0012] In certain configurations, crowdsourced video editors (popularity and pay percentage based on user feedback) edit the videos and add their own fingerprint/style to the composite video to give the finished work a professional DJ/VJ-style human touch.
[0013] When the video is shared with the public and attracts ads, the editors, contributors, and event hosts/speakers/musicians receive residuals from advertising or subscription revenue based on their portion of the aggregate video contribution/portion time durations.
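The residual allocation in [0013] amounts to a pro-rata split over contribution durations. The following sketch shows one straightforward proportional scheme; the participant names and amounts are illustrative, not from the disclosure:

```python
def split_residuals(revenue: float, contribution_seconds: dict[str, float]) -> dict[str, float]:
    """Split a revenue pool among participants in proportion to the
    seconds each one's material occupies in the composite video."""
    total = sum(contribution_seconds.values())
    return {name: revenue * secs / total for name, secs in contribution_seconds.items()}

# Illustrative only: a 150-second composite and $1,000 of ad revenue.
shares = split_residuals(1000.0, {"contributor-A": 90.0, "contributor-B": 30.0, "editor": 30.0})
```

A real system would layer the feedback-based editor percentages mentioned in [0012] on top of this base split.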
[0014] According to particular configurations, each video contribution can be a blockchain itself, along with the finished aggregated video as a composite blockchain, so the authenticity of each segment is always verified and fake videos/images are eliminated.
[0015] In particular configurations, watchers can also select the point of view they want to view the event from or allow the video to play as a curated work of art. For example, a curated work of art may be created by one or more composite video directors/editors or may be automatically generated using artificial intelligence algorithms that take into consideration the quality of each video stream, GPS-based location, lighting or event special effects, and traditional event video editing styles.
[0016] In particular configurations, more than one composite edit of a particular event may be available to view, and a user can dynamically switch between such edits. According to such configurations, the number of viewed minutes (or seconds) of an edit may be stored for both reward and display purposes. For example, the public may choose to autopilot the most viewed edit. Where multiple views are available, the system may likewise keep track of the number of minutes (or seconds) viewed for a particular user.
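The viewed-time bookkeeping described above can be sketched as a small ledger keyed by user and edit; the class and method names here are assumptions for illustration, not from the disclosure:

```python
from collections import defaultdict

class ViewLedger:
    """Track viewed seconds per (user, edit) and per edit overall,
    for both reward and display purposes."""

    def __init__(self) -> None:
        self.by_user = defaultdict(int)  # (user_id, edit_id) -> seconds
        self.by_edit = defaultdict(int)  # edit_id -> seconds

    def record(self, user_id: str, edit_id: str, seconds: int) -> None:
        self.by_user[(user_id, edit_id)] += seconds
        self.by_edit[edit_id] += seconds

    def most_viewed_edit(self) -> str:
        """The edit an 'autopilot' viewer would be switched to."""
        return max(self.by_edit, key=self.by_edit.get)

# Illustrative viewing sessions.
ledger = ViewLedger()
ledger.record("alice", "edit-1", 120)
ledger.record("bob", "edit-2", 300)
ledger.record("alice", "edit-2", 60)
```

The per-edit totals drive the "autopilot the most viewed edit" behavior, while the per-user entries feed the reward accounting.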
[0017] In particular configurations, a particular view may be a combination of views that are used together to provide a single 3D virtual reality enhanced view. In such configurations, the relative contribution can be considered for crediting a contributor.
[0018] The website/app can also have specific event categories like Sports, Music, Theater, Politics, Corporate, Crime, Vehicle Dash Cams, etc.
[0019] To allow capture and uploading, in certain configurations, an application may be loaded onto a mobile device with video capture capabilities (e.g., an iPhone or Android-based phone). In other configurations, an application may not be used to provide a video contribution (e.g., a GoPro camera uploading to a web-based system). In some configurations, the system may receive both application uploads and web-based system uploads. The common soundtrack may beneficially be used by pure video capture systems that do not otherwise have the ability to communicate with other devices (or capture geolocations).
[0020] In configurations where the devices have more enhanced capabilities, additional information can be gathered and/or utilized. For example, a device may determine its location and report such information along with the video footage. Any suitable geolocation capabilities may be utilized, including GPS and cell-positioning techniques. In configurations in which devices are GPS enabled, the GPS time may also be used as a primary or secondary time track.
[0021] In configurations where devices have the capability to communicate with one another, an enhanced geolocation of such devices with respect to one another may be determined and/or used to enhance capture techniques. For example, during capture, different devices may determine not only their location but also calculated distances from other devices (e.g., as may be determined by any suitable time of transmission/response technique or other suitable distance determination technique, which can use, for example, Bluetooth or WiFi). The system can use the reported geolocation and distances (for a more granular edit) to determine a switched perspective. For the overall benefit of capturing from multiple different devices, the system may also recommend to the user recording the video an alternate location to record the event from, or an altered video capture mode that allows for a better quality/different output - for example, as may be determined from the close location of multiple devices.
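A time-of-transmission/response distance estimate of the kind mentioned above can be sketched as follows. This assumes a radio round trip (e.g., Bluetooth or WiFi) with a known responder processing delay; in practice, calibrating that delay dominates the error budget:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_rtt(rtt_seconds: float, processing_delay_s: float = 0.0) -> float:
    """Estimate device-to-device distance from a radio round-trip time:
    subtract the responder's known processing delay, halve the remainder
    to get the one-way flight time, and multiply by propagation speed."""
    one_way_s = (rtt_seconds - processing_delay_s) / 2.0
    return one_way_s * SPEED_OF_LIGHT_M_S
```

For example, a 200 ns round trip with negligible processing delay corresponds to roughly 30 m of separation, which is granular enough to inform the switched-perspective editing described above.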
[0022] While a soundtrack has been described in certain configurations as the relative timestamp amongst contributions, in other configurations ultrasonic tones or sounds (imperceptible to human ears, but capable of being picked up by a device with a microphone) may be used as a synchronizing mechanism. As a non-limiting example, at an event where a common human-perceptible soundtrack is not detected by the application through which the video is recorded, the application loaded on a phone can listen for other ultrasonic tracks or initiate its own, for example, when one is not detected. In particular configurations, multiple different soundtracks may be played at different frequencies. Each particular ultrasonic tone may bear a unique identifier associated with the user and/or device and/or both.
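One way to give each device a distinguishable ultrasonic identifier, as suggested in [0022], is to assign it a dedicated frequency slot in the near-ultrasonic band. The base frequency, slot width, and sample rate below are assumptions for illustration, not values from the disclosure:

```python
import math

ULTRASONIC_BASE_HZ = 18_000  # assumed bottom of the near-inaudible band
SLOT_WIDTH_HZ = 50           # assumed spacing between device slots
SAMPLE_RATE = 48_000         # common recording rate; keeps slots below Nyquist

def tone_frequency(device_id: int) -> int:
    """Map a device's numeric identifier to a dedicated ultrasonic
    frequency slot so tones from different devices do not collide."""
    return ULTRASONIC_BASE_HZ + device_id * SLOT_WIDTH_HZ

def generate_tone(device_id: int, duration_s: float) -> list[float]:
    """Generate the raw samples of that device's identifying tone."""
    freq = tone_frequency(device_id)
    n = int(duration_s * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]
```

A receiving application can then run a narrow-band filter over each slot to detect which devices are nearby and when their tones arrived.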
[0023] In configurations using such ultrasonic tones, devices that don't have communication or geolocation capabilities can nonetheless capture the tones (e.g., emitted by other devices). These captured tones (in turn) can be used by the system for not only time-stamps but also geolocation calculations. For example, as to geolocation calculation, the system recognizes the unique identifier in one or more different ultrasonic emissions and places the camera in proximity. Where multiple different ultrasonic tones are emitted from different devices, the capture of different tones may be used in a triangulation effort - using techniques similar to both GPS and cell-tower triangulation, but with ultrasonic emissions detected in audio (and recorded in the non-human-perceptible audio track).
[0024] In an even more enhanced embodiment (e.g., where an attachment is used on the device), the same concept used in ultrasonic communications can be used in broadcasting infrared light that, while undetectable to the human eye, can be captured by a video device and used by the video system to discern communicated information (such as time-stamping and location).
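The ultrasonic triangulation of [0023] can be sketched as multilateration: given the known positions of tone-emitting devices and each tone's acoustic travel time, the receiver's position falls out of a linearized least-squares solve. The emitter layout and the speed of sound are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air

def locate_from_tone_delays(emitters: np.ndarray, delays_s: np.ndarray) -> np.ndarray:
    """Estimate a receiver's 2-D position from the travel times of
    ultrasonic tones emitted by devices at known (x, y) positions.
    Subtracting the first range equation from the others linearizes
    the system, which is then solved by least squares."""
    ranges = delays_s * SPEED_OF_SOUND_M_S
    (x0, y0), r0 = emitters[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(emitters[1:], ranges[1:]):
        rows.append([2 * (xi - x0), 2 * (yi - y0)])
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return solution

# Example: three emitting devices at known positions, receiver at (3, 4).
emitters = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
delays = np.linalg.norm(emitters - np.array([3.0, 4.0]), axis=1) / SPEED_OF_SOUND_M_S
position = locate_from_tone_delays(emitters, delays)
```

With only relative (time-difference-of-arrival) measurements rather than absolute delays, the same idea applies but needs one more emitter, as in GPS.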
[0025] As a recapitulation of certain features, attendees to an event contribute with content they capture with their devices. Devices capable of geolocation and supporting an “app” or application (e.g., most mobile phones) use geolocation to relate content from users that attend the same event.
[0026] According to particular configurations, editors, human or artificial, will edit recordings and create a clean video that offers multiple camera views of the same event. The viewer of the event can immerse himself in the event by selecting the view he wants. In certain configurations, the view is a 3-D playback generated from the multiple camera views of the event that are combined together.
[0027] According to particular configurations, profits will be shared with the editors, attendees (and also the artists) based on advertising revenue or subscription revenues paid in relation to the video views. The attendees/contributors with a higher score (e.g., as may be measured by, but is not limited to, a higher quality of shared content) will be ranked and paid commensurate with their feedback ranking. Yet other metrics may include the amount of time content is actually viewed by watchers of the content, even if the entire video is not viewed completely.
[0028] The app/website may sell publicity as broadcast sponsorship (non-invasive, geo-located banners per viewer location). To facilitate editing, the system may also use objective measures to provide editors with the recordings having the best video and sound quality. In enhanced configurations, video clips may be auto-edited using artificial intelligence that receives feedback from editors who have popular content.
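The "objective measures" mentioned above could be sketched as simple no-reference quality heuristics, for example gradient-based sharpness for video and a clipping penalty for audio. The specific measures, the 1000.0 normalization constant and the equal weights are assumptions for illustration only.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of the gradient magnitude: a common no-reference focus measure."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def audio_quality(samples: np.ndarray) -> float:
    """Penalize clipping: the fraction of samples NOT at full scale, in [0, 1].
    `samples` are assumed to be floats normalized to [-1, 1]."""
    clipped = np.mean(np.abs(samples) >= 0.999)
    return float(1.0 - clipped)

def clip_score(frame, samples, w_video=0.5, w_audio=0.5):
    """Blend normalized video and audio measures (weights are assumptions)."""
    return w_video * min(sharpness(frame) / 1000.0, 1.0) + w_audio * audio_quality(samples)
```

Ranking the recordings by `clip_score` would let the system surface the best-captured footage to editors first.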
[0029] In particular configurations, the users of the app would be able to virtually attend a live event for a fixed price or basic subscription and would have access to professionally edited events or exclusive events for a premium subscription.
[0030] The app, in particular configurations, may also have a "TV schedule" of upcoming events that will allow users to see what is coming. The schedule may be organized by genre, location, user favorites, user history and editors' accounts.
In addition to the contributions by attendees, the artists themselves will be able to contribute "behind the scenes" content before, during and after the event. The artists may consent for the app to locate 360-degree cameras at the events.
[0031] While certain configurations described herein have been discussed as providing viewing after the event, certain configurations may provide real-time or near-real-time viewing (depending on the quality of the captures and the communication bandwidth of the devices).
While general events have been described in certain configurations, in other configurations individuals and businesses can share their security videos, personal video capture devices or vehicle cameras to a server and help people and police prevent crime or resolve disputes. If their blockchained videos are instrumental in catching a criminal or resolving a dispute, then the contributors share in the reward offered by the police, family or insurance company involved. [0032] FIGURES 2 and 3 describe non-limiting examples of communications and computers that may be utilized in conjunction with the concepts described with reference to FIGURE 1.
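As to the "blockchained videos" mentioned above, a minimal sketch of the underlying idea is a hash chain: each entry commits to both the video bytes and the previous entry, so altering or reordering any clip invalidates every later hash. This sketch omits the distributed-consensus aspects of a full blockchain and is an illustration only.

```python
import hashlib

def chain_videos(video_blobs, genesis=b"\x00" * 32):
    """Build a hash chain over video byte strings: entry i commits to the
    SHA-256 of clip i and to entry i-1, anchoring everything to `genesis`."""
    chain = []
    prev = genesis
    for blob in video_blobs:
        digest = hashlib.sha256(prev + hashlib.sha256(blob).digest()).digest()
        chain.append(digest)
        prev = digest
    return chain

def verify(video_blobs, chain, genesis=b"\x00" * 32):
    """Recompute the chain and compare; True only if no clip was altered or reordered."""
    return chain_videos(video_blobs, genesis) == chain
```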
[0033] FIGURE 2 is a simplified block diagram illustrative of a communication system 200 that can be utilized to facilitate communication between endpoint(s) 210 and endpoint(s) 220 through a communication network 230, according to particular embodiments of the disclosure. When referencing communication, for example, showing arrows, "clouds," or "networks," any of such communication may occur in the manner described below or in other manners. Likewise, the endpoints may generally correspond to any two particular components described (or combination of components) with another component or combination of components.
[0034] As used herein, "endpoint" may generally refer to any object, device, software, or any combination of the preceding that is generally operable to communicate with and/or send information to another endpoint. In certain configurations, the endpoint(s) may represent a user, which in turn may refer to a user profile representing a person. The user profile may comprise, for example, a string of characters, a user name, a passcode, other user information, or any combination of the preceding. Additionally, the endpoint(s) may represent a device that comprises any hardware, software, firmware, or combination thereof operable to communicate through the communication network 230.
[0035] Examples of an endpoint include, but are not necessarily limited to, those devices described herein: a computer or computers (including servers, application servers, enterprise servers, desktop computers, laptops, netbooks and tablet computers (e.g., IPAD)), a switch, mobile phones (e.g., including IPHONE and Android-based phones), networked televisions, networked watches, networked glasses, networked disc players, components in a cloud-computing network, or any other device or component of such device suitable for communicating information to and from the communication network 230. Endpoints may support Internet Protocol (IP) or other suitable communication protocols. In particular configurations, endpoints may additionally include a medium access control (MAC) and a physical layer (PHY) interface that conforms to IEEE 802.11. If the endpoint is a device, the device may have a device identifier, such as the MAC address, and may have a device profile that describes the device. In certain configurations, where the endpoint represents a device, such device may have a variety of applications or "apps" that can selectively communicate with certain other endpoints upon being activated.
[0036] The communication network 230 and the links 215, 225 to the communication network 230 may include, but are not limited to, a public or private data network; a local area network (LAN); a metropolitan area network (MAN); a wide area network (WAN); a wireline or wireless network (WIFI, GSM, CDMA, LTE, WIMAX, BLUETOOTH or the like); a local, regional, or global communication network; portions of a cloud-computing network; a communication bus for components in a system; an optical network; a satellite network; an enterprise intranet; other suitable communication links; or any combination of the preceding. Yet additional methods of communication will become apparent to one of ordinary skill in the art after having read this specification. In particular configurations, information communicated between one endpoint and another may be communicated through a heterogeneous path using different types of communications. Additionally, certain information may travel from one endpoint to one or more intermediate endpoints before being relayed to a final endpoint. During such routing, select portions of the information may not be further routed. Additionally, an intermediate endpoint may add additional information.
[0037] Although an endpoint generally appears as being in a single location, the endpoint(s) may be geographically dispersed, for example, in cloud-computing scenarios. In such cloud-computing scenarios, an endpoint may shift hardware during a back-up. As used in this document, "each" may refer to each member of a set or each member of a subset of a set.
[0038] When the endpoint(s) 210, 220 communicate with one another, any of a variety of security schemes may be utilized. As an example, in particular embodiments, endpoint(s) 210 may represent a client and endpoint(s) 220 may represent a server in a client-server architecture. The server and/or servers may host a website. And, the website may have a registration process whereby the user establishes a username and password to authenticate or log in to the website. The website may additionally utilize a web application for any particular application or feature that may need to be served to the user through the website.
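A minimal sketch of such a registration and login scheme is shown below, using salted PBKDF2 password hashing. The in-memory dictionary, iteration count and helper names are illustrative assumptions; a real deployment would use a database and a vetted authentication framework.

```python
import hashlib
import hmac
import os

_users = {}  # username -> (salt, password_hash); stands in for a database

def register(username: str, password: str) -> None:
    """Store a salted PBKDF2-SHA256 hash of the password, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _users[username] = (salt, digest)

def authenticate(username: str, password: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    if username not in _users:
        return False
    salt, digest = _users[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```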
[0039] A variety of embodiments disclosed herein may benefit from the above-referenced communication system or other communication systems.
[0040] FIGURE 3 is an embodiment of a general-purpose computer 310 that may be used in connection with other embodiments of the disclosure to carry out any of the above-referenced functions and/or to serve as a computing device for endpoint(s) 210 and endpoint(s) 220. In executing the functions described above with reference to FIGURE 1, the computer is able to do things it previously could not do.
[0041] General purpose computer 310 may generally be adapted to execute any of the known OS2, UNIX, Mac-OS, Linux, Android and/or Windows Operating Systems or other operating systems. The general-purpose computer 310 in this embodiment includes a processor 312, random access memory (RAM) 314, a read only memory (ROM) 316, a mouse 318, a keyboard 320 and input/output devices such as a printer 324, disk drives 322, a display 326 and a communications link 328. In other embodiments, the general-purpose computer 310 may include more, less, or other component parts. Embodiments of the present disclosure may include programs that may be stored in the RAM 314, the ROM 316 or the disk drives 322 and may be executed by the processor 312 in order to carry out functions described herein. The communications link 328 may be connected to a computer network or a variety of other communicative platforms including, but not limited to, a public or private data network; a local area network (LAN); a metropolitan area network (MAN); a wide area network (WAN); a wireline or wireless network; a local, regional, or global communication network; an optical network; a satellite network; an enterprise intranet; other suitable communication links; or any combination of the preceding. Disk drives 322 may include a variety of types of storage media such as, for example, floppy disk drives, hard disk drives, CD ROM drives, DVD ROM drives, magnetic tape drives or other suitable storage media. Although this embodiment employs a plurality of disk drives 322, a single disk drive 322 may be used without departing from the scope of the disclosure.
[0042] Although FIGURE 3 provides one embodiment of a computer that may be utilized with other embodiments of the disclosure, such other embodiments may additionally utilize computers other than general purpose computers as well as general purpose computers without conventional operating systems. Additionally, embodiments of the disclosure may also employ multiple general-purpose computers 310 or other computers networked together in a computer network. The computers 310 may be servers or other types of computing devices. Most commonly, multiple general-purpose computers 310 or other computers may be networked through the Internet and/or in a client server network. Embodiments of the disclosure may also be used with a combination of separate computer networks each linked together by a private or a public network.
[0043] Several embodiments of the disclosure may include logic contained within a medium. In the embodiment of FIGURE 3, the logic includes computer software executable on the general-purpose computer 310. The medium may include the RAM 314, the ROM 316, the disk drives 322, or other mediums. In other embodiments, the logic may be contained within hardware configuration or a combination of software and hardware configurations.
[0044] The logic may also be embedded within any other suitable medium without departing from the scope of the disclosure.
[0045] It will be understood that well-known processes have not been described in detail and have been omitted for brevity. Although specific steps, structures and materials may have been described, the present disclosure may not be limited to these specifics, and others may substitute as is well understood by those skilled in the art, and various steps may not necessarily be performed in the sequences shown.
[0046] While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.

Claims

WHAT IS CLAIMED IS:
1. The system shown and described.
PCT/US2020/036141 2019-06-04 2020-06-04 System and method for capturing and editing video from a plurality of cameras WO2020247646A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962857239P 2019-06-04 2019-06-04
US62/857,239 2019-06-04

Publications (1)

Publication Number Publication Date
WO2020247646A1 true WO2020247646A1 (en) 2020-12-10


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160048313A1 (en) * 2014-08-18 2016-02-18 KnowMe Systems, Inc. Scripted digital media message generation
US20160055884A1 (en) * 2011-06-03 2016-02-25 Michael Edward Zaletel Method and apparatus for dynamically recording, editing and combining multiple live video clips and still photographs into a finished compostion
US20160337548A1 (en) * 2015-05-14 2016-11-17 Calvin Osborn System and Method for Capturing and Sharing Content
US20180357483A1 (en) * 2008-11-17 2018-12-13 Liveclips Llc Method and system for segmenting and transmitting on-demand live-action video in real-time


