US20170237896A1 - System and Method for Preserving Video Clips from a Handheld Device - Google Patents


Info

Publication number
US20170237896A1
Authority
US
United States
Prior art keywords
video
buffer
processor
frames
user command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/390,652
Other versions
US9900497B2 (en)
Inventor
Albert Tsai
David Yip
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fast Model Technology LLC
Fastmodel Holdings LLC
Original Assignee
Fast Model Technology LLC
Fastmodel Holdings LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fast Model Technology LLC, Fastmodel Holdings LLC
Priority to US15/390,652
Assigned to FAST MODEL TECHNOLOGY LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TSAI, Albert, YIP, DAVID
Publication of US20170237896A1
Assigned to FASTMODEL HOLDINGS LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: YIP, DAVID, TSAI, Albert
Assigned to FASTMODEL HOLDINGS LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FASTMODEL TECHNOLOGY LLC
Application granted
Publication of US9900497B2
Expired - Fee Related (current)
Anticipated expiration

Classifications

    • H04N5/23216
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72519
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32358 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device using picture signal storage, e.g. at transmitter
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/22 Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/52 Details of telephonic subscriber devices including functional features of a camera

Abstract

A system and method for recording video that combines video capture, touch-screen and voice-control technologies into an integrated system that produces cleanly edited, short-duration, compliant video files that exactly capture a moment after it has actually occurred. The present invention maintains the device in a ready state that is always ready to capture video up to N seconds or minutes in the past (where N depends on available system memory). This enables the user to run the system indefinitely without having to worry about running out of storage. Touch gestures and voice commands initiate captures without the user having to monitor the system itself, allowing complete focus on the live action. When the user sees something happen, he or she can use an appropriate voice or touch command to cause the system to create a video media file based on time points derived from the user's commands.

Description

  • This is a continuation-in-part of application Ser. No. 15/148,788 filed May 6, 2016 which claimed priority to U.S. Provisional patent application No. 62/158,835 filed May 8, 2015. This application also claims priority from U.S. Provisional patent application No. 62/387,350 filed Dec. 26, 2015. Applications Ser. No. 15/148,788, Ser. No. 62/158,835 and Ser. No. 62/387,350 are hereby incorporated by reference in their entireties.
  • FIELD OF THE INVENTION
  • The present invention relates generally to video and more particularly to a system and method where a user can create a video clip of reasonable size from a handheld device using gestures, voice commands or virtual buttons.
  • DESCRIPTION OF THE PROBLEM
  • The proliferation of mobile devices outfitted with on-board video cameras has caused many people to become part-time videographers, trying to capture the next viral video that will provide somebody their fifteen minutes of fame.
  • The difficulty with using current mobile video technology is that it requires a combination of good luck (right place, right time), patience (actively recording for long periods of time hoping something will happen) and skill (actually completing the capture and then editing it down to a manageably-sized video file that can be consumed by an audience) to capture the “moments” everybody is interested in. More often than not, the best moments get missed—either because the user cannot initiate a capture in time, or because finding and extracting the relevant content out of a very large recording is too cumbersome and difficult.
  • SUMMARY OF THE INVENTION
  • Rather than relying on a planned (i.e. staged) or predictive (i.e. anticipating when something will happen and initiating a recording session) approach to video capture, the present invention relates to a reactive method for recording video that combines video capture, touch-screen and voice-control technologies into an integrated system that produces cleanly edited, short-duration, compliant video files that exactly capture a moment after it has actually occurred. The present invention maintains the device in a ready state that is always ready to capture video up to N minutes in the past (where N depends on available system memory). This enables the user to run the system indefinitely without having to worry about running out of storage. Touch gestures and voice commands initiate captures without the user having to monitor the system itself, thereby allowing complete focus on the live action. When the user sees something happen, he or she can use an appropriate voice or touch command to cause the system to create a video media file based on time points derived from the user's commands.
  • DESCRIPTION OF THE FIGURES
  • The following drawings illustrate features of the present invention:
  • FIG. 1 shows a schematic diagram of an embodiment of the invention.
  • FIG. 2 shows a block diagram of the embodiment of FIG. 1.
  • FIGS. 3A-3D show buffer data structures used to perform captures.
  • FIG. 4 shows a fixed camera embodiment of the invention.
  • FIG. 5 shows a wearable camera embodiment of the invention.
  • Several drawings and illustrations have been presented to aid in understanding the present invention. The scope of the present invention is not limited to what is shown in the figures.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The present invention relates to a system and method for capturing video moments on a smartphone or other handheld device. It is not limited to handheld devices and will work with any electronic device or computer having a camera. The invention works by allowing the video camera to run while buffering the incoming video stream in a system of buffers. A user indicates by touch gesture or voice command when to start a clip. The clip is typically preset to a certain number of seconds or minutes. The invention can then receive metadata concerning the clip and create one of various types of standard video output files.
  • The present invention, rather than relying on a planned or predictive approach to video capture, uses a reactive method for recording video that combines video capture, touch-screen and voice-control technologies into an integrated system that produces cleanly edited, short-duration, compliant video files that exactly capture a moment after it has actually happened.
  • The present invention places the device in a state that is always ready to capture video up to N seconds or minutes in the past (where N depends on available system memory and can be set by the user). This enables the user to run the system indefinitely without having to worry about running out of storage.
  • The user relies on touch gestures and voice commands to initiate captures without having to monitor the system itself, thereby allowing complete focus on the live action. When the user sees something happen, he or she can use the appropriate voice or touch command to cause the system to create a video media file based on time points derived from the user's commands.
  • For example:
    • Tap Gesture: create a video file going back N seconds (user configurable).
    • Press and Hold Gesture: create a video file going back N seconds and ending when the user releases the hold gesture.
    • Voice Command: attach a meta-data tag to video clip for automated classification or create a video file going back N seconds.
  • The resulting video is automatically trimmed and tagged with metadata (time, location and user-defined tags) and immediately available for consumption (e.g. playback, social sharing).
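  • As a rough illustration of how such commands could be translated into capture time points, consider the Python sketch below. It is not part of the patent disclosure; the names (CaptureRequest, on_tap, on_press, on_release, on_voice_tag) and the structure are assumptions made purely for explanation.

```python
# Hypothetical sketch of mapping user commands to clip time points.
# Not the disclosed implementation; names and structure are assumed.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CaptureRequest:
    start_time: float                    # absolute time of the first frame to keep
    end_time: Optional[float] = None     # None while a press-and-hold is still open
    tags: List[str] = field(default_factory=list)


def on_tap(now: float, n_seconds: float) -> CaptureRequest:
    # Tap gesture: clip covering the previous N seconds, ending at the tap.
    return CaptureRequest(start_time=now - n_seconds, end_time=now)


def on_press(now: float, n_seconds: float) -> CaptureRequest:
    # Press-and-hold: clip starts N seconds back; end is filled in on release.
    return CaptureRequest(start_time=now - n_seconds)


def on_release(request: CaptureRequest, now: float) -> CaptureRequest:
    request.end_time = now
    return request


def on_voice_tag(request: CaptureRequest, tag: str) -> CaptureRequest:
    # Voice command: attach a meta-data tag for automated classification.
    request.tags.append(tag)
    return request
```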
  • Operation
  • The present invention uses a rolling buffer set including a main buffer and a pre-roll buffer to provide unlimited operation against a finite amount of memory. On startup, both the main and pre-roll buffers are allocated in storage, typically disk storage or its equivalent, and left in a write-ready state. The system interacts with the video camera to store the video stream into the main buffer first. As the main buffer approaches capacity, the system switches to the pre-roll buffer without dropping video frames. The pre-roll buffer is designed to hold only a few seconds of video, just enough time for the main buffer to be closed and re-opened for writing (i.e. purged). If there are any pieces of the main buffer that need to be preserved based on a requested clipping action, those pieces are excised and preserved before the purge operation. When the main buffer is empty and ready, the system redirects the video stream back to the main buffer without losing a frame, thereby allowing the pre-roll buffer to be flushed and made ready again.
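  • The hand-off between the main and pre-roll buffers might be pictured with the minimal Python sketch below. It is only an illustration under assumptions: a real implementation would write encoded video to the disk-backed buffers described above rather than in-memory lists, and the class and method names (RollingBufferSet, write_frame, purge_main) are invented for this example.

```python
# Minimal sketch of the main / pre-roll rotation (assumed names; in-memory
# lists stand in for the disk-backed buffers described in the text).
class RollingBufferSet:
    def __init__(self, main_capacity: int, preroll_capacity: int):
        self.main = []                    # disk-backed file in a real system
        self.preroll = []                 # holds only a few seconds of video
        self.main_capacity = main_capacity
        self.preroll_capacity = preroll_capacity
        self.writing_to_main = True

    def write_frame(self, frame) -> None:
        if self.writing_to_main:
            self.main.append(frame)
            if len(self.main) >= self.main_capacity:
                # Main buffer is full: flush the old pre-roll contents (the full
                # system first saves off any frames still referenced by a
                # pending capture) and divert the stream to the pre-roll buffer
                # so no frames are dropped while the main buffer is purged.
                self.preroll = []
                self.writing_to_main = False
        else:
            self.preroll.append(frame)

    def purge_main(self, preserve) -> None:
        # `preserve` is a callable that copies out any spans of the main buffer
        # still needed by a requested clipping action before they are lost.
        preserve(self.main)
        self.main = []                    # close and re-open for writing (purge)
        # Redirect the stream back to the now-empty main buffer; the pre-roll
        # buffer is flushed the next time the main buffer reaches capacity.
        self.writing_to_main = True
```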
  • Because the pre-roll buffer is not flushed until the next time the main buffer reaches capacity, a capture triggered near the start of the main buffer can still reach back into the pre-roll buffer if it has to. The captures are generally small enough that they can be stitched back together very quickly before adding the resulting video clip to the library. It is important that pieces of buffers are kept until it is certain they are not needed as part of a capture before they are flushed and lost forever. This is done with reference counting, which takes place at all times to ensure that any piece of buffered video that might be needed for a future capture is saved off prior to flushing.
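  • One simple way to realize that bookkeeping is per-segment reference counting, sketched below. This is an assumption about one possible scheme, not the disclosed implementation; the BufferSegment name and fields are hypothetical.

```python
# Hypothetical per-segment reference counting: a span of buffered frames may
# only be flushed once no pending capture still needs it.
class BufferSegment:
    def __init__(self, frames):
        self.frames = frames
        self.refcount = 0        # number of pending captures referencing this span

    def retain(self) -> None:
        self.refcount += 1       # a capture was requested that overlaps this span

    def release(self) -> None:
        self.refcount -= 1       # that capture has been written out (or cancelled)

    def flushable(self) -> bool:
        return self.refcount == 0
```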
  • The present invention uses a physically-backed buffer approach because the uncompressed video streams usually require too much memory to cover any significant amount of time, especially on a mobile device where battery life, cost and memory are at a premium.
  • Both the touch-screen and microphone are listening in parallel for input that will establish the start and end points needed to reconstruct a separate clip from the main buffer and possibly pre-roll buffer.
  • Turning to FIG. 1, a schematic diagram of an embodiment of the present invention can be seen. A user holds his smartphone or other handheld device 1 and the application (App) allows video to stream. The scene 3 is continually streamed into storage in the device 1. The scene 3 can also be seen on the touch screen 2. When the user realizes that an event has occurred that he or she desires to capture, either the virtual button 4 is touched, or a voice command is given through the microphone 5. A clip of N seconds (or minutes) is then extracted from the stored video and converted to a file for further use. The number N can be preset using a supplied setup menu.
  • FIG. 2 shows a block diagram of the embodiment. A processor 20 which typically resides in the handheld device 1 can be any type of processor capable of executing stored instructions. The processor 20 is electrically connected to storage 22 which can be any type of memory device including internal RAM, disk, flash or plug-in storage, or any other type of storage. Part of the storage 22 contains the executable instructions for downloading and running the App. The processor 20 is also electrically connected to various input/output ports 25 that allow a file 30 to be transferred. These include plug-in Universal Serial Bus ports 33 and other ports, WiFi access 31 and 3G-5G or greater cellular telephone access 32. The processor 20 is also electrically connected to a microphone 24 and a touch screen 23. A video camera 28 supplies electrical signals representing streamed video to the processor 20. The processor 20 executes stored instructions that read the touch screen 23 for touch commands 27 and interpret voice commands 26 from the microphone 24.
  • The user, upon seeing that a capture is desired, either touches a virtual button on the touch screen 23 executing a touch command 27, or issues a voice command 26 through the microphone 24. This causes the App's executing instructions to save a portion of the video that is N seconds (or minutes) previous to the command. The App creates one of many different types of video files known in the art including wave files, flash video files, MPEG files and other standard files.
  • FIGS. 3A-3D show an embodiment of a buffering and counting arrangement that can be used with the present invention. A main buffer 30 typically receives incoming streaming video from the camera 21. A pre-roll buffer 31 remains empty until the main buffer has reached capacity. Video input then switches to the pre-roll buffer while the main buffer is purged.
  • FIG. 3A shows the situation where a user has placed a capture or clip mark 32 at a certain point in time in the main buffer. In this case, the main buffer contains the entire N seconds of the clip. The video in the clip region is removed from the main buffer and separately processed into an output file.
  • FIG. 3B shows the case where the main buffer is purging, and the entire clip region resides in the pre-roll buffer. This case is rare since the pre-roll buffer is typically much smaller than the main buffer. In this case, the clip is removed from the pre-roll buffer for processing before the pre-roll buffer is over-written.
  • FIG. 3C shows the case where the end of the clip (the clip mark) is in the main buffer; however, the beginning of the clip is in the pre-roll buffer. Here the two parts of the clip must be stitched together to form an entire clip. This is done before the pre-roll buffer over-writes and before the main buffer is again purged.
  • FIG. 3D shows the opposite case where the clip mark representing the end of the clip is in the pre-roll buffer and the beginning of the clip is in the main buffer. Here, again both parts of the clip must be removed from the buffers and stitched together. This must be done quickly before the pre-roll buffer over-writes. Also, there must be enough time to purge the main buffer. The main buffer is typically purged by closing it and re-opening it using operating system commands under an Application Programming Interface (API).
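  • All four cases of FIGS. 3A-3D reduce to gathering whichever frames of the requested window lie in each buffer and stitching them in time order before either buffer is purged or over-written. The Python sketch below illustrates that reduction under the assumption that each stored frame object carries a timestamp; it is an explanatory example, not the patented implementation.

```python
# Sketch: assemble a clip whose window may lie entirely in the main buffer
# (FIG. 3A), entirely in the pre-roll buffer (FIG. 3B), or be split across the
# two (FIGS. 3C and 3D). Frames are assumed to carry a .timestamp attribute.
def extract_clip(main_frames, preroll_frames, start_time, end_time):
    def in_window(frame):
        return start_time <= frame.timestamp <= end_time

    pieces = [f for f in main_frames if in_window(f)]
    pieces += [f for f in preroll_frames if in_window(f)]
    # Stitch: order by timestamp regardless of which buffer each piece came from.
    return sorted(pieces, key=lambda f: f.timestamp)
```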
  • The present invention can be used with any computing device, fixed or handheld. The preferred device is a smartphone with a camera and touch screen. The operating system can be any that supports streaming video, such as iOS by Apple, Android (which is open-source), Windows CE and Windows Phone 8, or any other operating system.
  • Fixed Camera Based Version Problem:
  • Advances in technology coupled with falling hardware costs have led to the availability of small fixed video cameras that can be used to capture and stream video for a variety of purposes. These “always-on”, “always-recording” devices can be used in different domains to provide “hands-free” vantage points to a user that would otherwise be impossible to acquire if the user had to both direct and record an activity himself.
  • The general problem with video footage acquired via these streamed cameras is that the user almost always ends up with too much video. Over the course of an entire streamed session, there are typically only a few “moments” that are of any actual value. While there are various approaches to extracting these moments from the video session (e.g. manual post-editing, heuristic-based post video analysis), existing solutions are unacceptable because they require enough storage to store the whole stream and they introduce a necessary delay between when a moment happens and when it can be first analyzed, watched or distributed.
  • Solution:
  • Rather than store the entire streamed session, the present invention uses a reactive method for recording video that combines video capture, remote-control and voice-control technologies into an integrated system that produces cleanly edited, short-duration, compliant video files that exactly capture a moment after it has actually happened.
  • The approach puts the fixed camera in a ready state that is always ready to capture video up to N minutes in the past (depending on available system memory). This enables the user to stream and acquire video from the camera indefinitely without having to worry about running out of storage.
  • The user relies on small controls on the fixed device itself or voice commands to actually initiate capture of a moment. When the user senses something is about to happen (or even after the moment has happened), they can use the appropriate voice or touch command to have the system create a video media file based on time points derived from the user's commands:
      • Event Command (via direct control or voice control): create a video file going back N seconds (user configurable)
      • Continuing Event Command (via direct control or voice control): create a video file going back N seconds and ending when the user releases the hold gesture
      • Voice Command: attach meta-data tag to video clip for automated classification
  • The resulting video is automatically trimmed, tagged with meta-data (time, location and user-defined tags) and immediately available for consumption (e.g. playback, social sharing).
  • FIG. 4 shows a sketch of this embodiment. A fixed camera 20 continuously photographs a scene 3. The camera has a lens 21 and usually a view screen 22. The camera can also have a microphone and one or more external controls 23. The camera can receive either voice commands from the microphone, or commands via one of the controls 23 to save a video clip. Meta-data can be entered by voice from the microphone.
  • Approach:
  • The present system uses a rolling buffer approach (comprised of a main and a pre-roll buffer) to provide unlimited operation against a finite amount of memory. On startup, both the main and pre-roll buffer are allocated on disk and left in a write-ready state. The system interacts with the video camera to lay the video stream against the main buffer first. As the main buffer approaches capacity, the system switches to the pre-roll buffer without dropping video frames. The pre-roll buffer is designed to hold only a few seconds, just enough time for the main buffer to be closed and re-opened for writing. If there are any pieces of the main buffer that need to be preserved based on a requested clipping action, those pieces are excised and preserved. When the main buffer is ready, the system redirects the video stream back to the main buffer (thereby allowing the pre-roll to be flushed and be made ready again).
  • The system takes a physically-backed buffer approach because the video streams require too much memory to store uncompressed in order to cover any significant amount of time, especially on a small fixed camera where power consumption and cost of memory are concerns.
  • There is reference counting occurring at all times to ensure that any piece of buffered video that might be needed for a “future” clip is saved off prior to flushing.
  • In parallel, on-board processors are “listening” or “watching” for command input that will be used to establish the start and end points needed to reconstruct a separate clip from the main (and possibly pre-roll) buffer.
  • Wearable Based Version Problem:
  • Wearable video cameras that provide a user with the ability to easily acquire first-person point of view video can be extremely useful. Video acquired from wearables today is great for documenting and archiving memories. These types of videos are also tremendously useful from a teaching and training perspective.
  • In general, these wearable cameras work by either recording an entire session (e.g. a bike ride) to local storage on the wearable itself or by streaming a raw continuous video feed to a nearby connected device that can store the video feed itself. Unfortunately, the general problem with video footage acquired via these wearable cameras (whether stored locally or on a connected device) is that the user almost always ends up with too much video. Over the course of an entire streamed session, there are typically only a few “moments” that are of any actual value. While there are various approaches to extracting these moments from the video session (e.g. manual post-editing, heuristic-based post video analysis), existing solutions are unacceptable because they require enough storage to store the whole stream and they introduce a necessary delay between when a moment happens and when it can be first analyzed, watched or distributed.
  • Solution:
  • Rather than store the entire streamed session, the present invention uses a reactive method for recording video that combines video capture, remote-control, gesture-recognition and voice-control technologies into an integrated system that produces cleanly edited, short-duration, compliant video files that exactly capture a moment after it has actually happened.
  • The approach puts the camera in a ready state that is always ready to capture video up to N minutes in the past (depending on available system memory). This enables the user to stream and acquire video from the camera indefinitely without having to worry about running out of storage.
  • The user relies on gestures recognized by the camera, voice commands, or a button on the camera to actually initiate capture of a moment. When the user senses something is about to happen (or even after the moment has happened), they can use the appropriate voice or touch command to have the system create a video media file based on time points derived from the user's commands:
      • Event Command (via gesture, voice or remote control): create a video file going back N seconds (user configurable)
      • Continuing Event Command (via gesture, voice or remote control): create a video file going back N seconds and ending when the user releases the hold gesture
      • Voice Command: attach meta-data tag to video clip for automated classification
  • The resulting video is automatically trimmed, tagged with meta-data (time, location and user-defined tags) and immediately available for consumption (e.g. playback, social sharing).
  • FIG. 5 shows a sketch of this embodiment. A user 30 wears a camera 31 on clothing or otherwise. The camera 31 continuously records a scene 3. A voice command or a gesture 31 to the camera causes the camera to save the video clip. Meta-data can be entered by voice through the microphone.
  • Approach:
  • The system uses a rolling buffer approach (comprised of a main and a pre-roll buffer) to provide unlimited operation against a finite amount of memory. On startup, both the main and pre-roll buffer are allocated on disk and left in a write-ready state. The system interacts with the video camera to lay the video stream against the main buffer first. As the main buffer approaches capacity, the system switches to the pre-roll buffer without dropping video frames. The pre-roll buffer is designed to hold only a few seconds, just enough time for the main buffer to be closed and re-opened for writing. If there are any pieces of the main buffer that need to be preserved based on a requested clipping action, those pieces are excised and preserved. When the main buffer is ready, the system redirects the video stream back to the main buffer (thereby allowing the pre-roll to be flushed and be made ready again).
  • The system takes a physically-backed buffer approach because the video streams require too much memory to store uncompressed in order to cover any significant amount of time, especially on a small wearable camera where power consumption and cost of memory are concerns.
  • There is reference counting occurring at all times to ensure that any piece of buffered video that might be needed for a “future” clip is saved off prior to flushing.
  • In parallel, on-board processors are “listening” or “watching” for command input that will be used to establish the start and end points needed to reconstruct a separate clip from the main (and possibly pre-roll) buffer.
  • A more general system is a fixed, handheld, smartphone or wearable video capture system including a processor and memory executing stored executable instructions from the memory. The instructions are configured to allow the processor to interact with a video camera, a button or a touch screen or a microphone on a fixed or wearable device. The stored instructions cause incoming streaming video from the video camera to be stored sequentially in a rolling video buffer, the rolling buffer including K fixed-length buffers in the memory P1, . . . PK starting with buffer P1, where K is a positive integer, and when any buffer Pn of the fixed-length buffers is filled, the processor begins to store incoming streaming video into buffer Pn+1, unless Pn=PK, in which case, the processor begins to store incoming streaming video into buffer P1, and wherein, the processor continually cycles through the plurality of buffers storing L seconds of incoming video, where L is a positive integer.
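  • Viewed abstractly, the K fixed-length buffers P1, . . . PK behave like a ring. The short sketch below illustrates the cycling rule just described (fill Pn, move on to Pn+1, wrap from PK back to P1); it is an illustration of the described behavior, with invented names, not code from the patent.

```python
# Illustrative ring of K fixed-length buffers P1..PK (names assumed).
class BufferRing:
    def __init__(self, k: int, frames_per_buffer: int):
        self.buffers = [[] for _ in range(k)]     # P1..PK
        self.frames_per_buffer = frames_per_buffer
        self.current = 0                          # index of the buffer being written

    def write_frame(self, frame) -> None:
        self.buffers[self.current].append(frame)
        if len(self.buffers[self.current]) >= self.frames_per_buffer:
            # Pn is full: continue with Pn+1, wrapping from PK back to P1 and
            # overwriting the oldest buffered video.
            self.current = (self.current + 1) % len(self.buffers)
            self.buffers[self.current] = []
```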
  • The executable instructions are configured to allow the processor to receive a user command from the button or the touch screen or the microphone at a particular time to save video frames from N seconds of previous video streaming, where N is a positive integer less than or equal to L, and to copy these frames to a video output file.
  • The processor locates a first address in the fixed-length buffers where a video frame is stored that begins at time N seconds before the particular time of the user command, and locates a second address in the fixed-length buffers representing a time when the user command was issued. The processor then copies all frames between the first address and the second address to the video output file.
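  • A minimal sketch of that lookup-and-copy step follows, reusing the BufferRing layout from the previous example. It assumes each stored frame carries a timestamp and its encoded bytes, and that simply concatenating frame data stands in for muxing into a real container format; all names are hypothetical.

```python
# Sketch: find the frames from N seconds before the command up to the command
# time and copy them to an output file (frame objects assumed to expose
# .timestamp and .data; a real system would mux into MPEG or another container).
def save_clip(ring, command_time: float, n_seconds: float, out_path: str) -> str:
    first_time = command_time - n_seconds       # "first address" in time terms
    frames = [f for buf in ring.buffers for f in buf
              if first_time <= f.timestamp <= command_time]
    frames.sort(key=lambda f: f.timestamp)
    with open(out_path, "wb") as out:
        for frame in frames:
            out.write(frame.data)
    return out_path
```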
  • Several descriptions and illustrations have been presented to aid in understanding the present invention. One with skill in the art will realize that numerous changes and variations may be made without departing from the spirit of the invention. Each of these changes and variations is within the scope of the present invention.

Claims (17)

We claim:
1. A fixed or wearable video capture system including a processor and memory executing stored executable instructions from the memory, the instructions configured to allow the processor to interact with a video camera, a button or a touch screen or a microphone on a fixed or wearable device, the stored computer instructions causing incoming streaming video from the video camera to be stored sequentially in a rolling video buffer, the rolling buffer including K fixed-length buffers in the memory P1, . . . PK starting with buffer P1, where K is a positive integer, and wherein, when any buffer Pn of the fixed-length buffers is filled, the processor begins to store incoming streaming video into buffer Pn+1, unless Pn=PK, in which case, the processor begins to store incoming streaming video into buffer P1, and wherein, the processor continually cycles through the plurality of buffers storing L seconds of incoming video, where L is a positive integer;
the executable instructions configured to allow the processor to receive a user command from the button or the touch screen or the microphone at a particular time to save video frames from N seconds of previous video streaming where N is a positive integer less than or equal to L, and to copy these frames to a video output file;
the processor locating a first address in the plurality of fixed-length buffers where a video frame is stored that begins at time N seconds before the particular time of the user command, and locating a second address in the fixed-length buffers representing a time when the user command was issued, the processor then copying all frames between the first address and the second address to the video output file.
2. The video capture system of claim 1 wherein the user command is actuated by touching the touch screen.
3. The video capture system of claim 1 wherein the user command is actuated by a button on said fixed or wearable device.
4. The video capture system of claim 1 wherein the user command is a spoken command into said microphone.
5. The video capture system of claim 1 wherein the user command is a hold and release touch to the touch screen, wherein the system only begins to save frames from N seconds before when the user holds a portion of the touch screen and stops saving frames when the user releases the portion of the touch screen.
6. The video capture system of claim 1 wherein K=2, and the rolling video buffer includes a main buffer P1 and a pre-roll buffer P2, wherein the main buffer P1 stores frames until a predetermined capacity is reached whereupon frames are stored in the pre-roll buffer P2 while the main buffer P1 is purged.
7. The video capture system of claim 6 wherein if there is any data in the main buffer to be preserved based on the user command, that data is excised from the main buffer and separately stored before the main buffer is purged.
8. The video capture system of claim 7 wherein if a first saved frame is in the pre-roll buffer and a last saved frame is in the main buffer, all saved frames between the first saved frame and the last saved frame are stitched together to form a saved clip, the stitching being performed before the pre-roll buffer over-writes and before the main buffer is purged.
9. The video capture system of claim 1 wherein the video output file is transmitted over a network to a remote computing device.
10. The video capture system of claim 1 wherein streaming video and user command is transmitted over a network to a remote location, the processor, memory, rolling buffers being located at the remote location.
11. An N second video capture system comprising:
a handheld, fixed or wearable device that includes:
a video camera;
a microphone or button;
the device also having a processor executing instructions stored in a memory that recognize a user command from the button or the microphone, and upon receiving the user command copy streaming video frames from the camera into a video output file starting N seconds before the user command, where N is a positive integer;
the system storing streaming video frames into a main buffer and a pre-roll buffer in the memory, the rolling video buffer including a main buffer and a pre-roll buffer, wherein the main buffer stores frames until a predetermined capacity is reached whereupon frames are stored in the pre-roll buffer while the main buffer is purged;
and wherein, if a first saved frame is in the pre-roll buffer and a last saved frame is in the main buffer, all saved frames between the first saved frame and the last saved frame are stitched together to form a saved clip, the stitching being performed before the pre-roll buffer over-writes and before the main buffer is purged, the saved clip being written into a video output file.
12. The video capture system of claim 11 wherein the user command is actuated by pressing the button.
13. The video capture system of claim 11 wherein the user command is a spoken command into the microphone.
14. The video capture system of claim 11 wherein the user command is a hold and release touch to the touch screen, wherein the system begins to save frames from N seconds before when the user holds the button and stops saving frames when the user releases the button.
15. A video capture system for a smartphone having a processor, memory, a video camera, a touch screen and a microphone, the system comprising instructions stored in the memory that execute on the processor causing the video camera to stream video frame by frame into a rolling buffer in the memory;
the rolling buffer including K fixed-length buffers P1, . . . , PK in the memory, starting with buffer P1, where K is a positive integer, and wherein, when any buffer Pn of the fixed-length buffers is filled, the processor begins to store incoming streaming video into buffer Pn+1, unless Pn=PK, in which case the processor begins to store incoming streaming video into buffer P1, and wherein the processor continually cycles through the plurality of buffers storing L seconds of incoming video, where L is a positive integer;
the executable instructions configured to allow the processor to receive a user command from the touch screen or the microphone at a particular time to save video frames from the preceding N seconds of video streaming, where N is a positive integer less than or equal to L, and to copy these frames to a video output file;
the processor locating a first address in the plurality of fixed-length buffers where a video frame is stored that begins at time N seconds before the particular time of the user command, and locating a second address in the fixed-length buffers representing a time when the user command was issued, the processor then copying all frames between the first address and the second address to the video output file.
16. The video capture system of claim 15 wherein the user command is a touch to a virtual button on the touch screen.
17. The video capture system of claim 16 wherein the user command is a tap gesture on the touch screen.
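
The following is a minimal Python sketch of the K-buffer rolling capture recited in claims 1 and 15, offered only as an illustration: the class and parameter names (RollingVideoBuffer, frames_per_segment) are not from the patent, and frame timestamps stand in for the buffer addresses that the claims recite.

import time


class RollingVideoBuffer:
    """K fixed-length segments P1..PK; when one fills, writing moves to the
    next, wrapping from PK back to P1, so roughly the last L seconds of
    streaming video are always retained."""

    def __init__(self, k_segments=4, frames_per_segment=150):
        self.k = k_segments
        self.cap = frames_per_segment
        self.segments = [[] for _ in range(k_segments)]  # P1 .. PK
        self.current = 0  # segment currently being written

    def add_frame(self, frame_bytes, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        if len(self.segments[self.current]) >= self.cap:
            # advance to the next segment, wrapping PK -> P1, purging it first
            self.current = (self.current + 1) % self.k
            self.segments[self.current] = []
        self.segments[self.current].append((ts, frame_bytes))

    def frames_in_order(self):
        # yield all buffered frames, oldest segment first, newest last
        for i in range(self.k):
            yield from self.segments[(self.current + 1 + i) % self.k]

    def save_last_n_seconds(self, n_seconds, command_time=None):
        """On a user command, gather every buffered frame from the N seconds
        preceding the command; a real system would encode these into a
        video output file rather than return them as a list."""
        end = time.time() if command_time is None else command_time
        start = end - n_seconds
        return [frame for ts, frame in self.frames_in_order() if start <= ts <= end]

In use, a caller would add frames at the camera's frame rate and, on the user command, call save_last_n_seconds(10) to obtain roughly the last ten seconds of buffered frames for writing to the output file.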
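
A similar illustrative sketch of the two-buffer (K = 2) main/pre-roll arrangement of claims 6 through 8 and claim 11, again with assumed names (MainPreRollBuffer, save_clip); the stitching step here is simply a concatenation of the older buffer's frames ahead of the newer buffer's frames before filtering to the requested N seconds.

import time


class MainPreRollBuffer:
    """Two-buffer (K = 2) variant: frames fill one buffer until a set
    capacity, then fill the other while the first is purged. A save
    command stitches the older buffer's frames ahead of the newer
    buffer's frames to form the saved clip."""

    def __init__(self, capacity=300):
        self.capacity = capacity
        self.buffers = {"main": [], "pre_roll": []}  # P1 and P2
        self.writing = "main"  # buffer currently being filled

    def add_frame(self, frame_bytes, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        if len(self.buffers[self.writing]) >= self.capacity:
            # switch buffers, purging the one we switch to before reuse
            self.writing = "pre_roll" if self.writing == "main" else "main"
            self.buffers[self.writing] = []
        self.buffers[self.writing].append((ts, frame_bytes))

    def save_clip(self, n_seconds, command_time=None):
        """Stitch frames from the last N seconds across both buffers,
        older (non-writing) buffer first so frames stay in time order."""
        end = time.time() if command_time is None else command_time
        start = end - n_seconds
        older = "pre_roll" if self.writing == "main" else "main"
        stitched = self.buffers[older] + self.buffers[self.writing]
        return [frame for ts, frame in stitched if start <= ts <= end]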
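
Finally, a short sketch of the hold-and-release command of claims 5 and 14, assuming a rolling buffer object that exposes a frames_in_order() method yielding time-ordered (timestamp, frame) pairs as in the first sketch; the on_hold and on_release handlers are hypothetical names for whatever touch or button events the device delivers.

import time


class HoldAndReleaseRecorder:
    """Hold marks the start of interest, release ends the clip; the saved
    clip runs from N seconds before the hold until the release."""

    def __init__(self, rolling_buffer, n_seconds):
        self.buffer = rolling_buffer
        self.n = n_seconds
        self.hold_time = None

    def on_hold(self, when=None):
        self.hold_time = time.time() if when is None else when

    def on_release(self, when=None):
        release_time = time.time() if when is None else when
        start = self.hold_time - self.n  # N seconds before the hold
        return [frame for ts, frame in self.buffer.frames_in_order()
                if start <= ts <= release_time]
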
US15/390,652 2015-05-08 2016-12-26 System and method for preserving video clips from a handheld device Expired - Fee Related US9900497B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/390,652 US9900497B2 (en) 2015-05-08 2016-12-26 System and method for preserving video clips from a handheld device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201562158835P 2015-05-08 2015-05-08
US201562387350P 2015-12-26 2015-12-26
US15/148,788 US9848120B2 (en) 2015-05-08 2016-05-06 System and method for preserving video clips from a handheld device
US15/390,652 US9900497B2 (en) 2015-05-08 2016-12-26 System and method for preserving video clips from a handheld device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/148,788 Continuation US9848120B2 (en) 2015-05-08 2016-05-06 System and method for preserving video clips from a handheld device

Publications (2)

Publication Number Publication Date
US20170237896A1 true US20170237896A1 (en) 2017-08-17
US9900497B2 US9900497B2 (en) 2018-02-20

Family

ID=57324940

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/148,788 Expired - Fee Related US9848120B2 (en) 2015-05-08 2016-05-06 System and method for preserving video clips from a handheld device
US15/390,652 Expired - Fee Related US9900497B2 (en) 2015-05-08 2016-12-26 System and method for preserving video clips from a handheld device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/148,788 Expired - Fee Related US9848120B2 (en) 2015-05-08 2016-05-06 System and method for preserving video clips from a handheld device

Country Status (1)

Country Link
US (2) US9848120B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9848120B2 (en) * 2015-05-08 2017-12-19 Fast Model Technology Llc System and method for preserving video clips from a handheld device
US10999503B2 (en) 2017-07-12 2021-05-04 Amazon Technologies, Inc. Pre-roll image capture implemented by a power limited security device
US10834357B2 (en) 2018-03-05 2020-11-10 Hindsight Technologies, Llc Continuous video capture glasses
JP7164465B2 (en) * 2019-02-21 2022-11-01 i-PRO株式会社 wearable camera

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5886739A (en) * 1993-11-01 1999-03-23 Winningstad; C. Norman Portable automatic tracking video recording system
JP4186293B2 (en) * 1999-02-10 2008-11-26 株式会社ニコン Electronic camera
JP4564810B2 (en) * 2004-09-14 2010-10-20 キヤノン株式会社 Imaging device
US7757049B2 (en) * 2006-11-17 2010-07-13 Oracle America, Inc. Method and system for file access using a shared memory
JP5495602B2 (en) * 2009-04-02 2014-05-21 オリンパスイメージング株式会社 Imaging apparatus and imaging method
JP5481493B2 (en) * 2009-11-11 2014-04-23 パナソニック株式会社 ACCESS DEVICE, INFORMATION RECORDING DEVICE, CONTROLLER, REAL TIME INFORMATION RECORDING SYSTEM, ACCESS METHOD, AND PROGRAM
JP5526727B2 (en) * 2009-11-20 2014-06-18 ソニー株式会社 Image processing apparatus, image processing method, and program
KR101692398B1 (en) * 2010-08-30 2017-01-03 삼성전자주식회사 Digital photographing apparatus and control method thereof
CA2810991C (en) * 2010-09-09 2016-06-21 Nec Corporation Storage system
US8442265B1 (en) * 2011-10-19 2013-05-14 Facebook Inc. Image selection from captured video sequence based on social components
US9286641B2 (en) * 2011-10-19 2016-03-15 Facebook, Inc. Automatic photo capture based on social components and identity recognition
US9317458B2 (en) * 2012-04-16 2016-04-19 Harman International Industries, Incorporated System for converting a signal
US20140244921A1 (en) * 2013-02-26 2014-08-28 Nvidia Corporation Asymmetric multithreaded fifo memory
US9066007B2 (en) * 2013-04-26 2015-06-23 Skype Camera tap switch
JP6134803B2 (en) * 2013-09-12 2017-05-24 日立マクセル株式会社 Video recording apparatus and camera function control program
US9544649B2 (en) * 2013-12-03 2017-01-10 Aniya's Production Company Device and method for capturing video
JP6741582B2 (en) * 2014-03-19 2020-08-19 コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツングConti Temic microelectronic GmbH Method for storing camera image data in vehicle accident data memory
US10846257B2 (en) * 2014-04-01 2020-11-24 Endance Technology Limited Intelligent load balancing and high speed intelligent network recorders
US10455150B2 (en) * 2014-05-27 2019-10-22 Stephen Chase Video headphones, systems, helmets, methods and video content files
US9787887B2 (en) * 2015-07-16 2017-10-10 Gopro, Inc. Camera peripheral device for supplemental audio capture and remote control of camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160093335A1 (en) * 2014-09-30 2016-03-31 Apple Inc. Time-Lapse Video Capture With Temporal Points Of Interest
US20160180886A1 (en) * 2014-12-19 2016-06-23 Facebook, Inc. Systems and methods for combining drawing and videos prior to buffer storage
US20170117018A1 (en) * 2014-12-19 2017-04-27 Facebook, Inc. Systems and methods for combining drawings and videos prior to buffer storage
US20160344924A1 (en) * 2015-05-08 2016-11-24 Albert Tsai System and Method for Preserving Video Clips from a Handheld Device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170358322A1 (en) * 2007-03-07 2017-12-14 Operem, Llc Method and apparatus for initiating a live video stream transmission
US10847184B2 (en) * 2007-03-07 2020-11-24 Knapp Investment Company Limited Method and apparatus for initiating a live video stream transmission
CN111464862A (en) * 2020-04-24 2020-07-28 张咏 Video screenshot method based on voice recognition and image processing
US20220004773A1 (en) * 2020-07-06 2022-01-06 Electronics And Telecommunications Research Institute Apparatus for training recognition model, apparatus for analyzing video, and apparatus for providing video search service
US11886499B2 (en) * 2020-07-06 2024-01-30 Electronics And Telecommunications Research Institute Apparatus for training recognition model, apparatus for analyzing video, and apparatus for providing video search service

Also Published As

Publication number Publication date
US9848120B2 (en) 2017-12-19
US9900497B2 (en) 2018-02-20
US20160344924A1 (en) 2016-11-24

Similar Documents

Publication Publication Date Title
US9900497B2 (en) System and method for preserving video clips from a handheld device
US11038939B1 (en) Analyzing video, performing actions, sending to person mentioned
US10320876B2 (en) Media production system with location-based feature
US10021302B2 (en) Video recording method and device
WO2017107441A1 (en) Method and device for capturing continuous video pictures
US10225598B2 (en) System and method for visual editing
RU2628108C2 (en) Method of providing selection of video material episode and device for this
CN111522432A (en) Capturing media content according to viewer expressions
CN106412645B (en) To the method and apparatus of multimedia server uploaded videos file
US9966110B2 (en) Video-production system with DVE feature
US11570415B2 (en) Methods, systems, and media for generating a summarized video using frame rate modification
CN104125388A (en) Method for shooting and storing photos and device thereof
US20170244879A1 (en) Automatic Switching Multi-Video
CN104918101B (en) A kind of method, playback terminal and the system of automatic recording program
CN105338259A (en) Video merging method and device
CN102905102A (en) Screen capturing video player and screen capturing method
WO2014110055A1 (en) Mixed media communication
JP2016063477A (en) Conference system, information processing method and program
US20160104507A1 (en) Method and Apparatus for Capturing Still Images and Truncated Video Clips from Recorded Video
WO2015131700A1 (en) File storage method and device
US9413960B2 (en) Method and apparatus for capturing video images including a start frame
CN112422808B (en) Photo acquisition method, media object processing device and electronic equipment
CN203984554U (en) Synchronized video recording and multi-media communication interactive device

Legal Events

Date Code Title Description
AS Assignment

Owner name: FAST MODEL TECHNOLOGY LLC, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSAI, ALBERT;YIP, DAVID;REEL/FRAME:040817/0060

Effective date: 20161231

AS Assignment

Owner name: FASTMODEL HOLDINGS LLC, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSAI, ALBERT;YIP, DAVID;SIGNING DATES FROM 20170718 TO 20170915;REEL/FRAME:043736/0661

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: FASTMODEL HOLDINGS LLC, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FASTMODEL TECHNOLOGY LLC;REEL/FRAME:044886/0454

Effective date: 20180208

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220220