US20190206445A1 - Systems and methods for generating highlights for a video - Google Patents


Info

Publication number
US20190206445A1
US20190206445A1 (Application US16/296,574)
Authority
US
United States
Prior art keywords
video
user
moment
interaction
criterion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/296,574
Inventor
Joven Matias
Tyler Gee
James Balnaves
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GoPro Inc
Original Assignee
GoPro Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GoPro Inc filed Critical GoPro Inc
Priority to US16/296,574
Assigned to GOPRO, INC. reassignment GOPRO, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GEE, TYLER, BALNAVES, JAMES, MATIAS, JOVEN
Publication of US20190206445A1
Assigned to GOPRO, INC. reassignment GOPRO, INC. RELEASE OF PATENT SECURITY INTEREST Assignors: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06K9/00718
    • G06K9/2081
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection

Definitions

  • This disclosure relates to systems and methods that generate highlights for a video.
  • Video applications may allow a user to manually identify highlight moments within a video. Manually identifying highlight moments may be time consuming and may discourage users from identifying highlight moments.
  • a video may be accessed. Criteria for identifying a moment of interest within the video based on a user interaction with a portion of the video may be obtained. Interaction information indicating the user interaction with the portion of the video may be received. The interaction information for the portion of the video may be compared with the criteria. Responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criteria, a moment in the video corresponding to the portion of the video may be associated with the moment of interest.
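The flow just described (access a video, obtain criteria, receive interaction information, compare, associate a moment of interest) can be illustrated with a minimal Python sketch. All names here (`Interaction`, `generate_highlights`, the sample sharing criterion) are hypothetical illustrations, not taken from the patent:

```python
# Illustrative sketch of the described flow; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Interaction:
    kind: str          # e.g. "view", "pause", "share"
    start: float       # start of the interacted-with portion (seconds)
    end: float         # end of the portion (seconds)

def satisfies(criteria, interactions):
    """Return the first interaction that meets any criterion, else None."""
    for it in interactions:
        if any(c(it) for c in criteria):
            return it
    return None

def generate_highlights(criteria, interactions):
    """Associate the center of a qualifying portion with a moment of interest."""
    hit = satisfies(criteria, interactions)
    if hit is None:
        return None
    return (hit.start + hit.end) / 2  # moment of interest (seconds)

# Example criterion: sharing any portion of the video marks a highlight.
criteria = [lambda it: it.kind == "share"]
interactions = [Interaction("view", 0.0, 10.0), Interaction("share", 4.0, 6.0)]
print(generate_highlights(criteria, interactions))  # 5.0
```

In this sketch the criteria are plain predicates over interactions; the patent's criteria are richer (durations, repetitions, modification types), but the compare-then-associate shape is the same.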
  • a system that generates highlights for a video may include one or more physical processors, and/or other components.
  • the one or more physical processors may be configured by machine-readable instructions. Executing the machine-readable instructions may cause the one or more physical processors to facilitate generating highlights for a video.
  • the machine-readable instructions may include one or more computer program components.
  • the computer program components may include one or more of an access component, a criteria component, an interaction information component, a comparison component, an association component, and/or other computer program components.
  • the access component may be configured to access one or more videos and/or other information.
  • the access component may access one or more videos and/or other information stored in electronic storage and/or in other locations.
  • a video may include a video clip captured by a video capture device, multiple video clips captured by a video capture device, and/or multiple video clips captured by separate video capture devices.
  • a video may include multiple video clips captured at the same time and/or multiple video clips captured at different times.
  • a video may include a video clip processed by a video application, multiple video clips processed by a video application and/or multiple video clips processed by separate video applications.
  • the criteria component may be configured to obtain one or more criteria for identifying a moment of interest within a video based on one or more user interactions with a portion of the video.
  • a user interaction with a portion of a video may include a consumption of the portion of the video, a transformation of the portion of the video, and/or other user interactions with the portion of the video.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include viewing of the portion of the video, selection of the portion of the video, zooming on the portion of the video, sharing of the portion of the video, extraction of an image from the portion of the video, a visual modification of the portion of the video, a timing modification of the portion of the video, an audio modification of the portion of the video, and/or other user interactions with the portion of the video.
  • the comparison component may be configured to compare interaction information for a portion of a video with one or more criteria for identifying a moment of interest within the video. Based on the comparison, the comparison component may be configured to determine whether the interaction information for the portion of the video satisfies one or more criteria for identifying the moment of interest within the video.
  • the association component may be configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in the video corresponding to the portion of the video with the moment of interest.
  • the association component may be further configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in a high resolution video corresponding to the portion of the video with the moment of interest.
  • the high resolution video may be characterized by a higher resolution than the video.
  • the high resolution video may be characterized by a higher framerate than the video.
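Since the high resolution video depicts the same content as the (proxy) video, a moment of interest found in one maps to the same timestamp in the other; only the frame index depends on the source framerate. A hypothetical sketch of that mapping (function name and parameters are illustrative, not from the patent):

```python
# Hypothetical sketch: mapping a moment found in a low-resolution proxy
# onto the corresponding high-resolution source video.
def proxy_moment_to_source_frame(moment_sec, source_fps):
    """Timestamps are shared between proxy and source; the frame index
    is the timestamp scaled by the source framerate."""
    return round(moment_sec * source_fps)

# A highlight at 5.0 s corresponds to frame 300 in a 60 fps source.
print(proxy_moment_to_source_frame(5.0, 60))  # 300
```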
  • FIG. 1 illustrates a system for generating highlights for a video.
  • FIG. 2 illustrates a method for generating highlights for a video.
  • FIG. 3 illustrates exemplary devices connected to a network.
  • FIG. 4 illustrates an example of a graphical user interface of a video application.
  • FIG. 6 illustrates an example of a video list panel on a graphical user interface of a video application.
  • FIG. 1 illustrates system 10 for generating highlights for a video.
  • System 10 may include one or more of processor 11, electronic storage 12, interface 13 (e.g., bus, wireless interface, etc.), and/or other components.
  • a video may be accessed by processor 11 .
  • Criteria for identifying a moment of interest within the video based on a user interaction with a portion of the video may be obtained.
  • Interaction information indicating the user interaction with the portion of the video may be received.
  • the interaction information for the portion of the video may be compared with the criteria. Responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criteria, a moment in the video corresponding to the portion of the video may be associated with the moment of interest.
  • System 10 may be embodied in a single device or multiple devices.
  • FIG. 3 illustrates exemplary devices connected via network 36 .
  • Devices connected via network 36 may include server 31 , camera 32 , mobile device 33 , desktop device 34 , and/or other device 35 .
  • Devices may be connected via other communication interfaces (e.g., bus, wireless interface, etc.).
  • system 10 may be embodied in one of the devices shown in FIG. 3 or in multiple devices shown in FIG. 3 .
  • one or more functionalities attributed herein to processor 11 for generating highlights for a video may be performed by a device containing electronic storage 12 and/or a device separate from electronic storage 12 .
  • one or more functionalities attributed herein to processor 11 for generating highlights for a video may be included in camera 32 and the video may be stored in electronic storage of camera 32 .
  • Camera 32 may be used to generate highlights for the video stored in electronic storage of camera 32 .
  • One or more functionalities attributed herein to processor 11 for generating highlights for a video may be included in mobile device 33 and the video may be stored in electronic storage of camera 32 .
  • Mobile device 33 may be used to generate highlights for the video stored in electronic storage of camera 32 .
  • Other uses of single devices and combinations of devices to generate highlights for a video are contemplated.
  • Electronic storage 12 may include one or more electronic storage media that electronically store information.
  • Electronic storage 12 may store software algorithms, information determined by processor 11 , information received remotely, and/or other information that enables system 10 to function properly.
  • electronic storage 12 may store information relating to videos, criteria for identifying a moment of interest, interaction information, user interactions with portions of videos, moments of interest, and/or other information.
  • Processor 11 may be configured to provide information processing capabilities in system 10 .
  • processor 11 may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
  • Processor 11 may be configured to execute one or more machine readable instructions 100 to facilitate generating highlights for a video.
  • Machine-readable instructions 100 may include one or more computer program components.
  • Machine-readable instructions 100 may include one or more of access component 102, criteria component 104, interaction information component 106, comparison component 108, association component 110, and/or other computer program components.
  • Access component 102 may be configured to access one or more videos and/or other information.
  • a video may include a video clip captured by a video capture device, multiple video clips captured by a video capture device, and/or multiple video clips captured by separate video capture devices.
  • a video may include multiple video clips captured at the same time and/or multiple video clips captured at different times.
  • a video may include a video clip processed by a video application, multiple video clips processed by a video application and/or multiple video clips processed by separate video applications.
  • Access component 102 may access one or more videos and/or other information stored in electronic storage (e.g., electronic storage 12, etc.) and/or in other locations. Access component 102 may be configured to access one or more videos and/or other information during acquisition of the video or after acquisition of the video. For example, access component 102 may access a video while the video is being captured by one or more cameras/image sensors. Access component 102 may obtain a video after the video has been captured and stored in memory (e.g., electronic storage 12, etc.).
  • Criteria component 104 may be configured to obtain one or more criteria for identifying a moment of interest within a video based on one or more user interactions with a portion of the video.
  • a portion of a video may correspond to a point in time within a video or a duration of time within the video.
  • a user interaction with a portion of a video may refer to one or more actions prompted by a user on the portion of the video.
  • a user interaction with a portion of a video may occur at an instant in time or over a period of time during a playback duration of the video.
  • a user interaction with a portion of a video may include a consumption of the portion of the video, a transformation of the portion of the video, and/or other user interactions with the portion of the video.
  • a portion of the video may refer to image and/or audio information at a specific instant or over a period of time within the playback duration of the video.
  • a consumption of a portion of a video may refer to an action prompted by a user relating to viewing the portion of the video.
  • a transformation of a portion of a video may refer to an action prompted by a user relating to changes in the portion of the video.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include viewing of the portion of the video.
  • One or more criteria may require one or more particular viewings of the portion of the video.
  • a particular viewing of the portion of the video may include a single viewing of the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to view the portion of the video at least once.
  • the portion of the video may be required to be viewed within a certain duration of time.
  • the portion of the video may be required to be viewed within a certain duration of time after the video has been captured, stored, received, and/or accessed.
  • a particular viewing of the portion of the video may include repeated viewing of the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to view the portion of the video a certain number of times.
  • the portion of the video may be required to be repeatedly viewed within a certain duration of time.
  • the portion of the video may be required to be repeatedly viewed within a certain duration of time after the video has been captured, stored, received, and/or accessed.
  • the portion of the video may be required to be repeatedly viewed without viewing other portions of the video and/or other videos.
  • the portion of the video may be required to be viewed twice in a row.
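The repeated-viewing criterion above (a minimum number of views of the same portion within a bounded window of time) can be sketched as follows. The function and parameter names are hypothetical, chosen for illustration:

```python
# Hypothetical check for the repeated-viewing criterion: the same portion
# viewed at least `min_views` times within any `window_sec`-second window.
def repeated_viewing(view_times, min_views, window_sec):
    """view_times: sorted wall-clock times (seconds) at which the portion
    was viewed. True if `min_views` of them fall within one window."""
    for i in range(len(view_times) - min_views + 1):
        if view_times[i + min_views - 1] - view_times[i] <= window_sec:
            return True
    return False

print(repeated_viewing([0, 12, 20], min_views=2, window_sec=15))  # True
print(repeated_viewing([0, 100], min_views=2, window_sec=15))     # False
```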
  • a particular viewing of the portion of the video may include pausing the viewing during the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to pause the viewing during the portion of the video for at least an instant or for a certain duration of time.
  • a certain duration of time may include a maximum duration after which the one or more criteria may not be satisfied.
  • a criterion may require a portion of a video to be paused for more than 5 seconds but not more than 2 minutes. Setting a maximum duration may prevent a pause of a portion of the video from satisfying one or more criteria when a user is otherwise inactive with the portion of the video (e.g., a user has stepped away from a device showing the portion of the video after pausing the video, etc.).
  • a maximum duration may not apply if a user is active with the portion of the video while the portion of the video is paused (e.g., a user is editing a portion of the video and/or obtaining information about the portion of the video while the portion of the video is paused, etc.).
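The pause criterion just described, including the maximum-duration cap and the exemption while the user remains active, can be sketched in a few lines. The function name and default thresholds (5 s minimum, 2 min maximum, matching the example above) are illustrative:

```python
# Hypothetical sketch of the pause criterion: a pause qualifies if it lasts
# more than `min_sec`, and no more than `max_sec` unless the user was
# otherwise active (e.g., editing) while the video was paused.
def pause_qualifies(pause_sec, user_active, min_sec=5.0, max_sec=120.0):
    if pause_sec <= min_sec:
        return False
    if user_active:          # maximum duration does not apply while active
        return True
    return pause_sec <= max_sec

print(pause_qualifies(30.0, user_active=False))   # True
print(pause_qualifies(600.0, user_active=False))  # False (user stepped away)
print(pause_qualifies(600.0, user_active=True))   # True (editing while paused)
```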
  • a particular viewing of the portion of the video may include seeking and playing the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to fast forward and/or reverse to the portion of the video and view the portion of the video.
  • One or more criteria may require the fast forward and/or reverse to be accomplished at a certain speed, under a certain speed, or over a certain speed.
  • a particular viewing of the portion of the video may include playing the portion of the video based on a suggestion.
  • one or more criteria may require a user interaction with the portion of the video to view the portion of the video based on a suggestion to view the portion from another user, from a video application (e.g., automatic suggestion of highlights from visual/metadata analysis, etc.), and/or from other suggestion sources.
  • Other particular views of the portion of the video are contemplated.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include selection of the portion of the video.
  • One or more criteria may require one or more particular selections of the portion of the video.
  • a particular selection of the portion of the video may include a single selection of the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to select the portion of the video at least once (e.g., selecting a portion of a video between 0.5 seconds and 5 seconds time interval, etc.).
  • a particular selection of the portion of the video may include repeated selection of the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to select the portion of the video a certain number of times.
  • the portion of the video may be required to be repeatedly selected within a certain duration of time.
  • the portion of the video may be required to be repeatedly selected within a certain duration of time after the video has been captured, stored, received, and/or accessed.
  • a particular selection of the portion of the video may include using a pointer (e.g., a cursor, etc.) to hover over the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to hover a pointer over the portion of the video for a certain duration of time.
  • a certain duration of time may include a maximum duration after which the one or more criteria may not be satisfied.
  • one or more criteria may require hovering over a portion of a video for more than 1 second but not more than 1 minute.
  • Setting a maximum duration may prevent a hover over a portion of the video from satisfying one or more criteria when a user is otherwise inactive with the portion of the video (e.g., a user has stepped away from a device showing the portion of the video after leaving the cursor hovering over the portion of the video, etc.).
  • a maximum duration may not apply if a user is active with the portion of the video while hovering over the portion of the video (e.g., a user is editing a portion of the video and/or obtaining information about the portion of the video while hovering over the portion of the video, etc.).
  • Other particular selections of the portion of the video are contemplated.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include zooming on the portion of the video.
  • One or more criteria may require one or more particular zooms on the portion of the video.
  • a particular zoom on the portion of the video may include increasing or decreasing a zoom level at which the portion of the video is viewed.
  • one or more criteria may require a user interaction with the portion of the video to view the portion of the video at a 2× zoom level.
  • One or more criteria may require a user interaction with the portion of the video to change the zoom level once or multiple times (e.g., from 1× zoom to 2× zoom to 0.5× zoom, etc.).
  • Other particular zooms of the portion of the video are contemplated.
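A hypothetical sketch of the zoom criteria above: either reaching a target zoom level, or changing the zoom level a minimum number of times. The names and thresholds (2× target, two changes) are illustrative, following the examples in the text:

```python
# Hypothetical check for the zoom criteria: reaching a target zoom level,
# or changing the zoom level at least `min_changes` times.
def zoom_qualifies(zoom_levels, target=2.0, min_changes=2):
    """zoom_levels: successive zoom levels, e.g. [1.0, 2.0, 0.5]."""
    reached_target = any(z >= target for z in zoom_levels)
    changes = sum(1 for a, b in zip(zoom_levels, zoom_levels[1:]) if a != b)
    return reached_target or changes >= min_changes

print(zoom_qualifies([1.0, 2.0, 0.5]))  # True (hit 2x, and changed twice)
print(zoom_qualifies([1.0, 1.5]))       # False (no target, one change)
```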
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include sharing of the portion of the video.
  • One or more criteria may require one or more particular sharing of the portion of the video.
  • a particular sharing of the portion of the video may include sharing one or more images from the portion of the video and/or sharing one or more video clips from the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to share a single image from the portion of the video, share multiple images from the portion of the video, share a video clip corresponding to the portion of the video, and/or share video clips corresponding to the portion of the video.
  • Other particular types of sharing of the portion of the video are contemplated.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include extraction of an image or a video clip from the portion of the video.
  • One or more criteria may require one or more particular extractions of the portion of the video.
  • a particular extraction of the portion of the video may include extracting one or more images from the portion of the video and/or extracting one or more video clips from the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to extract a single image from the portion of the video, extract multiple images from the portion of the video, extract a video clip corresponding to the portion of the video, and/or extract one or more video clips corresponding to the portion of the video.
  • Other particular extractions of the portion of the video are contemplated.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include a visual modification of the portion of the video.
  • One or more criteria may require one or more particular visual modifications of the portion of the video.
  • a particular visual modification of the video may include one or more changes in visuals to one or more images and/or one or more video clips from the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to transform one or more visual characteristics of a single image from the portion of the video, multiple images from the portion of the video, a video clip corresponding to the portion of the video, and/or video clips corresponding to the portion of the video.
  • One or more transformations of visual characteristics may include cropping, color changes, brightness changes, contrast changes, image warping, image blurring, aspect ratio changes, resolution changes, framerate changes, transition effects, special effects, addition of visuals (e.g., other image, video, text, color, etc.), removal of visuals, and/or other changes in visuals.
  • Other particular visual modifications of the portion of the video are contemplated.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include a timing modification of the portion of the video.
  • One or more criteria may require one or more particular timing modifications of the portion of the video.
  • a particular timing modification of the video may include one or more changes in timing of one or more parts of the video.
  • one or more criteria may require a user interaction with the portion of the video to change a length and/or a speed of a portion of the video.
  • a change in length and/or speed of a portion of a video may be local (specific to a part of the portion of the video) or global (applicable to the entire portion of the video).
  • a change in length of a portion of a video may include increasing the length of a portion of a video, decreasing the length of a portion of a video, trimming a portion of a video, and/or other changes in length of a portion of a video.
  • a change in speed of a portion of a video may include a slow motion effect, a fast motion effect, a time freeze effect, a speed ramp effect, and/or other changes in speed of a portion of a video.
  • a slow motion effect may decrease the speed with which a portion of a video may be presented during playback.
  • a fast motion effect may increase the speed with which a portion of a video may be presented during playback.
  • a slow motion effect and/or a fast motion effect may be linear or dynamic.
  • a time freeze effect may, for a duration of time, freeze a portion of a video.
  • a speed ramp effect may decrease the speed with which a part of a portion of a video may be presented during playback and increase the speed with which another part of the portion of a video may be presented during playback. Other particular timing modifications of the portion of the video are contemplated.
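The speed ramp effect just described can be modeled as a playback-speed multiplier that is below 1 for one part of the portion and above 1 for the rest. A minimal hypothetical sketch (names and the 0.5×/2× values are illustrative):

```python
# Hypothetical speed-ramp sketch: the part of the portion before `split`
# plays slower than normal, the remainder plays faster, as described above.
def speed_ramp(t, split, slow=0.5, fast=2.0):
    """Playback-speed multiplier at time t (seconds) within the portion."""
    return slow if t < split else fast

# With a split at 3 s: half speed before, double speed after.
print(speed_ramp(1.0, split=3.0))  # 0.5
print(speed_ramp(4.0, split=3.0))  # 2.0
```

A dynamic (non-linear) ramp would replace the step with a continuous function of `t`, matching the note above that slow/fast motion effects may be linear or dynamic.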
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include an audio modification of the portion of the video.
  • One or more criteria may require one or more particular audio modifications of the portion of the video.
  • a particular audio modification of the video may include one or more changes in audio within the portion of the video.
  • one or more criteria may require a user interaction with the portion of the video to transform one or more audio characteristics of one or more audio within the portion of the video.
  • One or more transformations of audio characteristics may include changes to a beat, a tempo, a rhythm, a volume, a frequency, a start, an end, and/or other audio characteristics.
  • Other particular audio modifications of the portion of the video are contemplated.
  • a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include one or more of the user interactions described above and/or other user interactions of the portion of the video.
  • one or more criteria may require one or more particular visual modifications of the portion of the video and one or more sharing of the portion of the video.
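A compound criterion like the one above (a visual modification and a sharing of the same portion) reduces to requiring that a set of interaction kinds all occurred. A hypothetical sketch, with illustrative kind labels:

```python
# Hypothetical sketch of a compound criterion: the portion must have been
# both visually modified and shared, per the example above.
def compound_qualifies(interaction_kinds):
    required = {"visual_modification", "share"}
    return required.issubset(interaction_kinds)

print(compound_qualifies({"visual_modification", "share", "view"}))  # True
print(compound_qualifies({"share"}))                                 # False
```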
  • Interaction information component 106 may be configured to receive interaction information.
  • Interaction information may indicate one or more user interactions with a portion of a video.
  • Interaction information may be received at a time or over a period of time.
  • Interaction information may be received based on one or more user inputs.
  • One or more user inputs may be received via one or more graphical user interfaces of one or more video applications.
  • a video application may refer to one or more software programs, one or more software programs running on hardware, and/or other applications operating to present video on a display.
  • a video application may include one or more of a video viewer, a video editor, and/or other video applications.
  • a video application may run on one or more of a mobile device, a desktop device, a camera, and/or other hardware.
  • FIG. 4 illustrates an example of graphical user interface 400 of a video application.
  • Graphical user interface 400 may include one or more of playback panel 401 , control panel 402 , storyboard panel 403 , and/or other panels. Individual panels of graphical user interface 400 may present different information.
  • playback panel 401 may present playback of one or more portions of one or more videos.
  • Control panel 402 may present playback options for playback and/or transformation options for transformation of one or more portions of one or more videos.
  • control panel 402 may present playback options and/or transformation options for one or more portions of video A 411 , video B 412 , and/or video C 413 .
  • Storyboard panel 403 may present information related to one or more videos.
  • storyboard panel 403 may present information relating to titles, durations, dates, highlights, and/or other information relating to videos.
  • storyboard panel 403 may present information related to video A 411 , video B 412 , and video C 413 .
  • Storyboard panel 403 may present one or more videos based on the order and/or duration of the videos.
  • video A 411 , video B 412 , and video C 413 may be included in a video presentation.
  • Storyboard panel 403 may present video A 411 , video B 412 , and video C 413 in the order in which they appear in the video presentation.
  • Storyboard panel 403 may allow a user to change the order of videos.
  • Storyboard panel 403 may present videos based on the durations of videos. For example, the sizes of video A 411 , video B 412 , and video C 413 in storyboard panel 403 may correspond to the durations of video A 411 , video B 412 , and video C 413 in the video presentation. Other appearances and types of graphical user interface/panels are contemplated.
  • Comparison component 108 may be configured to compare interaction information for a portion of a video with one or more criteria for identifying a moment of interest within the video. Based on the comparison, comparison component 108 may be configured to determine whether the interaction information for the portion of the video satisfies one or more criteria for identifying the moment of interest within the video.
  • Association component 110 may be configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in the video corresponding to the portion of the video with the moment of interest.
  • A moment in the video corresponding to the portion of the video may correspond to a point in time within the video or a duration of time within the video.
  • A point in time within a video may correspond to a beginning of the portion of the video, a center of the portion of the video, an end of the portion of the video, or another instant in time within the portion of the video.
  • An association of a moment in a video with a moment of interest may be presented via a graphical user interface of a video application.
  • One or more associations of moments in videos with moments of interest may be shown in storyboard panel 403 of FIG. 4.
  • FIGS. 5A-5B illustrate examples of associations of moments in videos with moments of interest shown in storyboard panel 403 .
  • Associations of moments in videos with moments of interest may be shown as circles (e.g., highlight A 501 , highlight B 502 ).
  • Other displays of associations of moments in videos with moments of interests are contemplated.
  • A moment within video A 411 may be associated with a moment of interest (highlight A 501) and a moment within video B 412 may be associated with a moment of interest (highlight B 502).
  • Highlight A 501 and highlight B 502 may be associated by association component 110 based on comparison component 108 determining that interaction information for a portion of video A 411 corresponding to highlight A 501 satisfied one or more criteria and that interaction information for a portion of video B 412 corresponding to highlight B 502 satisfied one or more criteria.
  • Video A 411 , video B 412 , and/or video C 413 may include other associations.
  • Interaction information for a portion of video A 411 may indicate a user interaction with the portion of video A 411 to save a still image (e.g., via image extraction) from video A. Based on this interaction information satisfying one or more criteria, association component 110 may associate a point in time corresponding to the still image with a moment of interest as highlight A 501.
  • Interaction information for a portion of video B 412 may indicate a user interaction with the portion of video B 412 to select the portion of video B 412 (e.g., selecting a portion of video B 412 between the 0.5 second and 5 second marks).
  • Based on this interaction information satisfying one or more criteria, association component 110 may associate a point in time corresponding to a center of the portion of video B 412 (e.g., the 2.75 second mark of video B 412) with a moment of interest as highlight B 502.
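The arithmetic in the example above can be sketched as follows; this is a minimal illustration, assuming a portion is represented by its start and end timestamps in seconds (the function and parameter names are hypothetical, not from the disclosure):

```python
def moment_from_portion(start_s, end_s, anchor="center"):
    """Map a selected portion of a video to a single point in time.

    Per the disclosure, the moment may correspond to the beginning,
    center, or end of the portion.
    """
    if anchor == "beginning":
        return start_s
    if anchor == "end":
        return end_s
    # Default: the center of the portion.
    return (start_s + end_s) / 2.0

# A portion of video B 412 selected between the 0.5 s and 5 s marks
# maps to the 2.75 s mark when the center is used as the moment.
print(moment_from_portion(0.5, 5.0))  # 2.75
```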
  • A graphical user interface of a video application may display highlight moments based on user tagging.
  • Graphical user interface 400 may display highlight moments based on direct markings of moments corresponding to portions of videos as moments of interest, user comments corresponding to portions of videos, user labeling corresponding to portions of videos, and/or other user tags.
  • User tagging may be accomplished during capture of a video or after capture of a video.
  • Association component 110 may be configured to prompt a user to approve associating a moment in a video corresponding to a portion of the video with a moment of interest.
  • Association component 110 may allow a user to individually approve association of individual moments in the video with moments of interest and/or approve some or all associations at once. For example, association component 110 may present the associations to the user in chronological order and/or based on types of moments of interest. Association component 110 may allow a user to accept all associations by saving a copy of a video with the new associations. Responsive to reception of the user's approval, association component 110 may be configured to confirm the association of the moment in the video corresponding to the portion of the video with the moment of interest. Responsive to the user's refusal, association component 110 may be configured to remove the association(s).
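One way the approval flow described above could be organized is sketched below; this is a hypothetical illustration, not the disclosed implementation, and all names are invented for the sketch:

```python
class AssociationApprover:
    """Track pending highlight associations and let a user confirm
    or refuse them individually or all at once."""

    def __init__(self):
        self.pending = []    # associations awaiting user approval
        self.confirmed = []  # associations the user has approved

    def propose(self, moment, moment_of_interest):
        # A candidate association made by the association component.
        self.pending.append((moment, moment_of_interest))

    def approve_one(self, index):
        # Confirm a single association in response to user approval.
        self.confirmed.append(self.pending.pop(index))

    def approve_all(self):
        # E.g., triggered by saving a copy of the video with new associations.
        self.confirmed.extend(self.pending)
        self.pending.clear()

    def refuse_one(self, index):
        # Responsive to the user's refusal, remove the association.
        self.pending.pop(index)
```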
  • Storyboard panel 403 may display highlight A 501, highlight B 502, and highlight C 503.
  • Associations of highlight A 501 and highlight B 502 may have been confirmed by a user.
  • Association of highlight C 503 may not have been confirmed by a user.
  • Confirmed associations and unconfirmed associations may be displayed differently. For example, confirmed associations and unconfirmed associations may be shown in different colors (e.g., yellow for confirmed association and blue for unconfirmed association, etc.). Confirmed associations and unconfirmed associations may be shown in different shapes (e.g., circle for confirmed association and square for unconfirmed association, solid circle for confirmed association and dotted circle for unconfirmed association, etc.). Other ways of distinguishing confirmed associations from unconfirmed associations are contemplated.
  • Associations of moments in a video with moments of interest may be shown in other parts of graphical user interface 400.
  • Graphical user interface 400 may include video list panel 601 (shown in FIG. 6).
  • Video list panel 601 may present one or more videos for inclusion in a video presentation.
  • Video list panel 601 may allow a user to select video A 411, video B 412, video C 413, and/or other video 602 for inclusion in a video presentation shown in storyboard panel 403.
  • Video list panel 601 may display associations of a moment in a video with a moment of interest.
  • In FIG. 6, video list panel 601 may display highlight A 501 in video A 411, highlight B 502 in video B 412, and highlight C 503 in video C 413.
  • Association component 110 may be configured to indicate a type of moment of interest.
  • A type of moment of interest may indicate the criteria satisfied by the interaction information or a classification of the criteria satisfied by the interaction information.
  • Association component 110 may tag a highlight moment with information that indicates the criteria satisfied by the interaction information.
  • Association component 110 may tag a highlight moment with information based on classification of the criteria satisfied by the interaction information.
  • The type of moment of interest may be shown on a graphical user interface of a video application.
  • Association component 110 may be further configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in a high resolution video corresponding to the portion of the video with the moment of interest.
  • The high resolution video may be characterized by a higher resolution than the video.
  • The high resolution video may be characterized by a higher framerate than the video.
  • Camera 32 may include a video and a high resolution video.
  • The high resolution video may have been captured by camera 32 and the video (having a lower resolution and/or lower framerate than the high resolution video) may have been generated from the high resolution video.
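When the lower-resolution video is generated from the high resolution capture, a moment identified in one can be mapped to the other. The sketch below assumes both videos start at the same instant and share a timeline, and maps a frame index by the ratio of framerates; this mechanism and all names are illustrative assumptions, as the disclosure does not specify how the mapping is performed:

```python
def map_moment_to_high_res(frame_index, proxy_fps, high_res_fps):
    """Map a frame index in a lower-framerate proxy video to the
    corresponding frame index in the high resolution source.

    Assumes both videos begin at the same instant and share a timeline.
    """
    time_s = frame_index / proxy_fps     # timestamp of the moment in seconds
    return round(time_s * high_res_fps)  # nearest high-resolution frame

# Frame 30 of a 30 fps proxy (the 1 s mark) corresponds to frame 60
# of a 60 fps high resolution capture.
print(map_moment_to_high_res(30, 30, 60))  # 60
```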
  • Camera 32 may access the video and obtain one or more criteria for identifying a moment of interest within the video.
  • A user may operate camera 32 to interact with the video and camera 32 may compare interaction information for a portion of the video with one or more criteria. Responsive to the interaction information satisfying one or more criteria, camera 32 may associate a moment in the video corresponding to the portion of the video with the moment of interest and associate a moment in the high resolution video corresponding to the portion of the video with the moment of interest.
  • Camera 32 may include a video and a high resolution video, and camera 32 may send the video to mobile device 33.
  • Mobile device 33 may access the video and obtain one or more criteria for identifying a moment of interest within the video.
  • A user may operate mobile device 33 to interact with the video and mobile device 33 may compare interaction information for a portion of the video with one or more criteria. Responsive to the interaction information satisfying one or more criteria, mobile device 33 may associate a moment in the video corresponding to the portion of the video with the moment of interest and associate a moment in the high resolution video corresponding to the portion of the video with the moment of interest.
  • Mobile device 33 may indirectly associate the moment in the high resolution video with the moment of interest.
  • Mobile device 33 may send one or more commands to camera 32 to associate the moment in the high resolution video with the moment of interest.
  • Mobile device 33 may send information relating to association of the moment in the video with the moment of interest to camera 32 .
  • Camera 32 may use the information relating to association of the moment in the video with the moment of interest to associate the moment in the high resolution video with the moment of interest.
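The indirect association described above might be carried by a small message from the mobile device to the camera; the message format and field names below are hypothetical, invented only to illustrate what such a command could contain:

```python
import json

def build_association_command(video_id, moment_s, moment_type):
    """Build a command instructing the camera to associate the same
    moment in its high resolution video with the moment of interest."""
    return json.dumps({
        "command": "associate_highlight",
        "video_id": video_id,        # identifies the high resolution source
        "moment_seconds": moment_s,  # moment confirmed on the mobile device
        "type": moment_type,         # e.g., which criteria were satisfied
    })

# The camera would parse the command and apply the association locally.
cmd = build_association_command("video_b_412", 2.75, "selection")
print(json.loads(cmd)["moment_seconds"])  # 2.75
```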
  • While the present disclosure may be directed to videos, one or more other implementations of the system may be configured for other types of media content.
  • Other types of media content may include one or more of audio content (e.g., music, podcasts, audio books, and/or other audio content), multimedia presentations, photos, slideshows, and/or other media content.
  • Although processor 11 and electronic storage 12 are shown to be connected to interface 13 in FIG. 1, any communication medium may be used to facilitate interaction between any components of system 10.
  • One or more components of system 10 may communicate with each other through hard-wired communication, wireless communication, or both.
  • One or more components of system 10 may communicate with each other through a network.
  • Processor 11 may wirelessly communicate with electronic storage 12.
  • Wireless communication may include one or more of radio communication, Bluetooth communication, Wi-Fi communication, cellular communication, infrared communication, or other wireless communication. Other types of communications are contemplated by the present disclosure.
  • Processor 11 may comprise a plurality of processing units. These processing units may be physically located within the same device, or processor 11 may represent processing functionality of a plurality of devices operating in coordination. Processor 11 may be configured to execute one or more components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor 11.
  • It should be appreciated that although computer components are illustrated in FIG. 1 as being co-located within a single processing unit, in implementations in which processor 11 comprises multiple processing units, one or more of the computer program components may be located remotely from the other computer program components.
  • Processor 11 may be configured to execute one or more additional computer program components that may perform some or all of the functionality attributed to one or more of computer program components 102, 104, 106, 108, and/or 110 described herein.
  • The electronic storage media of electronic storage 12 may be provided integrally (i.e., substantially non-removable) with one or more components of system 10 and/or as removable storage that is connectable to one or more components of system 10 via, for example, a port (e.g., a USB port, a Firewire port, etc.) or a drive (e.g., a disk drive, etc.).
  • Electronic storage 12 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
  • Electronic storage 12 may be a separate component within system 10 , or electronic storage 12 may be provided integrally with one or more other components of system 10 (e.g., processor 11 ). Although electronic storage 12 is shown in FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, electronic storage 12 may comprise a plurality of storage units. These storage units may be physically located within the same device, or electronic storage 12 may represent storage functionality of a plurality of devices operating in coordination.
  • FIG. 2 illustrates method 200 for generating highlights for a video.
  • The operations of method 200 presented below are intended to be illustrative. In some implementations, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. In some implementations, two or more of the operations may occur substantially simultaneously.
  • Method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
  • The one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on one or more electronic storage media.
  • The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200.
  • At operation 201, a video may be accessed.
  • A video may be accessed from electronic storage and/or from other locations.
  • Operation 201 may be performed by a processor component the same as or similar to access component 102 (shown in FIG. 1 and described herein).
  • At operation 202, criteria for identifying a moment of interest within a video may be obtained.
  • A moment of interest within a video may be identified based on a user interaction with a portion of the video.
  • A user interaction with a portion of a video may include a consumption of the portion of the video, a transformation of the portion of the video, and/or other user interactions with the portion of the video.
  • Operation 202 may be performed by a processor component the same as or similar to criteria component 104 (shown in FIG. 1 and described herein).
  • At operation 203, interaction information indicating the user interaction with the portion of the video may be received.
  • Interaction information may be received at a time or over a period of time.
  • Operation 203 may be performed by a processor component the same as or similar to interaction information component 106 (shown in FIG. 1 and described herein).
  • At operation 204, the interaction information for the portion of the video may be compared with the criteria.
  • Operation 204 may be performed by a processor component the same as or similar to comparison component 108 (shown in FIG. 1 and described herein).
  • At operation 205, responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criteria, a moment in the video corresponding to the portion of the video may be associated with the moment of interest.
  • Operation 205 may be performed by a processor component the same as or similar to association component 110 (shown in FIG. 1 and described herein).
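Operations 201-205 can be summarized as a single pass; below is a schematic sketch in which criteria are simple predicates over interaction records, portions are (start, end) intervals in seconds, and the center of a satisfying portion becomes the highlight moment. All names and data shapes are illustrative assumptions, not the disclosed implementation:

```python
def generate_highlights(interactions, criteria):
    """Compare interaction information for each portion of a video
    against the criteria (operation 204) and, where satisfied,
    associate the corresponding moment with a moment of interest
    (operation 205)."""
    highlights = []
    for portion, info in interactions.items():
        # Each criterion is a predicate over the interaction information.
        if any(criterion(info) for criterion in criteria):
            start_s, end_s = portion
            highlights.append((start_s + end_s) / 2.0)  # center as the moment
    return sorted(highlights)

# Illustrative criteria: repeated viewing, or extraction of a still image.
criteria = [
    lambda info: info.get("view_count", 0) >= 2,
    lambda info: info.get("image_extracted", False),
]
interactions = {
    (0.5, 5.0): {"view_count": 3},         # repeatedly viewed -> highlight
    (6.0, 8.0): {"view_count": 1},         # single view -> no highlight
    (9.0, 9.0): {"image_extracted": True}, # still image saved -> highlight
}
print(generate_highlights(interactions, criteria))  # [2.75, 9.0]
```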


Abstract

This disclosure relates to systems and methods that generate highlights for a video. A video may be accessed. Criteria for identifying a moment of interest within the video based on a user interaction with a portion of the video may be obtained. Interaction information indicating the user interaction with the portion of the video may be received. The interaction information for the portion of the video may be compared with the criteria. Responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criteria, a moment in the video corresponding to the portion of the video may be associated with the moment of interest.

Description

    FIELD
  • This disclosure relates to systems and methods that generate highlights for a video.
  • BACKGROUND
  • Video applications may allow a user to manually identify highlight moments within a video. Manually identifying highlight moments may be time consuming and may discourage users from identifying highlight moments.
  • SUMMARY
  • This disclosure relates to generating highlights for a video. A video may be accessed. Criteria for identifying a moment of interest within the video based on a user interaction with a portion of the video may be obtained. Interaction information indicating the user interaction with the portion of the video may be received. The interaction information for the portion of the video may be compared with the criteria. Responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criteria, a moment in the video corresponding to the portion of the video may be associated with the moment of interest.
  • A system that generates highlights for a video may include one or more physical processors, and/or other components. The one or more physical processors may be configured by machine-readable instructions. Executing the machine-readable instructions may cause the one or more physical processors to facilitate generating highlights for a video. The machine-readable instructions may include one or more computer program components. The computer program components may include one or more of an access component, a criteria component, an interaction information component, a comparison component, an association component, and/or other computer program components.
  • The access component may be configured to access one or more videos and/or other information. The access component may access one or more videos and/or other information stored in electronic storage and/or in other locations. A video may include a video clip captured by a video capture device, multiple video clips captured by a video capture device, and/or multiple video clips captured by separate video capture devices. A video may include multiple video clips captured at the same time and/or multiple video clips captured at different times. A video may include a video clip processed by a video application, multiple video clips processed by a video application and/or multiple video clips processed by separate video applications.
  • The criteria component may be configured to obtain one or more criteria for identifying a moment of interest within a video based on one or more user interactions with a portion of the video. A user interaction with a portion of a video may include a consumption of the portion of the video, a transformation of the portion of the video, and/or other user interactions with the portion of the video. In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include viewing of the portion of the video, selection of the portion of the video, zooming on the portion of the video, sharing of the portion of the video, extraction of an image from the portion of the video, a visual modification of the portion of the video, a timing modification of the portion of the video, an audio modification of the portion of the video, and/or other user interactions with the portion of the video.
  • The interaction information component may be configured to receive interaction information. Interaction information may indicate one or more user interactions with a portion of a video. Interaction information may be received at a time or over a period of time. Interaction information may be received based on one or more user input. One or more user input may be received via one or more graphical user interfaces of one or more video applications.
  • The comparison component may be configured to compare interaction information for a portion of a video with one or more criteria for identifying a moment of interest within the video. Based on the comparison, the comparison component may be configured to determine whether the interaction information for the portion of the video satisfies one or more criteria for identifying the moment of interest within the video.
  • The association component may be configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in the video corresponding to the portion of the video with the moment of interest. In some implementations, the association component may be further configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in a high resolution video corresponding to the portion of the video with the moment of interest. The high resolution video may be characterized by a higher resolution than the video. The high resolution video may be characterized by a higher framerate than the video.
  • These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a system for generating highlights for a video.
  • FIG. 2 illustrates a method for generating highlights for a video.
  • FIG. 3 illustrates exemplary devices connected to a network.
  • FIG. 4 illustrates an example of a graphical user interface of a video application.
  • FIGS. 5A-5B illustrate examples of a storyboard panel on a graphical user interface of a video application.
  • FIG. 6 illustrates an example of a video list panel on a graphical user interface of a video application.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates system 10 for generating highlights for a video. System 10 may include one or more of processor 11, electronic storage 12, interface 13 (e.g., bus, wireless interface, etc.), and/or other components. A video may be accessed by processor 11. Criteria for identifying a moment of interest within the video based on a user interaction with a portion of the video may be obtained. Interaction information indicating the user interaction with the portion of the video may be received. The interaction information for the portion of the video may be compared with the criteria. Responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criteria, a moment in the video corresponding to the portion of the video may be associated with the moment of interest.
  • System 10 may be embodied in a single device or multiple devices. For example, FIG. 3 illustrates exemplary devices connected via network 36. Devices connected via network 36 may include server 31, camera 32, mobile device 33, desktop device 34, and/or other device 35. Devices may be connected via other communication interface (e.g., bus, wireless interface, etc.). As a non-limiting example, system 10 may be embodied in one of the devices shown in FIG. 3 or in multiple devices shown in FIG. 3.
  • For example, one or more functionalities attributed herein to processor 11 for generating highlights for a video may be performed by a device containing electronic storage 12 and/or a device separate from electronic storage 12. For example, one or more functionalities attributed herein to processor 11 for generating highlights for a video may be included in camera 32 and the video may be stored in electronic storage of camera 32. Camera 32 may be used to generate highlights for the video stored in electronic storage of camera 32. One or more functionalities attributed herein to processor 11 for generating highlights for a video may be included in mobile device 33 and the video may be stored in electronic storage of camera 32. Mobile device 33 may be used to generate highlights for the video stored in electronic storage of camera 32. Other uses of single devices and combinations of devices to generate highlights for a video are contemplated.
  • Electronic storage 12 may include an electronic storage medium that electronically stores information. Electronic storage 12 may store software algorithms, information determined by processor 11, information received remotely, and/or other information that enables system 10 to function properly. For example, electronic storage 12 may store information relating to videos, criteria for identifying a moment of interest, interaction information, user interactions with portions of videos, moments of interest, and/or other information.
  • Processor 11 may be configured to provide information processing capabilities in system 10. As such, processor 11 may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Processor 11 may be configured to execute one or more machine readable instructions 100 to facilitate generating highlights for a video. Machine-readable instructions 100 may include one or more computer program components. Machine readable instructions 100 may include one or more of access component 102, criteria component 104, interaction information component 106, comparison component 108, association component 110, and/or other computer program components.
  • Access component 102 may be configured to access one or more videos and/or other information. A video may include a video clip captured by a video capture device, multiple video clips captured by a video capture device, and/or multiple video clips captured by separate video capture devices. A video may include multiple video clips captured at the same time and/or multiple video clips captured at different times. A video may include a video clip processed by a video application, multiple video clips processed by a video application and/or multiple video clips processed by separate video applications.
  • Access component 102 may access one or more videos and/or other information stored in electronic storage (e.g., electronic storage 12, etc.) and/or in other locations. Access component 102 may be configured to access one or more videos and/or other information during acquisition of the video or after acquisition of the video. For example, access component 102 may access a video while the video is being captured by one or more cameras/image sensors. Access component 102 may obtain a video after the video has been captured and stored in memory (e.g., electronic storage 12, etc.).
  • Criteria component 104 may be configured to obtain one or more criteria for identifying a moment of interest within a video based on one or more user interactions with a portion of the video. A portion of a video may correspond to a point in time within a video or a duration of time within the video. A user interaction with a portion of a video may refer to one or more actions prompted by a user on the portion of the video. A user interaction with a portion of a video may occur at an instant in time or over a period of time during a play back duration of the video. A user interaction with a portion of a video may include a consumption of the portion of the video, a transformation of the portion of the video, and/or other user interactions with the portion of the video. A portion of the video may refer to image and/or audio information at a specific instant or over a period of time within the playback duration of the video. A consumption of a portion of a video may refer to an action prompted by a user relating to viewing the portion of the video. A transformation of a portion of a video may refer to an action prompted by a user relating to changes in the portion of the video.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include viewing of the portion of the video. One or more criteria may require one or more particular viewings of the portion of the video. A particular viewing of the portion of the video may include a single viewing of the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to view the portion of the video at least once. The portion of the video may be required to be viewed within a certain duration of time. For example, the portion of the video may be required to be viewed within a certain duration of time after the video has been captured, stored, received, and/or accessed.
  • A particular viewing of the portion of the video may include repeated viewing of the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to view the portion of the video a certain number of times. The portion of the video may be required to be repeatedly viewed within a certain duration of time. For example, the portion of the video may be required to be repeatedly viewed within a certain duration of time after the video has been captured, stored, received, and/or accessed. The portion of the video may be required to be repeatedly viewed without viewing other portions of the video and/or other videos. For example, the portion of the video may be required to be viewed twice in a row.
  • A particular viewing of the portion of the video may include pausing the viewing during the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to pause the viewing during the portion of the video for at least an instant or for a certain duration of time. A certain duration of time may include a maximum duration after which the one or more criteria may not be satisfied. For example, a criterion may require a portion of a video to be paused for more than 5 seconds but not more than 2 minutes. Setting a maximum duration may prevent a pause of a portion of the video from satisfying one or more criteria when a user is otherwise inactive with the portion of the video (e.g., a user has stepped away from a device showing the portion of the video after pausing the video, etc.). A maximum duration may not apply if a user is active with the portion of the video while the portion of the video is paused (e.g., a user is editing a portion of the video and/or obtaining information about the portion of the video while the portion of the video is paused, etc.).
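The pause criterion above, with its minimum and maximum durations and the activity exception, could be expressed as a predicate; a sketch using the example thresholds from the text (a pause longer than 5 seconds but not more than 2 minutes), with hypothetical names:

```python
def pause_satisfies_criterion(pause_seconds, user_active=False,
                              min_s=5.0, max_s=120.0):
    """Return True if a pause during a portion of a video satisfies
    the criterion for identifying a moment of interest.

    The maximum duration guards against counting pauses where the
    user has simply stepped away; it does not apply while the user
    is actively editing or inspecting the paused portion.
    """
    if pause_seconds <= min_s:
        return False
    if user_active:
        return True  # maximum does not apply during active interaction
    return pause_seconds <= max_s

print(pause_satisfies_criterion(10.0))                     # True
print(pause_satisfies_criterion(300.0))                    # False (stepped away)
print(pause_satisfies_criterion(300.0, user_active=True))  # True
```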
  • A particular viewing of the portion of the video may include viewing the portion of the video at a certain speed. For example, one or more criteria may require a user interaction with the portion of the video to change the speed at which the portion of the video is viewed. One or more criteria may require the viewing speed of the portion of the video to be slowed (e.g., from 1× speed to 0.5× speed, from 4× speed to 1× speed, etc.). One or more criteria may require the viewing speed of the portion of the video to be increased (e.g., from 1× speed to 2× speed, from 0.25× to 2× speed, etc.). One or more criteria may require the viewing speed of the portion of the video to be increased and slowed.
  • A particular viewing of the portion of the video may include seeking and playing the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to fast forward and/or reverse to the portion of the video and view the portion of the video. One or more criteria may require the fast forward and/or reverse to be accomplished at a certain speed, under a certain speed, or over a certain speed.
  • A particular viewing of the portion of the video may include playing the portion of the video based on a suggestion. For example, one or more criteria may require a user interaction with the portion of the video to view the portion of the video based on a suggestion to view the portion from another user, from a video application (e.g., automatic suggestion of highlights from visual/metadata analysis, etc.), and/or from other suggestion sources. Other particular views of the portion of the video are contemplated.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include selection of the portion of the video. One or more criteria may require one or more particular selections of the portion of the video. A particular selection of the portion of the video may include a single selection of the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to select the portion of the video at least once (e.g., selecting a portion of a video spanning a time interval between 0.5 seconds and 5 seconds, etc.).
  • A particular selection of the portion of the video may include repeated selection of the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to select the portion of the video a certain number of times. The portion of the video may be required to be repeatedly selected within a certain duration of time. For example, the portion of the video may be required to be repeatedly selected within a certain duration of time after the video has been captured, stored, received, and/or accessed.
  • A particular selection of the portion of the video may include using a pointer (e.g., a cursor, etc.) to hover over the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to hover a pointer over the portion of the video for a certain duration of time. A certain duration of time may include a maximum duration after which the one or more criteria may not be satisfied. For example, one or more criteria may require hovering over a portion of a video for more than 1 second but not more than 1 minute. Setting a maximum duration may prevent hovering over a portion of the video from satisfying one or more criteria when a user is otherwise inactive with the portion of the video (e.g., a user has stepped away from a device showing the portion of the video after leaving the cursor hovering over the portion of the video, etc.). A maximum duration may not apply if a user is active with the portion of the video while hovering over the portion of the video (e.g., a user is editing a portion of the video and/or obtaining information about the portion of the video while hovering over the portion of the video, etc.). Other particular selections of the portion of the video are contemplated.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include zooming on the portion of the video. One or more criteria may require one or more particular zooms on the portion of the video. A particular zoom on the portion of the video may include increasing or decreasing a zoom level at which the portion of the video is viewed. For example, one or more criteria may require a user interaction with the portion of the video to view the portion of the video at 2× zoom level. One or more criteria may require a user interaction with the portion of the video to change the zoom level once or multiple times (e.g., from 1× zoom to 2× zoom to 0.5× zoom, etc.). Other particular zooms of the portion of the video are contemplated.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include sharing of the portion of the video. One or more criteria may require one or more particular sharing of the portion of the video. A particular sharing of the portion of the video may include sharing one or more images from the portion of the video and/or sharing one or more video clips from the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to share a single image from the portion of the video, share multiple images from the portion of the video, share a video clip corresponding to the portion of the video, and/or share video clips corresponding to the portion of the video. Other particular ways of sharing the portion of the video are contemplated.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include extraction of an image or a video clip from the portion of the video. One or more criteria may require one or more particular extractions of the portion of the video. A particular extraction of the portion of the video may include extracting one or more images from the portion of the video and/or extracting one or more video clips from the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to extract a single image from the portion of the video, extract multiple images from the portion of the video, extract a video clip corresponding to the portion of the video, and/or extract one or more video clips corresponding to the portion of the video. Other particular extractions of the portion of the video are contemplated.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include a visual modification of the portion of the video. One or more criteria may require one or more particular visual modifications of the portion of the video. A particular visual modification of the video may include one or more changes in visuals to one or more images and/or one or more video clips from the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to transform one or more visual characteristics of a single image from the portion of the video, multiple images from the portion of the video, a video clip corresponding to the portion of the video, and/or video clips corresponding to the portion of the video. One or more transformations of visual characteristics may include cropping, color changes, brightness changes, contrast changes, image warping, image blurring, aspect ratio changes, resolution changes, framerate changes, transition effects, special effects, addition of visuals (e.g., other image, video, text, color, etc.), removal of visuals, and/or other changes in visuals. Other particular visual modifications of the portion of the video are contemplated.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include a timing modification of the portion of the video. One or more criteria may require one or more particular timing modifications of the portion of the video. A particular timing modification of the video may include one or more changes in timing of one or more parts of the video. For example, one or more criteria may require a user interaction with the portion of the video to change a length and/or a speed of a portion of the video. A change in length and/or speed of a portion of a video may be local (specific to a part of the portion of the video) or global (applicable to the entire portion of the video). As non-limiting examples, a change in length of a portion of a video may include increasing the length of a portion of a video, decreasing the length of a portion of a video, trimming a portion of a video, and/or other changes in length of a portion of a video. As non-limiting examples, a change in speed of a portion of a video may include a slow motion effect, a fast motion effect, a time freeze effect, a speed ramp effect, and/or other changes in speed of a portion of a video. A slow motion effect may decrease the speed with which a portion of a video may be presented during playback. A fast motion effect may increase the speed with which a portion of a video may be presented during playback. A slow motion effect and/or a fast motion effect may be linear or dynamic. A time freeze effect may, for a duration of time, freeze a portion of a video. A speed ramp effect may decrease the speed with which a part of a portion of a video may be presented during playback and increase the speed with which another part of the portion of a video may be presented during playback. Other particular timing modifications of the portion of the video are contemplated.
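As a non-limiting illustration of the speed ramp effect described above, the following sketch remaps source frame timestamps to presentation timestamps, slowing one part of a portion and speeding up the other part. The function name, split point, and factor values are assumptions of this sketch:

```python
def apply_speed_ramp(frame_times, split_time, slow_factor=0.5, fast_factor=2.0):
    """Map source frame timestamps to presentation timestamps: frames up
    to `split_time` play at `slow_factor` speed (time stretched) and
    frames after it play at `fast_factor` speed (time compressed)."""
    presentation_times = []
    elapsed = 0.0
    previous = frame_times[0]
    for t in frame_times:
        factor = slow_factor if t <= split_time else fast_factor
        elapsed += (t - previous) / factor
        presentation_times.append(elapsed)
        previous = t
    return presentation_times

# Five frames at one-second intervals; the first half plays at 0.5x speed,
# the second half at 2x speed.
print(apply_speed_ramp([0.0, 1.0, 2.0, 3.0, 4.0], split_time=2.0))
# [0.0, 2.0, 4.0, 4.5, 5.0]
```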
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include an audio modification of the portion of the video. One or more criteria may require one or more particular audio modifications of the portion of the video. A particular audio modification of the video may include one or more changes in audio within the portion of the video. For example, one or more criteria may require a user interaction with the portion of the video to transform one or more audio characteristics of one or more audio tracks within the portion of the video. One or more transformations of audio characteristics may include changes to a beat, a tempo, a rhythm, a volume, a frequency, a start, an end, and/or other audio characteristics. Other particular audio modifications of the portion of the video are contemplated.
  • In some implementations, a user interaction with a portion of the video that satisfies one or more criteria for identifying a moment of interest within a video may include one or more of the user interactions described above and/or other user interactions with the portion of the video. For example, one or more criteria may require one or more particular visual modifications of the portion of the video and one or more sharing of the portion of the video.
  • Interaction information component 106 may be configured to receive interaction information. Interaction information may indicate one or more user interactions with a portion of a video. Interaction information may be received at a time or over a period of time. Interaction information may be received based on one or more user inputs. One or more user inputs may be received via one or more graphical user interfaces of one or more video applications. A video application may refer to one or more software applications, one or more software applications running on one or more hardware devices, and/or other applications operating to present video on a display. As a non-limiting example, a video application may include one or more of a video viewer, a video editor, and/or other video applications. As a non-limiting example, a video application may run on one or more of a mobile device, a desktop device, a camera, and/or other hardware.
  • For example, FIG. 4 illustrates an example of graphical user interface 400 of a video application. Graphical user interface 400 may include one or more of playback panel 401, control panel 402, storyboard panel 403, and/or other panels. Individual panels of graphical user interface 400 may present different information. For example, playback panel 401 may present playback of one or more portions of one or more videos. Control panel 402 may present playback options for playback and/or transformation options for transformation of one or more portions of one or more videos. For example, in FIG. 4, control panel 402 may present playback options and/or transformation options for one or more portions of video A 411, video B 412, and/or video C 413.
  • Storyboard panel 403 may present information related to one or more videos. For example, storyboard panel 403 may present information relating to titles, durations, dates, highlights, and/or other information relating to videos. For example, in FIG. 4, storyboard panel 403 may present information related to video A 411, video B 412, and video C 413. Storyboard panel 403 may present one or more videos based on the order and/or duration of the videos. For example, video A 411, video B 412, and video C 413 may be included in a video presentation. Storyboard panel 403 may present video A 411, video B 412, and video C 413 in the order in which they appear in the video presentation. Storyboard panel 403 may allow a user to change the order of videos. Storyboard panel 403 may present videos based on the durations of videos. For example, the sizes of video A 411, video B 412, and video C 413 in storyboard panel 403 may correspond to the durations of video A 411, video B 412, and video C 413 in the video presentation. Other appearances and types of graphical user interface/panels are contemplated.
  • Comparison component 108 may be configured to compare interaction information for a portion of a video with one or more criteria for identifying a moment of interest within the video. Based on the comparison, comparison component 108 may be configured to determine whether the interaction information for the portion of the video satisfies one or more criteria for identifying the moment of interest within the video.
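A minimal sketch of this comparison step, representing interaction information as a dictionary of interaction counts and each criterion as a predicate over that dictionary (all names below are assumptions of this illustration, not part of the disclosure):

```python
def satisfies_criteria(interaction_info, criteria):
    """Return True if the interaction information for a portion of a
    video satisfies any of the criteria for identifying a moment of
    interest within the video."""
    return any(criterion(interaction_info) for criterion in criteria)

criteria = [
    lambda info: info.get("view_count", 0) >= 2,        # repeated viewing
    lambda info: info.get("images_extracted", 0) >= 1,  # image extraction
]
print(satisfies_criteria({"view_count": 1, "images_extracted": 1}, criteria))  # True
print(satisfies_criteria({"view_count": 1}, criteria))                         # False
```

A system could equally require all criteria to be met (replace `any` with `all`), or combine criteria as in the compound example described above.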
  • Association component 110 may be configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in the video corresponding to the portion of the video with the moment of interest. A moment in the video corresponding to the portion of the video may correspond to a point in time within a video or a duration of time within the video. A point in time within a video may correspond to a beginning of the portion of the video, a center of the portion of the video, an end of the portion of the video, or other instant in time within the portion of the video.
  • An association of a moment in a video with a moment of interest may be presented via a graphical user interface of a video application. For example, one or more associations of moments in videos with moments of interest may be shown in storyboard panel 403 of FIG. 4. FIGS. 5A-5B illustrate examples of associations of moments in videos with moments of interest shown in storyboard panel 403. Associations of moments in videos with moments of interest may be shown as circles (e.g., highlight A 501, highlight B 502). Other displays of associations of moments in videos with moments of interests are contemplated.
  • In FIG. 5A, a moment within video A 411 may be associated with a moment of interest (highlight A 501) and a moment within video B 412 may be associated with a moment of interest (highlight B 502). Highlight A 501 and highlight B 502 may be associated by association component 110 based on comparison component 108 determining that interaction information for a portion of video A 411 corresponding to highlight A 501 satisfied one or more criteria and that interaction information for a portion of video B 412 corresponding to highlight B 502 satisfied one or more criteria. Video A 411, video B 412, and/or video C 413 may include other associations.
  • For example, interaction information for a portion of video A 411 may indicate a user interaction with the portion of video A 411 to save a still image (e.g., via image extraction) from video A. Based on this interaction information satisfying one or more criteria, association component 110 may associate a point in time corresponding to the still image with a moment of interest as highlight A 501. Interaction information for a portion of video B 412 may indicate a user interaction with the portion of video B 412 to select the portion of video B 412 (e.g., selecting a portion of video B 412 spanning a time interval between 0.5 seconds and 5 seconds). Based on this interaction information satisfying one or more criteria, association component 110 may associate a point in time corresponding to a center of the portion of video B 412 (e.g., 2.75 second mark of video B 412) with a moment of interest as highlight B 502.
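The mapping from a selected portion to a single point in time may be sketched as follows, reproducing the example above in which the center of a portion between 0.5 seconds and 5 seconds falls at the 2.75 second mark. The anchor options mirror the beginning/center/end choices described earlier; names are assumptions of this sketch:

```python
def moment_from_portion(start, end, anchor="center"):
    """Map a portion of a video (start/end in seconds) to a single point
    in time, using the beginning, the center, or the end of the portion."""
    if anchor == "beginning":
        return start
    if anchor == "end":
        return end
    return (start + end) / 2.0  # center of the portion

print(moment_from_portion(0.5, 5.0))  # 2.75
```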
  • In some implementations, a graphical user interface of a video application may display highlight moments based on user tagging. For example, graphical user interface 400 may display highlight moments based on direct markings of moments corresponding to portions of videos as moments of interest, user comments corresponding to portions of videos, user labeling corresponding to portions of videos, and/or other user tags. User tagging may be accomplished during capture of a video or after capture of a video.
  • In some implementations, association component 110 may be configured to prompt a user's approval of associating a moment in a video corresponding to a portion of the video with a moment of interest. Association component 110 may allow a user to individually approve association of individual moments in the video with moments of interest and/or approve some or all of the associations at once. For example, association component 110 may present to the user the associations made in a chronological order and/or based on types of moments of interest. Association component 110 may allow a user to accept all associations by saving a copy of a video with new associations. Responsive to reception of the user's approval, association component 110 may be configured to confirm the association of the moment in the video corresponding to the portion of the video with the moment of interest. Responsive to the user's refusal of approval, association component 110 may be configured to remove the association(s).
  • For example, in FIG. 5B, storyboard panel 403 may display highlight A 501, highlight B 502, and highlight C 503. Associations of highlight A 501 and highlight B 502 may have been confirmed by a user. Association of highlight C 503 may not have been confirmed by a user. Confirmed associations and unconfirmed associations may be displayed differently. For example, confirmed associations and unconfirmed associations may be shown in different colors (e.g., yellow for confirmed association and blue for unconfirmed association, etc.). Confirmed associations and unconfirmed associations may be shown in different shapes (e.g., circle for confirmed association and square for unconfirmed association, solid circle for confirmed association and dotted circle for unconfirmed association, etc.). Other ways of distinguishing confirmed associations from unconfirmed associations are contemplated.
  • In some implementations, associations of moments in a video with moments of interest may be shown in other parts of graphical user interface 400. For example, graphical user interface 400 may include video list panel 601 (shown in FIG. 6). Video list panel 601 may present one or more videos for inclusion in a video presentation. For example, video list panel 601 may allow a user to select video A 411, video B 412, video C 413, and/or other video 602 for inclusion in a video presentation shown in storyboard panel 403. Video list panel 601 may display associations of a moment in a video with a moment of interest. For example, in FIG. 6, video list panel 601 may display highlight A 501 in video A 411, highlight B 502 in video B 412, and highlight C 503 in video C 413.
  • In some implementations, association component 110 may be configured to indicate a type of moment of interest. A type of moment of interest may indicate criteria satisfied by the interaction information or a classification of criteria satisfied by the interaction information. For example, association component 110 may tag a highlight moment with information that indicates the criteria satisfied by the interaction information. Association component 110 may tag a highlight moment with information based on classification of criteria satisfied by the interaction information. In some implementations, the type of moment of interest may be shown on a graphical user interface of a video application.
  • In some implementations, association component 110 may be further configured to, responsive to interaction information for a portion of the video indicating one or more user interactions with the portion of the video satisfying one or more criteria, associate a moment in a high resolution video corresponding to the portion of the video with the moment of interest. The high resolution video may be characterized by a higher resolution than the video. The high resolution video may be characterized by a higher framerate than the video.
  • For example, camera 32 (shown in FIG. 3) may include a video and a high resolution video. The high resolution video may have been captured by camera 32 and the video (having a lower resolution and/or lower framerate than the high resolution video) may have been generated from the high resolution video. Camera 32 may access the video and obtain one or more criteria for identifying a moment of interest within the video. A user may operate camera 32 to interact with the video and camera 32 may compare interaction information for a portion of the video with one or more criteria. Responsive to the interaction information satisfying one or more criteria, camera 32 may associate a moment in the video corresponding to the portion of the video with the moment of interest and associate a moment in the high resolution video corresponding to the portion of the video with the moment of interest.
  • As another example, camera 32 may include a video and a high resolution video, and camera 32 may send the video to mobile device 33. Mobile device 33 may access the video and obtain one or more criteria for identifying a moment of interest within the video. A user may operate mobile device 33 to interact with the video and mobile device 33 may compare interaction information for a portion of the video with one or more criteria. Responsive to the interaction information satisfying one or more criteria, mobile device 33 may associate a moment in the video corresponding to the portion of the video with the moment of interest and associate a moment in the high resolution video corresponding to the portion of the video with the moment of interest. Mobile device 33 may indirectly associate the moment in the high resolution video with the moment of interest. For example, mobile device 33 may send one or more commands to camera 32 to associate the moment in the high resolution video with the moment of interest. Mobile device 33 may send information relating to association of the moment in the video with the moment of interest to camera 32. Camera 32 may use the information relating to association of the moment in the video with the moment of interest to associate the moment in the high resolution video with the moment of interest.
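A non-limiting sketch of associating the same moment in both the lower-framerate video and the high resolution video, assuming (as in the camera 32 example above) that both were captured on the same timeline so that the timestamp is unchanged and only the frame index differs. The framerate values and names are illustrative assumptions:

```python
def map_moment_to_high_res(moment_seconds, video_fps, high_res_fps):
    """Given a moment (in seconds) identified in a lower-framerate video,
    return the corresponding frame indices in both the video and the
    high resolution video captured on the same timeline."""
    video_frame = round(moment_seconds * video_fps)
    high_res_frame = round(moment_seconds * high_res_fps)
    return video_frame, high_res_frame

# A moment at the 2 second mark of a 30 fps proxy video corresponds to
# frame 60 in the proxy and frame 240 in a 120 fps high resolution source.
print(map_moment_to_high_res(2.0, 30, 120))  # (60, 240)
```

This is why mobile device 33 only needs to send the moment's timestamp (or equivalent association information) to camera 32 for the camera to tag the high resolution video.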
  • While the present disclosure may be directed to videos, one or more other implementations of the system may be configured for other types of media content. Other types of media content may include one or more of audio content (e.g., music, podcasts, audio books, and/or other audio content), multimedia presentations, photos, slideshows, and/or other media content.
  • Although processor 11 and electronic storage 12 are shown to be connected to an interface 13 in FIG. 1, any communication medium may be used to facilitate interaction between any components of system 10. One or more components of system 10 may communicate with each other through hard-wired communication, wireless communication, or both. For example, one or more components of system 10 may communicate with each other through a network. For example, processor 11 may wirelessly communicate with electronic storage 12. By way of non-limiting example, wireless communication may include one or more of radio communication, Bluetooth communication, Wi-Fi communication, cellular communication, infrared communication, or other wireless communication. Other types of communications are contemplated by the present disclosure.
  • Although processor 11 is shown in FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, processor 11 may comprise a plurality of processing units. These processing units may be physically located within the same device, or processor 11 may represent processing functionality of a plurality of devices operating in coordination. Processor 11 may be configured to execute one or more components by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor 11.
  • It should be appreciated that although computer components are illustrated in FIG. 1 as being co-located within a single processing unit, in implementations in which processor 11 comprises multiple processing units, one or more of computer program components may be located remotely from the other computer program components.
  • The description of the functionality provided by the different computer program components described herein is for illustrative purposes, and is not intended to be limiting, as any of computer program components may provide more or less functionality than is described. For example, one or more of computer program components 102, 104, 106, 108, and/or 110 may be eliminated, and some or all of its functionality may be provided by other computer program components. As another example, processor 11 may be configured to execute one or more additional computer program components that may perform some or all of the functionality attributed to one or more of computer program components 102, 104, 106, 108, and/or 110 described herein.
  • The electronic storage media of electronic storage 12 may be provided integrally (i.e., substantially non-removable) with one or more components of system 10 and/or removable storage that is connectable to one or more components of system 10 via, for example, a port (e.g., a USB port, a Firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 12 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage 12 may be a separate component within system 10, or electronic storage 12 may be provided integrally with one or more other components of system 10 (e.g., processor 11). Although electronic storage 12 is shown in FIG. 1 as a single entity, this is for illustrative purposes only. In some implementations, electronic storage 12 may comprise a plurality of storage units. These storage units may be physically located within the same device, or electronic storage 12 may represent storage functionality of a plurality of devices operating in coordination.
  • FIG. 2 illustrates method 200 for generating highlights for a video. The operations of method 200 presented below are intended to be illustrative. In some implementations, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. In some implementations, two or more of the operations may occur substantially simultaneously.
  • In some implementations, method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, a central processing unit, a graphics processing unit, a microcontroller, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on one or more electronic storage mediums. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200.
  • Referring to FIG. 2 and method 200, at operation 201, a video may be accessed. A video may be accessed from electronic storage and/or from other locations. In some implementations, operation 201 may be performed by a processor component the same as or similar to access component 102 (shown in FIG. 1 and described herein).
  • At operation 202, criteria for identifying a moment of interest within a video may be obtained. A moment of interest within a video may be identified based on a user interaction with a portion of the video. A user interaction with a portion of a video may include a consumption of the portion of the video, a transformation of the portion of the video, and/or other user interactions with the portion of the video. In some implementations, operation 202 may be performed by a processor component the same as or similar to criteria component 104 (shown in FIG. 1 and described herein).
  • At operation 203, interaction information indicating the user interaction with the portion of the video may be received. Interaction information may be received at a time or over a period of time. In some implementations, operation 203 may be performed by a processor component the same as or similar to interaction information component 106 (shown in FIG. 1 and described herein).
  • At operation 204, the interaction information for the portion of the video may be compared with the criteria. In some implementations, operation 204 may be performed by a processor component the same as or similar to comparison component 108 (shown in FIG. 1 and described herein).
  • At operation 205, responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criteria, a moment in the video corresponding to the portion of the video may be associated with the moment of interest. In some implementations, operation 205 may be performed by a processor component the same as or similar to association component 110 (shown in FIG. 1 and described herein).
  • Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.

Claims (20)

What is claimed is:
1. A system for generating highlights for a video, the system comprising:
one or more physical processors configured by machine-readable instructions to:
access the video;
obtain a criterion for identifying a moment of interest within the video based on a user interaction with a portion of the video, the criterion being satisfied based on the user interaction with the portion of the video including a user's extraction of visual content from the portion of the video;
receive interaction information indicating the user interaction with the portion of the video;
compare the interaction information for the portion of the video with the criterion; and
responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criterion, associate a moment in the video corresponding to the portion of the video with the moment of interest.
2. The system of claim 1, wherein the user interaction with the portion of the video includes a consumption of the portion of the video or a transformation of the portion of the video.
3. The system of claim 1, wherein the one or more physical processors are further configured to, responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criterion, associate a moment in a high resolution video corresponding to the portion of the video with the moment of interest, the high resolution video characterized by a higher resolution than the video.
4. The system of claim 1, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting a single image from the portion of the video.
5. The system of claim 1, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting multiple images from the portion of the video.
6. The system of claim 1, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting a single video clip from the portion of the video.
7. The system of claim 1, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting multiple video clips from the portion of the video.
8. The system of claim 1, wherein the criterion further requires the user interaction with the portion of the video to further include the user's transformation of the visual content extracted from the portion of the video.
9. The system of claim 8, wherein the user's transformation of the visual content includes visual modification or timing modification of the visual content.
10. The system of claim 1, wherein the criterion further requires the user interaction with the portion of the video to further include the user's sharing of the visual content extracted from the portion of the video.
11. A method for generating highlights for a video, the method comprising:
accessing the video;
obtaining a criterion for identifying a moment of interest within the video based on a user interaction with a portion of the video, the criterion being satisfied based on the user interaction with the portion of the video including a user's extraction of visual content from the portion of the video;
receiving interaction information indicating the user interaction with the portion of the video;
comparing the interaction information for the portion of the video with the criterion; and
responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criterion, associating a moment in the video corresponding to the portion of the video with the moment of interest.
12. The method of claim 11, wherein the user interaction with the portion of the video includes a consumption of the portion of the video or a transformation of the portion of the video.
13. The method of claim 11, further comprising, responsive to the interaction information for the portion of the video indicating the user interaction with the portion of the video satisfying the criterion, associating a moment in a high resolution video corresponding to the portion of the video with the moment of interest, the high resolution video characterized by a higher resolution than the video.
14. The method of claim 11, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting a single image from the portion of the video.
15. The method of claim 11, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting multiple images from the portion of the video.
16. The method of claim 11, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting a single video clip from the portion of the video.
17. The method of claim 11, wherein the user's extraction of the visual content from the portion of the video that satisfies the criterion includes the user extracting multiple video clips from the portion of the video.
18. The method of claim 11, wherein the criterion further requires the user interaction with the portion of the video to further include the user's transformation of the visual content extracted from the portion of the video.
19. The method of claim 18, wherein the user's transformation of the visual content includes visual modification or timing modification of the visual content.
20. The method of claim 11, wherein the criterion further requires the user interaction with the portion of the video to further include the user's sharing of the visual content extracted from the portion of the video.
US16/296,574 2016-05-09 2019-03-08 Systems and methods for generating highlights for a video Abandoned US20190206445A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/296,574 US20190206445A1 (en) 2016-05-09 2019-03-08 Systems and methods for generating highlights for a video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/150,400 US10229719B1 (en) 2016-05-09 2016-05-09 Systems and methods for generating highlights for a video
US16/296,574 US20190206445A1 (en) 2016-05-09 2019-03-08 Systems and methods for generating highlights for a video

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/150,400 Continuation US10229719B1 (en) 2016-05-09 2016-05-09 Systems and methods for generating highlights for a video

Publications (1)

Publication Number Publication Date
US20190206445A1 true US20190206445A1 (en) 2019-07-04

Family

ID=65633133

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/150,400 Active 2036-05-27 US10229719B1 (en) 2016-05-09 2016-05-09 Systems and methods for generating highlights for a video
US16/296,574 Abandoned US20190206445A1 (en) 2016-05-09 2019-03-08 Systems and methods for generating highlights for a video

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/150,400 Active 2036-05-27 US10229719B1 (en) 2016-05-09 2016-05-09 Systems and methods for generating highlights for a video

Country Status (1)

Country Link
US (2) US10229719B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261491B (en) * 2020-12-22 2021-04-16 北京达佳互联信息技术有限公司 Video time sequence marking method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090125559A1 (en) * 2007-11-02 2009-05-14 Fujifilm Corporation Method, apparatus and system for creating interest information
US20090157605A1 (en) * 2004-11-23 2009-06-18 Koninklijke Philips Electronics, N.V. Method and apparatus for managing files
US20130326406A1 (en) * 2012-06-01 2013-12-05 Yahoo! Inc. Personalized content from indexed archives
US20140282661A1 (en) * 2013-03-14 2014-09-18 Google Inc. Determining Interest Levels in Videos
US20160286235A1 (en) * 2013-12-06 2016-09-29 Huawei Technologies Co., Ltd. Image Decoding Apparatus, Image Coding Apparatus, and Coded Data Transformation Apparatus

Family Cites Families (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09181966A (en) 1995-12-22 1997-07-11 Olympus Optical Co Ltd Image processing method and device
US6633685B1 (en) 1998-08-05 2003-10-14 Canon Kabushiki Kaisha Method, apparatus, and storage media for image processing
US7047201B2 (en) 2001-05-04 2006-05-16 Ssi Corporation Real-time control of playback rates in presentations
US7970240B1 (en) 2001-12-17 2011-06-28 Google Inc. Method and apparatus for archiving and visualizing digital images
KR100866790B1 (en) 2002-06-29 2008-11-04 삼성전자주식회사 Method and apparatus for moving focus for navigation in interactive mode
JP4117616B2 (en) 2003-07-28 2008-07-16 ソニー株式会社 Editing system, control method thereof and editing apparatus
US20050108031A1 (en) 2003-11-17 2005-05-19 Grosvenor Edwin S. Method and system for transmitting, selling and brokering educational content in streamed video form
US8196168B1 (en) * 2003-12-10 2012-06-05 Time Warner, Inc. Method and apparatus for exchanging preferences for replaying a program on a personal video recorder
JP4232100B2 (en) * 2003-12-26 2009-03-04 ソニー株式会社 Playback apparatus and content evaluation method
JP3915988B2 (en) 2004-02-24 2007-05-16 ソニー株式会社 Information processing apparatus and method, recording medium, and program
JP4125252B2 (en) 2004-03-02 2008-07-30 株式会社東芝 Image generation apparatus, image generation method, and image generation program
US7512886B1 (en) 2004-04-15 2009-03-31 Magix Ag System and method of automatically aligning video scenes with an audio track
US8953908B2 (en) 2004-06-22 2015-02-10 Digimarc Corporation Metadata management and generation using perceptual features
JP4707368B2 (en) 2004-06-25 2011-06-22 雅貴 ▲吉▼良 Stereoscopic image creation method and apparatus
JP2006053694A (en) 2004-08-10 2006-02-23 Riyuukoku Univ Space simulator, space simulation method, space simulation program and recording medium
US20060080286A1 (en) 2004-08-31 2006-04-13 Flashpoint Technology, Inc. System and method for storing and accessing images based on position data associated therewith
US8774560B2 (en) 2005-01-11 2014-07-08 University Of Central Florida Research Foundation, Inc. System for manipulation, modification and editing of images via remote device
US7735101B2 (en) 2006-03-28 2010-06-08 Cisco Technology, Inc. System allowing users to embed comments at specific points in time into media presentation
JP4565115B2 (en) 2006-08-30 2010-10-20 独立行政法人産業技術総合研究所 Multifocal imaging device
US20080123976A1 (en) 2006-09-22 2008-05-29 Reuters Limited Remote Picture Editing
US7885426B2 (en) 2006-09-26 2011-02-08 Fuji Xerox Co., Ltd. Method and system for assessing copyright fees based on the content being copied
JP5007563B2 (en) 2006-12-28 2012-08-22 ソニー株式会社 Music editing apparatus and method, and program
US20080183608A1 (en) 2007-01-26 2008-07-31 Andrew Gavin Payment system and method for web-based video editing system
US20090027499A1 (en) 2007-07-23 2009-01-29 David Henry Nicholl Portable multi-media surveillance device and method for delivering surveilled information
JP2009053748A (en) 2007-08-23 2009-03-12 Nikon Corp Image processing apparatus, image processing program, and camera
US20090077459A1 (en) * 2007-09-19 2009-03-19 Morris Robert P Method And System For Presenting A Hotspot In A Hypervideo Stream
WO2009040538A1 (en) 2007-09-25 2009-04-02 British Telecommunications Public Limited Company Multimedia content assembling for viral marketing purposes
WO2009072466A1 (en) 2007-12-03 2009-06-11 National University Corporation Hokkaido University Image classification device and image classification program
JP2009230822A (en) * 2008-03-24 2009-10-08 Sony Corp Device, and method for editing content and program
US8520979B2 (en) 2008-08-19 2013-08-27 Digimarc Corporation Methods and systems for content processing
WO2010034063A1 (en) 2008-09-25 2010-04-01 Igruuv Pty Ltd Video and audio content system
KR101499498B1 (en) 2008-10-08 2015-03-06 삼성전자주식회사 Apparatus and method for ultra-high resoultion video processing
US20100161720A1 (en) 2008-12-23 2010-06-24 Palm, Inc. System and method for providing content to a mobile device
US8641621B2 (en) 2009-02-17 2014-02-04 Inneroptic Technology, Inc. Systems, methods, apparatuses, and computer-readable media for image management in image-guided medical procedures
US8412729B2 (en) 2009-04-22 2013-04-02 Genarts, Inc. Sharing of presets for visual effects or other computer-implemented effects
US8516101B2 (en) 2009-06-15 2013-08-20 Qualcomm Incorporated Resource management for a wireless device
JP5549230B2 (en) 2010-01-13 2014-07-16 株式会社リコー Ranging device, ranging module, and imaging device using the same
US20110206351A1 (en) 2010-02-25 2011-08-25 Tal Givoli Video processing system and a method for editing a video asset
JP5565001B2 (en) 2010-03-04 2014-08-06 株式会社Jvcケンウッド Stereoscopic imaging device, stereoscopic video processing device, and stereoscopic video imaging method
JP4787905B1 (en) 2010-03-30 2011-10-05 富士フイルム株式会社 Image processing apparatus and method, and program
US8606073B2 (en) 2010-05-12 2013-12-10 Woodman Labs, Inc. Broadcast management system
JP4865068B1 (en) 2010-07-30 2012-02-01 株式会社東芝 Recording / playback device, tag list generation method for recording / playback device, and control device for recording / playback device
US8849879B2 (en) 2010-07-30 2014-09-30 Avaya Inc. System and method for aggregating and presenting tags
US20140192238A1 (en) 2010-10-24 2014-07-10 Linx Computational Imaging Ltd. System and Method for Imaging and Image Processing
US8705866B2 (en) 2010-12-07 2014-04-22 Sony Corporation Region description and modeling for image subscene recognition
WO2012086120A1 (en) 2010-12-24 2012-06-28 パナソニック株式会社 Image processing apparatus, image pickup apparatus, image processing method, and program
WO2012109568A1 (en) 2011-02-11 2012-08-16 Packetvideo Corporation System and method for using an application on a mobile device to transfer internet media content
US9997196B2 (en) 2011-02-16 2018-06-12 Apple Inc. Retiming media presentations
US8954386B2 (en) 2011-03-22 2015-02-10 Microsoft Corporation Locally editing a remotely stored image
US20120283574A1 (en) 2011-05-06 2012-11-08 Park Sun Young Diagnosis Support System Providing Guidance to a User by Automated Retrieval of Similar Cancer Images with User Feedback
KR101797041B1 (en) 2012-01-17 2017-12-13 삼성전자주식회사 Digital imaging processing apparatus and controlling method thereof
US9189876B2 (en) 2012-03-06 2015-11-17 Apple Inc. Fanning user interface controls for a media editing application
US9041727B2 (en) 2012-03-06 2015-05-26 Apple Inc. User interface tools for selectively applying effects to image
US20130330019A1 (en) 2012-06-08 2013-12-12 Samsung Electronics Co., Ltd. Arrangement of image thumbnails in social image gallery
CN102768676B (en) 2012-06-14 2014-03-12 腾讯科技(深圳)有限公司 Method and device for processing file with unknown format
US9342376B2 (en) 2012-06-27 2016-05-17 Intel Corporation Method, system, and device for dynamic energy efficient job scheduling in a cloud computing environment
US20140152762A1 (en) 2012-06-28 2014-06-05 Nokia Corporation Method, apparatus and computer program product for processing media content
US20150156247A1 (en) 2012-09-13 2015-06-04 Google Inc. Client-Side Bulk Uploader
US8990328B1 (en) 2012-10-02 2015-03-24 Amazon Technologies, Inc. Facilitating media streaming with social interaction
JP2014106637A (en) 2012-11-26 2014-06-09 Sony Corp Information processor, method and program
US10447826B2 (en) * 2013-03-14 2019-10-15 Google Llc Detecting user interest in presented media items by observing volume change events
US20140280590A1 (en) * 2013-03-15 2014-09-18 Nevada Funding Group Inc. Systems, methods and apparatus for creating, managing and presenting a social contacts list
KR102161230B1 (en) 2013-05-28 2020-09-29 삼성전자주식회사 Method and apparatus for user interface for multimedia content search
US9542488B2 (en) 2013-08-02 2017-01-10 Google Inc. Associating audio tracks with video content
US20150071547A1 (en) 2013-09-09 2015-03-12 Apple Inc. Automated Selection Of Keeper Images From A Burst Photo Captured Set
WO2015123572A1 (en) * 2014-02-14 2015-08-20 Pluto Inc. Methods and systems for generating and providing program guides and content
US20150244972A1 (en) * 2014-02-27 2015-08-27 United Video Properties Inc. Methods and systems for determining lengths of time for retaining media assets
JP6438046B2 (en) 2014-04-04 2018-12-12 レッド.コム,エルエルシー Video camera with capture mode
US20160026874A1 (en) 2014-07-23 2016-01-28 Gopro, Inc. Activity identification in video
US9418283B1 (en) 2014-08-20 2016-08-16 Amazon Technologies, Inc. Image processing using multiple aspect ratios
US20160094601A1 (en) 2014-09-30 2016-03-31 The Nielsen Company (Us), Llc Methods and apparatus to measure exposure to streaming media
US20160189752A1 (en) 2014-12-30 2016-06-30 Yaron Galant Constrained system real-time capture and editing of video
US20160226804A1 (en) * 2015-02-03 2016-08-04 Google Inc. Methods, systems, and media for suggesting a link to media content
JP6455232B2 (en) 2015-03-02 2019-01-23 株式会社リコー Image processing system, processing execution control device, image formation output control device, control program for image processing system, and control method for image processing system

Also Published As

Publication number Publication date
US10229719B1 (en) 2019-03-12

Similar Documents

Publication Publication Date Title
US10297286B2 (en) System and methods to associate multimedia tags with user comments and generate user modifiable snippets around a tag time for efficient storage and sharing of tagged items
US10277861B2 (en) Storage and editing of video of activities using sensor and tag data of participants and spectators
US10129515B2 (en) Display control device, recording control device, and display control method
US8583725B2 (en) Social context for inter-media objects
US9881215B2 (en) Apparatus and method for identifying a still image contained in moving image contents
US10645468B1 (en) Systems and methods for providing video segments
US20100104146A1 (en) Electronic apparatus and video processing method
CN108449631B (en) Method, apparatus and readable medium for media processing
WO2017092324A1 (en) Method and system for displaying video segment
US8943020B2 (en) Techniques for intelligent media show across multiple devices
US20240179201A1 (en) Skipping the opening sequence of streaming content
US20190096439A1 (en) Video tagging and annotation
CN104735517B (en) Information display method and electronic equipment
CN111095939A (en) Identifying previously streamed portions of a media item to avoid repeated playback
CN105814905B (en) Method and system for synchronizing use information between the device and server
CN111385591A (en) Network live broadcast method, live broadcast processing method and device, live broadcast server and terminal equipment
CN105049768A (en) Video playback method of video recording equipment
US20190206445A1 (en) Systems and methods for generating highlights for a video
US10003834B1 (en) Enhanced trick mode to enable presentation of information related to content being streamed
CN106576181A (en) Method and system for backward recording
US20160104507A1 (en) Method and Apparatus for Capturing Still Images and Truncated Video Clips from Recorded Video
US11895377B1 (en) Systems and methods for presenting videos
CN111601125A (en) Recorded and broadcast video abstract generation method and device, electronic equipment and readable storage medium
KR102202099B1 (en) Video management method for minimizing storage space and user device for performing the same
US10360942B1 (en) Systems and methods for changing storage of videos

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOPRO, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATIAS, JOVEN;GEE, TYLER;BALNAVES, JAMES;SIGNING DATES FROM 20160422 TO 20160509;REEL/FRAME:048543/0397

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: GOPRO, INC., CALIFORNIA

Free format text: RELEASE OF PATENT SECURITY INTEREST;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:055106/0434

Effective date: 20210122