US20140289594A1 - Methods and Systems for Trimming Video Footage - Google Patents

Methods and Systems for Trimming Video Footage

Info

Publication number
US20140289594A1
Authority
US
Grant status
Application
Patent type
Prior art keywords
segment
qualitative
plurality
score
based
Prior art date
Legal status
Granted
Application number
US12564226
Other versions
US8856636B1 (en)
Inventor
Mike Iampietro
Arvinder Singh
Current Assignee
Adobe Systems Inc
Original Assignee
Adobe Systems Inc
Priority date
Filing date
Publication date

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00: Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10: Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/102: Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B 27/105: Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G11B 27/19: Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B 27/28: Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G11B 27/34: Indicating arrangements

Abstract

A media authoring application can be configured to use metadata associated with a media presentation to generate a score which serves to identify the interesting portions of the presentation and may thereby streamline the editing process. A method can comprise accessing metadata indicating at least one characteristic of a media presentation comprising a plurality of video segments, determining a qualitative score for at least one segment of the media presentation based on the metadata, and providing a user interface displaying the plurality of segments of the media file, the interface including a visual effect for at least one segment of the media file based on the segment's qualitative score. The method may comprise removing one or more segments based on comparing the score of the segment(s) to a parameter, such as a score threshold.

Description

    TECHNICAL FIELD
  • The disclosure below generally relates to media editing, particularly to displaying information regarding media presentations that include video content and editing same.
  • BACKGROUND
  • A user can edit or create a media presentation comprising video content using any number of software applications. Examples of media authoring applications include, but are not limited to, Adobe® Premiere® Pro® and Premiere® Elements™, available from Adobe Systems Incorporated of San Jose, Calif. In an exemplary media authoring application, the user is presented a timeline view and one or more tracks for positioning video, audio, and other components (e.g., effects overlays). Video components may be represented in the tracks using one or more frames of the video.
  • SUMMARY
  • Although media authoring applications can comprise powerful and effective tools for creating/editing presentations, the process of editing can become cumbersome to users, especially amateur users. In accordance with one or more aspects of the present subject matter, a media authoring application can be configured to use metadata associated with a media presentation to generate a score which serves to identify the portions of the presentation that are of interest (or not of interest) and may thereby streamline the editing process.
  • Embodiments include a method comprising accessing metadata indicating at least one characteristic of footage of a media presentation. The metadata may be provided, for example, by one or more other modules that analyze media presentations or portions thereof upon import or use. The method can comprise determining a qualitative score for at least one segment of the media file based on the metadata and providing a user interface displaying the plurality of segments of the media file. The interface can include a visual effect such as highlighting for at least one segment of the media file based on the segment's qualitative score. In some embodiments, the method can comprise removing one or more segments based on comparing the score of the segment(s) to a parameter, such as a score threshold.
  • As will be noted below, the score can be based on one or more of a lighting, focus, contrast, or camera shake characteristic in the segment and/or a degree of movement of one or more subjects in the segment, recognition of one or more faces in the segment, audio content of the segment, or zoom/focus on a subject in the segment. Additionally or alternatively, the score may take into account camera movement before the segment, camera movement during the segment, camera movement after the segment, and/or the presence or absence of keyword tags in the metadata for the segment. In some embodiments, the score for a segment comprises a score for each of a plurality of frames of the presentation.
  • Embodiments also include systems and computer-readable media implementing one or more aspects of the present subject matter. These illustrative embodiments are mentioned not to limit or define the limits of the present subject matter, but to provide examples to aid understanding thereof. Illustrative embodiments are discussed in the Detailed Description, and further description is provided there. Advantages offered by various embodiments may be further understood by examining this specification and/or by practicing one or more embodiments of the claimed subject matter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A full and enabling disclosure is set forth more particularly in the remainder of the specification. The specification makes reference to the following appended figures.
  • FIG. 1 is a block diagram illustrating an exemplary computing system configured to provide a user interface and qualitative score for use in editing a media presentation.
  • FIG. 2 is a diagram illustrating an example of a user interface for editing one or more segments of a media presentation.
  • FIG. 3 is a flow chart illustrating exemplary steps in a method of editing a media presentation.
  • FIG. 4 is a flow chart illustrating exemplary steps in a method of generating a qualitative score.
  • FIG. 5 is a diagram illustrating an example of a user interface including a visual effect based on a qualitative score for one or more segments of a media presentation.
  • DETAILED DESCRIPTION
  • Reference will now be made in detail to various and alternative exemplary embodiments and to the accompanying drawings. Each example is provided by way of explanation, and not as a limitation. It will be apparent to those skilled in the art that modifications and variations can be made. For instance, features illustrated or described as part of one embodiment may be used on another embodiment to yield a still further embodiment. Thus, it is intended that this disclosure includes modifications and variations as come within the scope of the appended claims and their equivalents.
  • In the following detailed description, numerous specific details are set forth to provide a thorough understanding of the claimed subject matter. However, it will be understood by those skilled in the art that claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure the claimed subject matter.
  • FIG. 1 is a block diagram illustrating an exemplary computing system 100 configured to provide a user interface and qualitative score for use in editing a media presentation. In this example, computing system 100 comprises a platform 102 having a processor 104 with access via bus 106 to a tangible computer-readable medium 108 embodying program components. In this example, the program components reside in memory, which can comprise RAM, ROM, Flash memory, or any other suitable storage medium. Additional components include input-output (I/O) handling components 110, a display 120, user input (UI) devices 116 (e.g., a mouse, keyboard, touch screen interface, etc.), one or more networking or other interfaces 118 (e.g., Ethernet, USB, etc.), and storage 114 (e.g., hard disk, optical drive(s)).
  • The program components in memory 108 comprise a media authoring application including a user interface module 122, a presentation management module 124, and a qualitative evaluation module 126. Memory 108 also includes presentation data 128 which can comprise data for providing a media presentation such as a video. As will be noted below, presentation data 128 can also comprise metadata describing one or more aspects of the media presentation that can be used by the media authoring application to provide an enhanced editing workflow. Memory 108 may include further components such as an operating system, runtime environment, and/or one or more other applications.
  • In some embodiments, user interface module 122 configures computing system 100 to provide a user interface for editing footage of a media presentation. The footage may comprise one or more portions having video and/or audiovisual content. For example, user interface module 122 may generate a timeline or other view for adjusting audio, visual, and other components of the presentation in response to user input including pointer gestures (e.g., mouse clicks, drags, hovers, etc.), keyboard input, and/or other input. In some embodiments, user interface module 122 also provides one or more visual effects to identify one or more portions of a media presentation as being of interest based on evaluation of presentation characteristics reflected in the metadata.
  • Presentation management module 124 represents functional components of the authoring application that are used to track and implement changes to media presentations. For example, presentation management module 124 can be configured to handle the details of locating specified video components, generating overlay and transition effects, and otherwise providing media editing functionality as is known in the art. In accordance with some aspects of the present subject matter, presentation management module 124 can be used to access metadata and, in some instances, update the metadata based on qualitative evaluations. Additionally, presentation management module 124 can be configured to perform automatic edits based on qualitative evaluations, including, but not limited to, removing one or more segments of a media presentation that are not sufficiently interesting as reflected in qualitative scores for the segments and/or other analysis.
  • Qualitative evaluation module 126 configures computing system 100 to access metadata associated with the media presentation and use the metadata to determine a qualitative score for at least one video segment of the media presentation as a measure of how interesting the segment is. In some embodiments, a qualitative score is determined for each frame of the media presentation by accumulating score values for various factors identified from an analysis of metadata associated with the frame. The measure of whether a segment is interesting can be based on any number of factors, including, but not limited to, factors representative of the quality of the segment, the content of the segment, and/or the content of metadata, such as keywords associated with the segment.
  • Metadata used by qualitative evaluation module 126 may be included in the presentation data 128 and/or may be generated by the media authoring application in accordance with various techniques, such as object recognition/tracking algorithms, speech-to-text processing, facial recognition algorithms, image evaluation algorithms, and the like. Regardless of the source of the metadata, qualitative evaluation module 126 can identify the presence, absence, or values associated with each of one or more factors and then use data correlating factors to score values to generate a score representative of the degree to which the segment under consideration is to be considered of interest (i.e., interesting).
  • In the example of FIG. 1 above, the media authoring application was depicted at a local device. Embodiments include a media application that is provided remotely, such as via a web service. For instance, a client may use a browser or client-side application (e.g., an Adobe® Flash® or Air® client) to interface with a web service that provides data for generating the user interface at the client and responses to input relayed via the client, with the scoring and management functions handled by one or more computing devices of the web service. Additionally, the functionality attributed to different modules is for purposes of example only, and in various embodiments functionality may be distributed differently.
  • FIG. 2 is a diagram illustrating an example of a user interface 200 for editing one or more segments of a media presentation. In this example, interface 200 includes a preview area 202 which may depict the media presentation during playback or may indicate a still image of a selected frame. Information area 204 may display the file name for the media presentation and other information such as file type, path, and other information. Playback controls 206 may include play, pause, rewind, fast-forward, volume, and other controls.
  • Edit area 208 can include a plurality of tracks 210, 212, 214, 216, 218, and 220 for assembling various audio, visual, and other components into a composite media presentation. In this example, the components are laid out along a timeline T. A play head 222 may advance along timeline T from left to right as the media presentation is played and may also be positionable to select a given portion of the presentation.
  • In this example, the media presentation includes a plurality of video components 224 placed in track 212 and a plurality of audio components 226, 228, 230, and 232 placed in audio tracks 214, 216, 218, and 220. Track 210 may be used, for example, to place components corresponding to visual effects (e.g., text overlays, special effects, etc.). Each component can correspond to a respective file or portion of a file accessible by the media authoring application, for instance. In this example, a plurality of video components are shown, although the subject matter is applicable to analysis of segments of a single video component. Additionally, in practice, more or fewer tracks may be supported by the media authoring application and/or another visual representation of the components of the media presentation can be used, such as a list view, icon view, etc.
  • As shown at 224A, 224B, and 224C, video components 224 may be depicted as a series of frames or key frames representing portions of the video footage. In accordance with the present subject matter, visual effects 225A, 225B, and 225C are added to reflect the relative interest of segments of the video components as determined from the metadata associated with the respective segments. As indicated by the different cross-hatching, visual effects 225A, 225B, and 225C represent different colors in a color bar that is parallel to the frames 224 representing the video segments. As another example, the visual effects may comprise highlighting over the frames themselves.
  • For instance, effect 225A may indicate that a segment from T0 to T4 is of low interest by using a first color, such as red. Visual effect 225B may indicate that a segment from T6 to T8 is of high interest using a different color, such as green. Visual effect 225C may indicate that a segment from T10 to T12 is of moderate interest using a third color, such as yellow. It will be understood that any suitable color scheme can be used and the scheme could be defined by a user of the application.
  • Any suitable visual effect(s) can be used. In addition to or instead of a color bar or highlighting frames, a textual or numerical overlay could be included indicating an interest level in terms of score or a descriptor mapped to a score (e.g., “interesting,” “moderately interesting,” “not interesting,” etc.). As a further example, animation, ghosting-out of content with low score values, or other effects could be used.
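The score-to-effect mapping described above can be sketched as follows. The thresholds, descriptor strings, and colors are illustrative assumptions; the disclosure leaves the scheme user-definable.

```python
def interest_effect(score, low_threshold=0, high_threshold=5):
    """Map a segment's qualitative score to a (descriptor, bar color) pair.
    Thresholds and colors are hypothetical, not values from the disclosure."""
    if score >= high_threshold:
        return ("interesting", "green")
    if score >= low_threshold:
        return ("moderately interesting", "yellow")
    return ("not interesting", "red")
```

A user-defined scheme could simply substitute different thresholds or colors here.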
  • FIG. 3 is a flow chart illustrating exemplary steps in a method 300 for editing a media presentation. Block 302 represents accessing metadata included in or associated with one or more media files of a media presentation. For example, the metadata may be generated by a media authoring application and/or may be included with the media file(s) by the device(s) that capture the files. Block 304 represents determining a qualitative score for at least one segment of the media presentation by using the metadata.
  • As discussed further below, block 306 represents indicating the score results in the user interface using one or more visual or other effects, while block 308 represents a step taken in some embodiments to remove one or more portions of the media presentation, or otherwise edit the content of the presentation, based on comparing the score to one or more parameters.
  • Returning to block 304, the score can be determined in any number of ways. As an example, in some embodiments the metadata includes information relating to the number of subjects in a segment. For example, object or shape recognition algorithms may provide a count for the number of people or other subjects of interest and include that information in the metadata. A score value can be generated by applying a multiplier to the number of subjects recognized in the segment. As another example, the qualitative analysis module can be configured to identify a change in a number of subjects depicted in the video segment as compared to an earlier segment from the metadata and use the change in determining the qualitative score.
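The subject-count scoring just described can be sketched as below; the multiplier and change bonus are hypothetical values chosen for illustration.

```python
SUBJECT_MULTIPLIER = 1.5   # hypothetical weight per recognized subject
CHANGE_BONUS = 1.0         # hypothetical bonus when the subject count changes

def subject_score(subject_count, previous_count=None):
    """Score contribution from the number of subjects recognized in a
    segment, plus a bonus if the count changed versus the prior segment."""
    score = SUBJECT_MULTIPLIER * subject_count
    if previous_count is not None and subject_count != previous_count:
        score += CHANGE_BONUS
    return score
```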
  • In some embodiments, the amount of motion in a segment influences the score. For example, motion of one or more subjects may correlate to a higher score, with the amount of motion reflected in the metadata based on the number of moving subjects, speed, and other considerations.
  • As another example, the qualitative evaluation module can be configured to identify facial recognition results in the metadata and use the facial recognition results in determining the qualitative score. For example, the score may be increased based on the recognition of any faces so as to identify portions of the presentation including human faces as being of interest. As another example, the score may be increased (or decreased) based on recognition of specific faces. For example, the qualitative evaluation module may access a list of identifiers of faces of interest and increase the score for one or more segments found to contain the faces of interest. In addition to or instead of facial recognition, metadata indicating recognition of other subjects (e.g., animals, objects, scenes) may be used.
  • In some embodiments, the qualitative evaluation module is configured to identify a keyword of interest and use the presence or absence of the keyword of interest in the metadata for the segment in determining the qualitative score. For example, the metadata may include keywords input by one or more users and/or as the result of automatic analysis of content of the segment, such as speech-to-text conversion, optical character recognition, or input of a script or other data regarding the segment. If a segment includes keywords identified by a user as correlating to interesting segments, segments with those keywords may be scored higher than segments without those keywords.
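Keyword-based scoring of this kind might look like the following sketch; the per-match bonus is an assumption, not a value from the disclosure.

```python
def keyword_score(segment_keywords, keywords_of_interest, bonus_per_match=2):
    """Score contribution from user-identified keywords found in a segment's
    metadata. Segments containing keywords of interest score higher than
    segments without them; the bonus value is illustrative."""
    matches = set(segment_keywords) & set(keywords_of_interest)
    return bonus_per_match * len(matches)
```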
  • As a further example, the qualitative evaluation module can rely on audio metadata. For instance, the qualitative evaluation module may be configured to identify an audio characteristic indicated in the metadata such as the number of speakers, language(s) spoken, or the presence or absence of speech and use the audio characteristic in determining the qualitative score. For instance, a segment containing speech of any kind may receive a higher score than a segment with no speech or inaudible speech. As another example, particular words, phrases, or other audio content may be reflected in the metadata and used to increase or decrease the score of a segment accordingly.
  • As yet another example, the qualitative evaluation module may be configured to use metadata identifying one or more characteristics related to the quality of the content of the segment. For instance, the metadata may reflect the lighting level, focus or lack thereof, contrast level, or degree of camera shake in the segment. Factors indicating higher quality (e.g., high contrast, low or no shake, etc.) may result in a higher score for the segment.
  • In some embodiments, the qualitative evaluation module can use metadata reflecting camera or other capture device activity in determining a score for a segment. Information on camera motion and activity can be accessed in some embodiments from tags included in the metadata, the tags denoting information such as the type of motion, direction, and speed.
  • For example, metadata regarding camera motion before, during, and/or after the segment can be used to infer whether the camera zooms in on a subject or zooms out for a wider view. Segments comprising the zoomed-in view and the wide view may receive higher scores than the segment(s) comprising zoom-in-progress, with the zoomed-in view receiving the highest score. As another example, camera motion such as panning can be interpreted for scoring purposes. A fast pan may indicate that the segment at the end of the pan motion is interesting, and so the segment may receive a higher score. A slow pan can be interpreted as indicating that the segment(s) comprising the pan are of interest.
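The camera-motion interpretation above can be expressed as a lookup from inferred motion phases to score contributions. The phase labels and score values are assumptions for illustration only.

```python
# Hypothetical per-phase scores inferred from camera-motion metadata: the
# zoomed-in view scores highest, the wide view next, and zoom-in-progress
# lowest; the segment at the end of a fast pan also scores highly.
PHASE_SCORES = {"wide": 2, "zooming": 0, "zoomed_in": 3,
                "fast_pan": 0, "after_fast_pan": 3, "slow_pan": 2}

def motion_scores(phases):
    """Map a sequence of camera-motion phase labels to score contributions;
    unknown phases receive a neutral value."""
    return [PHASE_SCORES.get(phase, 1) for phase in phases]
```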
  • Returning to block 306 of FIG. 3, once the qualitative score for a segment is available, the score can be indicated in the user interface using one or more visual or other effects. For example, a color bar indicating interest level using different colors adjacent to corresponding segments can be used or the segments may be highlighted or annotated as noted above.
  • Block 308 represents removing one or more portions or otherwise editing the media presentation based on comparing the scores of respective segments to a parameter or parameters. For example, the score for a segment may be compared to a threshold score and, if the score is below the threshold, the segment may be identified for removal. In some embodiments, the segment may be removed automatically or removal may be confirmed by user input (e.g., via a dialog box). The parameter(s) may be associated with logic specifying desired editing actions including, but not limited to, removal of uninteresting footage. For example, a user may provide a rule that the play rate for uninteresting segments is to be increased.
  • As another example, the media authoring application may include an interface such as a dialog in which a user can specify desired attributes of a final cut of a media presentation, such as a specific duration. The media authoring application can then determine a score for portions of footage input to the application and remove enough footage to arrive at the specified duration, starting from the least-interesting segments as indicated by scoring results.
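One way to realize the duration-targeting behavior is to drop segments in ascending score order until the remaining footage fits. This greedy sketch assumes per-segment (duration, score) pairs; it is an illustration, not the disclosed implementation.

```python
def trim_to_duration(segments, target_duration):
    """segments: list of (duration, score) tuples in presentation order.
    Greedily remove the least-interesting segments until the remaining
    total duration is at most target_duration, preserving original order."""
    total = sum(duration for duration, _ in segments)
    # Candidate removals, least interesting first
    by_score = sorted(range(len(segments)), key=lambda i: segments[i][1])
    removed = set()
    for i in by_score:
        if total <= target_duration:
            break
        total -= segments[i][0]
        removed.add(i)
    return [seg for i, seg in enumerate(segments) if i not in removed]
```

For example, three 10-second segments scored 5, 1, and 3 trimmed to a 20-second target would lose only the segment scored 1.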
  • In addition to or instead of a threshold score, the removal/editing parameter(s) may include more complex logic, such as rules or parameters specifying whether to keep or discard footage based on the presence or absence of specific attributes or content. As an example, Boolean logic can be used to specify the threshold for removal or other editing in closer detail. For instance, a user may specify that uninteresting footage is to be marked for removal subject to (i) the score being below the threshold and (ii) there being no dialog in the footage. As another example, the conditions may specify that footage is to be marked for removal if (i) it has a score below the threshold and (ii) there are no facial recognition results showing a face of interest. Thus, footage that would otherwise be removed may be retained.
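The second Boolean rule in that passage could be written as follows; the field names are assumptions used for illustration.

```python
def marked_for_removal(segment, threshold):
    """Illustrative Boolean removal rule: mark a segment for removal only
    when its score is below the threshold AND it contains no dialog AND no
    face of interest was recognized. Dictionary keys are hypothetical."""
    return (segment["score"] < threshold
            and not segment["has_dialog"]
            and not segment["has_face_of_interest"])
```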
  • Editing decisions may be based on a segment but may affect portions of the media presentation outside the segment. For example, video footage of a media presentation may be included in one or more video components. A segment of interesting (or uninteresting) content may occupy some or all of a component or may span multiple components. If, for example, uninteresting content is to be trimmed, the respective portions of the video components may be trimmed so that only the portions with interesting content remain. However, in some embodiments, an entire component can be retained or removed based on the presence or absence of interesting (or uninteresting) content in a segment included in the component.
  • FIG. 4 is a flow chart illustrating exemplary steps in a method 400 of generating a qualitative score for a segment. In this example, a score value is accumulated for each frame based on analyzing multiple factors.
  • At block 402, a scoring schema is accessed. The scoring schema can comprise data mapping characteristics to scoring behaviors and may be selectable or modifiable by a user. For example, the scoring schema may specify the lighting, contrast, and other quality conditions for providing a positive score. Additionally, the scoring schema may identify faces and keywords of interest and other characteristics and characteristic values as well as the influence of various characteristics on the scoring process. For example, facial recognition results may be weighted more heavily than camera shake.
  • At block 404, metadata for a frame is accessed, and at block 406 one or more score values for the characteristics reflected in the metadata can be identified. For instance, positive score values may be associated with high quality/interesting footage, while negative score values may be associated with low quality/uninteresting footage as set forth in the schema. Block 408 represents accumulating the score values and assigning an aggregate score to the frame. In some embodiments, Boolean logic may factor into the schema and/or aggregation process.
  • Block 410 represents a check for additional frames. If more frames remain to be analyzed, then block 412 represents looping through the next frame until all frames of the presentation or portion thereof have been evaluated.
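The per-frame accumulation loop of blocks 404-412 can be sketched as below. The schema contents are illustrative; as noted above, the disclosure leaves them user-selectable.

```python
def score_frames(frame_metadata, schema):
    """Accumulate an aggregate score per frame: for each characteristic
    reflected in a frame's metadata, look up its score value in the scoring
    schema (positive for interesting, negative for uninteresting) and sum."""
    return [sum(schema.get(c, 0) for c in frame) for frame in frame_metadata]

# Hypothetical schema and per-frame characteristics
schema = {"face": 3, "dialog": 2, "shake": -3, "low_light": -2}
frames = [{"shake", "low_light"}, {"face", "dialog"}, {"face"}]
print(score_frames(frames, schema))  # [-5, 5, 3]
```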
  • In some embodiments, additional processing can occur once scores have been assigned to a plurality of frames. For example, smoothing may be performed to remove noise in the score values such as sharp spikes or drops in the score between frames in a group having otherwise similar scores. The smoothed set of scores may allow for more effective indications of interesting segments and/or cleaner edits.
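A simple way to perform such smoothing is a centered moving average over the per-frame scores; the window size here is an illustrative choice, not one specified in the disclosure.

```python
def smooth_scores(scores, window=5):
    """Centered moving average that suppresses isolated spikes or drops in
    per-frame scores. Near the ends, the window shrinks to fit."""
    half = window // 2
    smoothed = []
    for i in range(len(scores)):
        lo, hi = max(0, i - half), min(len(scores), i + half + 1)
        smoothed.append(sum(scores[lo:hi]) / (hi - lo))
    return smoothed
```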
  • As was noted above, in some embodiments, visual effects are generated to indicate trimmable segments based on a qualitative score that does not meet a threshold value. However, the trimmable segments may be derived from a plurality of shorter segments that are then merged to create resultant segments, with smoothing occurring during the merge process. For instance, the media presentation may be treated as one continuous set of footage and then segments analyzed on a frame-by-frame basis as noted above. Then, short groups of frames with similar scores can be defined, while ignoring very short (e.g., 2-3 frames) runs of frames having scores diverging from the surrounding groups of frames. The short groups (along with very short groups included therein) can be merged into resultant segments that are used as the basis for indicating trimmable segments.
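The merge step above, in which very short divergent runs of frames are absorbed into their surroundings, might be sketched like this. The minimum run length and the label values are assumptions.

```python
def frame_runs(labels):
    """Group consecutive equal labels into (start, end, label) runs."""
    runs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            runs.append((start, i, labels[start]))
            start = i
    return runs

def merge_into_segments(labels, min_run=4):
    """Absorb very short runs (e.g., 2-3 frames with divergent scores) into
    the preceding segment, then merge adjacent segments sharing a label,
    yielding the resultant segments used to indicate trimmable footage."""
    merged = []
    for start, end, label in frame_runs(labels):
        if merged and end - start < min_run:
            prev_start, _, prev_label = merged[-1]
            merged[-1] = (prev_start, end, prev_label)
        elif merged and merged[-1][2] == label:
            prev_start, _, _ = merged[-1]
            merged[-1] = (prev_start, end, label)
        else:
            merged.append((start, end, label))
    return merged
```

For instance, a two-frame "trim" run sandwiched between longer "keep" runs would be absorbed, leaving one continuous "keep" segment.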
  • FIG. 5 is a diagram 500 illustrating an example of a user interface including a visual effect based on a qualitative score for one or more segments of a media presentation. In this example, an interface 502 is presented including frames 504, 506, 508, 510, and 512, with each frame corresponding to a segment of a video presentation. In practice, each segment may be represented by a plurality of frames rather than the single frames shown here.
  • Frame 504 represents an initial segment, such as when a video recording device is initially turned on and positioned. As indicated at 514, a first subject is visible, but the horizon is skewed and, as shown at 516, the camera is shaking. In the next segment, exemplified by frame 506, the first subject 514 is visible and the shaking has stopped. Moving into the third segment, indicated by frame 508, the camera has panned to depict a second subject 516. In the fourth segment, the camera zooms in on second subject 516, as shown in frame 510. In frame 512, the presentation fades to black as shown at 520.
  • Metadata for frames 504, 506, 508, 510, and 512 may include information regarding the presentation that can be used in generating scores as noted above. For example, metadata for frame 504 and related frames may indicate the degree of camera shake and the rotational motion of the camera during the initial segment as the camera is set up. This may result in a low score for the initial segment. Facial recognition analysis may identify that frame 506 and surrounding frames of the second segment include a human face which, in conjunction with audio, may result in a higher score for the second segment.
  • The third segment, as indicated by frame 508, features two human faces. Assuming that the third segment also includes dialog or other activity, the score for the third segment may be higher than both the first and second segments. The fourth segment may receive a still higher score based on the zoom to subject 516, which may be used to infer a high interest level. Additionally, facial recognition results in the metadata may identify that second subject 516 matches faces in other videos and/or matches a face identified as being of interest. The final segment, as indicated by frame 512, may receive a low score due to the low or absent lighting and the lack of visible subjects.
  • Interface 502 may be presented in a media authoring or other application with highlighting or another visual effect indicating that frames 506, 508, and 510 are interesting and/or an effect indicating that frames 504 and 512 are not interesting. Additionally or alternatively, the degree of interest (disinterest) may be reflected using different colors, numerical scores, or other identifiers. In some embodiments, frames 504 and 512 may be automatically trimmed from the media presentation with or without confirmation by a user.
  • Interface 502 also includes another example of a visual effect. As shown at 518, a tooltip has been provided in response to a user gesture with respect to frame 510. In this example, tooltip 518 indicates the time index (03:35:46) for the frame and factors used in deriving the score for the frame (in this example, “zoom++”, “face+++”, and “dialog++”). No negative factors are shown in this example, but negative factors may be displayed; raw score information may be provided in addition to or instead of the “+” indicators of this example.
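A tooltip string like the one shown at 518 can be composed from per-factor weights, with "+" marks for positive contributions and "-" marks for negative ones. The function names and the integer-weight representation below are assumptions made for illustration.

```python
def factor_label(name: str, weight: int) -> str:
    """Render one factor as sign marks, e.g. ('zoom', 2) -> 'zoom++'."""
    marks = "+" * weight if weight >= 0 else "-" * -weight
    return name + marks

def tooltip_text(time_index: str, factors: dict) -> str:
    """Compose a tooltip like the one shown at 518 for frame 510."""
    labels = (factor_label(name, weight) for name, weight in factors.items())
    return time_index + " " + " ".join(labels)

tip = tooltip_text("03:35:46", {"zoom": 2, "face": 3, "dialog": 2})
# -> "03:35:46 zoom++ face+++ dialog++"
```

Negative factors would render as, e.g., `shake--`, and raw scores could be appended alongside the marks.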
  • General Considerations
  • Some portions of the detailed description were presented in terms of algorithms or symbolic representations of operations on data bits or binary digital signals stored within a computing system memory, such as a computer memory. These algorithmic descriptions or representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally is, considered to be a self-consistent sequence of operations or similar processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities.
  • Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals or the like. It should be understood, however, that all of these and similar terms are to be associated with appropriate physical quantities and are merely convenient labels.
  • Unless specifically stated otherwise, as apparent from the foregoing discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a computing platform, such as one or more computers and/or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
  • The various systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general-purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
  • Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
  • As noted above, a computing device may access one or more computer-readable media that tangibly embody computer-readable instructions which, when executed by at least one computer, cause the at least one computer to implement one or more embodiments of the present subject matter. When software is utilized, the software may comprise one or more components, processes, and/or applications. Additionally or alternatively to software, the computing device(s) may comprise circuitry that renders the device(s) operative to implement one or more of the methods of the present subject matter.
  • Examples of computing devices include, but are not limited to, servers, personal computers, personal digital assistants (PDAs), cellular telephones, televisions, television set-top boxes, portable music players, and consumer electronic devices such as cameras, camcorders, and mobile devices. Computing devices may be integrated into other devices, e.g., “smart” appliances, automobiles, kiosks, and the like.
  • The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein may be implemented using a single computing device or multiple computing devices working in combination. Databases and applications may be implemented on a single system or distributed across multiple systems. Distributed components may operate sequentially or in parallel.
  • When data is obtained or accessed as between a first and second computer system or components thereof, the actual data may travel between the systems directly or indirectly. For example, if a first computer accesses data from a second computer, the access may involve one or more intermediary computers, proxies, and the like. The actual data may move between the first and second computers, or the first computer may provide a pointer or metafile that the second computer uses to access the actual data from a computer other than the first computer, for instance. Data may be “pulled” via a request, or “pushed” without a request in various embodiments.
  • The present disclosure also references communicating data between components or systems. It should be appreciated that such communications may occur over any suitable number or type of networks or links, including, but not limited to, a dial-in network, a local area network (LAN), a wide area network (WAN), the public switched telephone network (PSTN), the Internet, an intranet, or any combination of hard-wired and/or wireless communication links.
  • Any suitable tangible computer-readable medium or media may be used to implement or practice the presently-disclosed subject matter, including, but not limited to, diskettes, drives, magnetic-based storage media, optical storage media (including CD-ROMs, DVD-ROMs, and variants thereof), flash memory, RAM, ROM, and other memory devices.
  • The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
  • While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.

Claims (26)

  1. A computer-implemented method comprising:
    accessing metadata indicating at least one characteristic of footage of a media presentation, the footage comprising a plurality of segments;
    determining a qualitative score for at least one segment of the footage based on the metadata; and
    providing a user interface displaying the plurality of segments, wherein the user interface includes a visual effect for each segment selected from a plurality of visual effects based at least in part on the segment's qualitative score relative to a threshold value, wherein selections of visual effects for corresponding video segments vary based on qualitative scores of the corresponding video segments relative to a plurality of thresholds.
  2. The computer-implemented method set forth in claim 1, wherein the qualitative score is determined for a plurality of frames of the media presentation.
  3. The computer-implemented method set forth in claim 1, wherein the qualitative score for the segment is determined based on metadata indicating at least one of a lighting, focus, contrast, or camera shake characteristic in the segment.
  4. The computer-implemented method set forth in claim 1, wherein the qualitative score for the segment is determined based on metadata indicating at least one of:
    a degree of movement of one or more subjects in the segment, recognition of one or more faces in the segment, audio content of the segment, or zoom/focus on a subject in the segment.
  5. The computer-implemented method set forth in claim 1, wherein the qualitative score for the segment is based on evaluating at least one of camera movement before the segment, camera movement during the segment, or camera movement after the segment.
  6. The computer-implemented method set forth in claim 1, wherein the metadata comprises one or more keyword tags and the qualitative score for the segment is determined based on identifying one or more keyword tags of interest in the metadata for the segment.
  7. The computer-implemented method set forth in claim 1, further comprising:
    comparing the qualitative score for the segment to a parameter; and
    removing the segment from the media presentation if the segment does not meet the parameter.
  8. A computing system comprising a processor with access to a tangible computer-readable medium embodying program components, the program components comprising:
    a user interface module that configures the computing system to provide a user interface for editing a media presentation comprising a plurality of video segments; and
    a qualitative evaluation module that configures the computing system to access metadata associated with the media presentation and use the metadata to determine a plurality of qualitative scores for the plurality of video segments of the media presentation, and
    wherein the user interface module generates a plurality of visual effects for the plurality of video segments based at least in part on the plurality of qualitative scores and a plurality of threshold values, wherein the plurality of visual effects for the plurality of video segments vary based on the plurality of qualitative scores relative to the plurality of threshold values.
  9. The computing system set forth in claim 8, further comprising a content management module that configures the computing system to automatically remove the video segment based on comparing the qualitative score to a removal parameter.
  10. The computing system set forth in claim 8, wherein the user interface module further configures the computing system to provide a visual effect in the user interface, the visual effect associated with a video segment of the media presentation and provided based on the qualitative score for the video segment, wherein the visual effect renders information included in the metadata for the video segment used to determine the qualitative score for the respective video segment.
  11. The computing system set forth in claim 8, wherein the qualitative evaluation module is configured to identify a change in a number of subjects depicted in the video segment from the metadata and use the change in determining a qualitative score for at least one video segment of the media presentation.
  12. The computing system set forth in claim 8, wherein the qualitative evaluation module is configured to identify facial recognition results in the metadata and use the facial recognition results in determining a qualitative score for at least one video segment of the media presentation.
  13. The computing system set forth in claim 8, wherein the qualitative evaluation module is configured to identify a keyword of interest and use the presence or absence of the keyword of interest in determining a qualitative score for at least one video segment of the media presentation.
  14. The computing system set forth in claim 8, wherein the qualitative evaluation module is configured to identify an audio characteristic in the metadata and use the audio characteristic in determining a qualitative score for at least one video segment of the media presentation.
  15. The computing system set forth in claim 8, wherein the qualitative evaluation module is configured to use metadata identifying at least one of the lighting, focus, contrast, or camera shake in determining a qualitative score for at least one video segment of the media presentation.
  16. A computer program product comprising a non-transitory computer readable medium tangibly embodying program code comprising:
    program code for accessing metadata indicating at least one characteristic of a media presentation comprising a plurality of segments;
    program code for determining a qualitative score for at least one segment of the media file based on the metadata;
    program code for storing data identifying the qualitative score and the segment in a computer-readable medium; and
    program code for providing a user interface displaying the plurality of segments, wherein the user interface includes a visual effect for at least one segment based on the segment's qualitative score, wherein visual effects for the plurality of segments change based on qualitative scores of the plurality of segments relative to a plurality of thresholds; and
    program code for providing a timeline view of the plurality of segments and the visual effect for the at least one segment included in the user interface.
  17. The computer program product set forth in claim 16, further comprising:
    program code for providing the user interface comprising a timeline view of a media presentation wherein the visual effect comprises a coloring effect that includes one of a plurality of colors selected based at least in part on the qualitative score for the respective segment relative to a threshold value.
  18. The computer program product set forth in claim 16, further comprising:
    program code for removing the segment from the media presentation based on comparing the qualitative score to a parameter.
  19. The computer program product set forth in claim 16, wherein determining the qualitative score for the segment comprises determining a qualitative score for each frame of the media presentation.
  20. The computer program product set forth in claim 16, wherein the qualitative score for the segment represents a quality factor or an interest factor for the segment.
  21. The computer-implemented method of claim 1, wherein the visual effect is included in the user interface at a location that corresponds to a location of the respective segment.
  22. The computer-implemented method of claim 1, wherein the visual effect is a coloring effect comprising one of a plurality of colors.
  23. The computer-implemented method of claim 22, further comprising selecting one of the plurality of colors for the coloring effect based at least in part on the qualitative score for the respective segment relative to the threshold value.
  24. The computing system set forth in claim 10, wherein the information included in the metadata is rendered upon receiving a user gesture that identifies the video segment.
  25. The computer-implemented method of claim 1, wherein the at least one segment is associated with a plurality of parameters, wherein a parameter score is determined for each parameter of the plurality of parameters, and wherein a visual effect for the at least one segment comprises indications of the parameter scores and an indication of the qualitative score for the at least one segment.
  26. The computer-implemented method of claim 1, further comprising:
    determining a rule for editing the media presentation, the rule specifying editing actions based on qualitative scores relative to thresholds and based on attributes of the media presentation;
    determining an editing action from the editing actions for the at least one segment based on a qualitative score of the at least one segment relative to a threshold and based on an attribute of the at least one segment; and
    editing the at least one segment according to the editing action.
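The rule-based editing recited in claim 26 can be pictured as a first-match lookup over (score, attribute) predicates. The rule entries, attribute names, and actions below are hypothetical examples, not the claimed rule set.

```python
# Each rule entry pairs a predicate over (score, attribute) with an
# editing action; entries are checked in order and the first match wins.
RULES = [
    (lambda score, attr: score < 20.0, "remove"),
    (lambda score, attr: score < 50.0 and attr == "intro", "shorten"),
]

def editing_action(score, attribute, rules=RULES, default="keep"):
    """Return the action of the first matching rule, else the default."""
    for predicate, action in rules:
        if predicate(score, attribute):
            return action
    return default

action = editing_action(10.0, "intro")  # "remove"
```

A very low-scoring intro segment is removed outright, a middling intro is shortened, and everything else is kept; the thresholds 20 and 50 are assumptions for the sketch.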
US12564226 2009-09-22 2009-09-22 Methods and systems for trimming video footage Active 2032-03-16 US8856636B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12564226 US8856636B1 (en) 2009-09-22 2009-09-22 Methods and systems for trimming video footage


Publications (2)

Publication Number Publication Date
US20140289594A1 US20140289594A1 (en) 2014-09-25
US8856636B1 US8856636B1 (en) 2014-10-07

Family

ID=51570067

Family Applications (1)

Application Number Title Priority Date Filing Date
US12564226 Active 2032-03-16 US8856636B1 (en) 2009-09-22 2009-09-22 Methods and systems for trimming video footage

Country Status (1)

Country Link
US (1) US8856636B1 (en)


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6366296B1 (en) * 1998-09-11 2002-04-02 Xerox Corporation Media browser using multimodal analysis
US6757027B1 (en) * 2000-02-11 2004-06-29 Sony Corporation Automatic video editing
US20050111824A1 (en) * 2003-06-25 2005-05-26 Microsoft Corporation Digital video segmentation and dynamic segment labeling
US20050234719A1 (en) * 2004-03-26 2005-10-20 Tatsuya Hosoda Selection of images for image processing
US20060224940A1 (en) * 2005-04-04 2006-10-05 Sam Lee Icon bar display for video editing system
US20060228029A1 (en) * 2005-03-29 2006-10-12 Microsoft Corporation Method and system for video clip compression
US20060288288A1 (en) * 2005-06-17 2006-12-21 Fuji Xerox Co., Ltd. Methods and interfaces for event timeline and logs of video streams
US20070074115A1 (en) * 2005-09-23 2007-03-29 Microsoft Corporation Automatic capturing and editing of a video
US20070248327A1 (en) * 2006-04-21 2007-10-25 Cyberlink Corp. Systems and methods for detecting exciting scenes in sports video
US7293280B1 (en) * 1999-07-08 2007-11-06 Microsoft Corporation Skimming continuous multimedia content
US20070283269A1 (en) * 2006-05-31 2007-12-06 Pere Obrador Method and system for onboard camera video editing
US7313808B1 (en) * 1999-07-08 2007-12-25 Microsoft Corporation Browsing continuous multimedia content
US20080019661A1 (en) * 2006-07-18 2008-01-24 Pere Obrador Producing output video from multiple media sources including multiple video sources
US20080019669A1 (en) * 2006-07-18 2008-01-24 Sahra Reza Girshick Automatically editing video data
US20080249892A1 (en) * 2007-04-04 2008-10-09 Disney Enterprises, Inc. Method and system for determining a value of a portion of content
US20090041356A1 (en) * 2006-03-03 2009-02-12 Koninklijke Philips Electronics N.V. Method and Device for Automatic Generation of Summary of a Plurality of Images
US20100111498A1 (en) * 2006-09-27 2010-05-06 Koninklijke Philips Electronics N.V. Method of creating a summary
US7739601B1 (en) * 2002-01-23 2010-06-15 Microsoft Corporation Media authoring and presentation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Barbieri et al., "Movie-in-a-Minute: Automatically Generated Video Previews," Intelligent Algorithms in Ambient and Biomedical Computing, Springer, 2006, pp. 89-101. *
Girgensohn et al., "A Semi-Automatic Approach to Home Video Editing," Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, San Diego, CA, 2000, pp. 81-89. *
Hua et al., "Optimization-Based Automated Home Video Editing System," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 5, May 2004, pp. 572-583. *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9697564B2 (en) 2012-06-18 2017-07-04 Ebay Inc. Normalized images for item listings
US20150154000A1 (en) * 2012-06-26 2015-06-04 Sony Corporation Information processing device, information processing method, and program
US20140029911A1 (en) * 2012-07-26 2014-01-30 Canon Kabushiki Kaisha Movie processing apparatus and control method thereof
US9560333B2 (en) * 2012-07-26 2017-01-31 Canon Kabushiki Kaisha Movie processing apparatus and control method thereof
US9554049B2 (en) * 2012-12-04 2017-01-24 Ebay Inc. Guided video capture for item listings
US20140152875A1 (en) * 2012-12-04 2014-06-05 Ebay Inc. Guided video wizard for item video listing
US20150134673A1 (en) * 2013-10-03 2015-05-14 Minute Spoteam Ltd. System and method for creating synopsis for multimedia content
US9820004B1 (en) * 2013-12-03 2017-11-14 Google Inc. Optimizing timing of display of a video overlay
US20150172787A1 (en) * 2013-12-13 2015-06-18 Amazon Technologies, Inc. Customized movie trailers
US20160042249A1 (en) * 2014-08-06 2016-02-11 Dropbox, Inc. Event-based image classification and scoring

Also Published As

Publication number Publication date Type
US8856636B1 (en) 2014-10-07 grant

Similar Documents

Publication Publication Date Title
Chang The holy grail of content-based media analysis
US20080046925A1 (en) Temporal and spatial in-video marking, indexing, and searching
Sundaram et al. A utility framework for the automatic generation of audio-visual skims
US20100070483A1 (en) Apparatus and software system for and method of performing a visual-relevance-rank subsequent search
US20080046406A1 (en) Audio and video thumbnails
US20090007202A1 (en) Forming a Representation of a Video Item and Use Thereof
US7383508B2 (en) Computer user interface for interacting with video cliplets generated from digital video
US20110075851A1 (en) Automatic labeling and control of audio algorithms by audio recognition
US20080184120A1 (en) Concurrent presentation of video segments enabling rapid video file comprehension
Hua et al. Optimization-based automated home video editing system
US20030234803A1 (en) System and method for automatically generating video cliplets from digital video
US20120039539A1 (en) Method and system for classifying one or more images
US20110239107A1 (en) Transcript editor
US7702014B1 (en) System and method for video production
Kim et al. Understanding in-video dropouts and interaction peaks inonline lecture videos
US8006186B2 (en) System and method for media production
US20140072285A1 (en) Media Summarization
US20050182503A1 (en) System and method for the automatic and semi-automatic media editing
US20140255003A1 (en) Surfacing information about items mentioned or presented in a film in association with viewing the film
US20110029873A1 (en) Methods and Systems for Previewing Content with a Dynamic Tag Cloud
US20080177536A1 (en) A/v content editing
US20040181545A1 (en) Generating and rendering annotated video files
US20120078899A1 (en) Systems and methods for defining objects of interest in multimedia content
US20070292106A1 (en) Audio/visual editing tool
US20140282661A1 (en) Determining Interest Levels in Videos

Legal Events

Date Code Title Description
AS Assignment

Owner name: ADOBE SYSTEMS INCORPORATED, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IAMPIETRO, MIKE;SINGH, ARVINDER;SIGNING DATES FROM 20090831 TO 20090921;REEL/FRAME:023264/0629

MAFP

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4