WO2016203469A1 - A digital media reviewing system and methods thereof - Google Patents


Info

Publication number
WO2016203469A1
WO2016203469A1 · PCT/IL2016/050626
Authority
WO
WIPO (PCT)
Prior art keywords
dmf
annotation
module
additionally
changes
Prior art date
Application number
PCT/IL2016/050626
Other languages
French (fr)
Inventor
Dr. Jacob Assa
Yonathan Gur-Ze'ev
Inbal Voitiz
Original Assignee
Lookat Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lookat Technologies Ltd filed Critical Lookat Technologies Ltd
Publication of WO2016203469A1 publication Critical patent/WO2016203469A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying

Definitions

  • the present invention generally relates to the field of editing, producing and reviewing digital media, preferably digital video, by providing means and methods for facilitating the managing, reviewing and integrating of a plurality of digital media files provided by collaborating parties.
  • video and audio data are first captured to hard disk-based systems, or other digital storage devices.
  • the data are either directed to disk recording or are imported from another source (transcoding, digitizing, and transferring). Once imported, the source material can be edited on a computer using any of a wide range of video editing software.
  • Post-production is the digital phase of a video production, and received its name from the tradition of beginning this new workflow at the end of the feature film's principal photography.
  • Today the 'post production' stage can comprise an entire film production or be performed at any time during the production process, or even before shooting begins.
  • the process of media creation is frequently a repetitive process, which encompasses a large number of participants, including several levels of animators, graphic designers, video FX editors, director, producer, art director, script writers, audio editors, agency (for commercials) etc.
  • the process revolves around the daily snapshot of the resulting media. These snapshots are then presented for internal and external review by the directors, managers and stakeholders.
  • the instructions and comments received are then processed by the visual artists toward the next day composition. The process continues with such daily iterations until the approval of the final result.
  • Source files are complicated to transfer due to bandwidth limitations.
  • the participating parties wish to limit the exposure of their source material or internal artifacts, and to share only the final result.
  • DAM: digital asset management systems
  • US 20020113803 Al titled: COLLABORATIVE COMPUTER-BASED PRODUCTION SYSTEM INCLUDING ANNOTATION, VERSIONING AND REMOTE INTERACTION, filed: Aug 13, 2001, discloses a system providing a user interface to annotate different items in a media production system such as in a digital non-linear post production system. Parts of the production, such as clips, frames and layers that have an associated annotation are provided with a visual annotation marker. Annotations can be text, freehand drawing, audio, or other. Annotations can be automatically generated. Annotations can be compiled into records, searched and transferred. A state of an application program can be stored and transferred to a remote system. The remote system attempts to recreate the original state of the application program.
  • US2014208220A titled: SYSTEM AND METHOD FOR CONTEXTUAL AND COLLABORATIVE KNOWLEDGE GENERATION AND MANAGEMENT THROUGH AN INTEGRATED ONLINE-OFFLINE WORKSPACE, filed Mar 1, 2012, discloses a computer-implemented online-offline workspace and method for creating, developing, storing, and managing digital content within a contextual and shared knowledge network.
  • the invention includes a central service facility that provides an online platform for the users to work in a context-based and shared knowledge environment through a user interface on a wide range of user access devices.
  • the online platform is embedded with a plurality of applications to allow the user to capture, create, develop, store, process, share, distribute, retrieve, reuse, and manage digital contents containing any one or a combination of the following: text, graphics, audio, video, whole or portions of web-pages and web-links.
  • the invention further includes an end-user facility providing an offline platform that gets synchronized with the online platform upon detection of a secured communication network. This disclosure does not provide comparison tools between versions, nor does it disclose annotating specific features of an image and following their changed parameters throughout the frames in which they appear. Further, the system is based on multiple users working either online in a specific online editing application, or offline and then updating online, rather than allowing every user to operate an independent software tool while the system locates and recognizes the differences between the latest versions.
  • US2014289645A titled: TRACKING CHANGES IN COLLABORATIVE AUTHORING ENVIRONMENT, filed: Mar 20, 2013, discloses change tracking and collaborative communication for authoring content in a collaborative environment. Monitored changes, comments, and similar input by the collaborating authors may be presented on demand or automatically to each author based on changes and/or comments that affect a particular author, that author's portion of collaborated content, type of changes/comments, or similar criteria. Change and/or comments notification may be provided in a complementary user interface of the collaborative authoring application or through a separate communication application such as email or text messaging.
  • this disclosure comprises receiving one or more of an edit and a comment associated with a collaboratively created content, by different authors having access to a currently edited file, and does not perform analysis to detect the changes between sequential versions once finalized.
  • a method to be executed at least in part in a computing device for tracking changes in a collaborative authoring environment comprising: receiving one or more of an edit and a comment associated with a collaboratively created content; receiving a request for viewing changes associated with the collaboratively created content; displaying a summary of edits and comments associated with the collaboratively created content; in response to selection of one of the edits and comments, displaying details of the selected one of the edits and comments; and enabling an author viewing the selected one of the edits and comments to communicate with a co-author responsible for the selected one of the edits and comments.
  • while this invention includes tracking changes on a video, it does not disclose any specific means of doing so.
  • This invention does not disclose automatically locating and recognizing changes between features of a frame or in a sequence of video frames.
  • this invention does not disclose means of detecting changes following comparison of source material to a derivative version or between rendered versions.
  • US 20140115476 Al titled: WEB-BASED SYSTEM FOR DIGITAL VIDEOS, discloses systems and methods for adding and displaying interactive annotations for existing online hosted videos.
  • a graphical annotation interface allows the creation of annotations and association of the annotations with a video.
  • Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video.
  • Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like.
  • This disclosure does not include change-tracking capabilities for videos, nor does it disclose locating and recognizing changes. Further, the disclosure does not include following a specific feature throughout the film, only annotation of the image in relation to the duration of the annotation, as specified by the
  • a graphical editing interface allows designating one or more videos to assemble into a video compilation.
  • the graphical editing interface additionally allows the association of annotations— specifying, for example, slides, people, and highlights— with portions of the video.
  • annotations alter the appearance of the video compilation when it is played, such as by displaying slides, or text associated with the annotations, along with the video at times associated with the annotations.
  • the associated annotations also enhance the interactivity of the video compilation, such as by allowing playback to begin at points of interest, such as portions of the video for which there is an associated annotation.
  • US 20130145269 Al titled: MULTI-MODAL COLLABORATIVE WEB-BASED VIDEO ANNOTATION SYSTEM, filed: Sep 26, 2012, expressly discloses a video annotation interface that includes a video pane configured to display a video, a video timeline bar including a video play-head indicating a current point of the video which is being played, a segment timeline bar including initial and final handles configured to define a segment of the video for playing, and a plurality of color-coded comment markers displayed in connection with the video timeline bar.
  • Each of the comment markers is associated with a frame or segment of the video and corresponds to one or more annotations for that frame or segment made by one of a plurality of users.
  • Each of the users can make annotations and view annotations made by other users.
  • the annotations can include annotations corresponding to a plurality of modalities, including text, drawing, video, and audio modalities.
  • the present invention provides a system, useful for reviewing a plurality of sequential Digital Media Files (DMFs), comprising: (a) a receiving module, configured to receive at least one first DMF and at least one second DMF; (b) an analysis module configured to detect changes between at least one first DMF and one second DMF; (c) a recognition module configured to recognize at least one feature comprising one or more of the detected changes along the DMF; and, (d) a non-transitory computer readable storage medium (CRM) operatively in communication with the receiving module, the analysis module and the recognition module, the CRM having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (i) receive at least one first and at least one second DMFs by means of the receiving module; (ii) compare the visual and/or audio content of each of the DMFs and detect visual and/or audio changes between the DMFs by means of the analysis module; (iii) recognize features comprising the detected changes by means of the recognition module; (iv) log the detected changes information and the recognition information.
  • the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
  • the recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
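The patent does not prescribe a concrete fingerprinting algorithm. As one hedged illustration only, a recognized feature could be fingerprinted with an average hash of its grayscale pixels, where a Hamming-distance tolerance decides whether a slightly re-rendered feature in a later version still matches; all function names here are hypothetical.

```python
# Hypothetical sketch of a feature "fingerprint with tolerance" (not the
# patent's actual algorithm): an average hash over a grayscale patch,
# matched under a Hamming-distance tolerance.

def average_hash(patch):
    """patch: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [v for row in patch for v in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if v > mean else '0' for v in flat)

def matches(fp_a, fp_b, tolerance=4):
    """Fingerprints match if they differ in at most `tolerance` bits."""
    assert len(fp_a) == len(fp_b)
    distance = sum(a != b for a, b in zip(fp_a, fp_b))
    return distance <= tolerance

patch_v1 = [[10, 200], [30, 220]]
patch_v2 = [[12, 198], [33, 225]]   # slightly re-rendered feature
fp1, fp2 = average_hash(patch_v1), average_hash(patch_v2)
print(matches(fp1, fp2))  # small pixel drift stays within the tolerance
```

The tolerance plays the role of the claimed "tolerances of feature characteristics": a larger value recognizes a feature through bigger rendering changes, at the cost of more false matches.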
  • the system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
  • the analysis module is configured to perform at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both.
  • the analysis module further comprises at least one matching module operatively in communication with the CRM, configured to match corresponding portions of the DMFs;
  • the CRM further comprising computer executable instructions that configure one or more processors to perform the instructions comprising: (a) receiving at least one first and at least one second DMFs by the receiving module; (b) distinguishing between the different shots to determine shot separators by the matching module in each DMF; (c) determining shot correspondence between the shots in the first DMF and the second DMF, to determine matched shots by the matching module; (d) determining frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames by the matching module; (e) detecting differences between the DMFs in a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, by the analysis module; and, (f) logging the detected changes and/or detected features comprising the changes.
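The claim above names the steps but not an algorithm. A common, minimal realization, offered here purely as a hedged sketch with hypothetical helper names, reduces frames to coarse intensity histograms: a large histogram jump between consecutive frames marks a shot separator, and corresponding frames of two versions whose histograms differ are logged as changes.

```python
# Hedged sketch of steps (b)-(f): histogram-based shot separation and
# naive index-based frame correspondence. Not the patent's method.

def histogram(frame, bins=4):
    """frame: flat list of grayscale values 0-255 -> normalized histogram."""
    counts = [0] * bins
    for v in frame:
        counts[min(v * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]

def hist_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def shot_boundaries(frames, threshold=0.5):
    """Step (b): indices where consecutive frame histograms jump sharply."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if hist_distance(hists[i - 1], hists[i]) > threshold]

def detect_changes(frames_v1, frames_v2, threshold=0.1):
    """Steps (d)-(f): correspond frames by index and log differing ones."""
    log = []
    for i, (f1, f2) in enumerate(zip(frames_v1, frames_v2)):
        d = hist_distance(histogram(f1), histogram(f2))
        if d > threshold:
            log.append({'frame': i, 'distance': round(d, 3)})
    return log

dark, bright = [10] * 16, [240] * 16
v1 = [dark, dark, bright, bright]        # two shots
v2 = [dark, dark, bright, [128] * 16]    # last frame re-rendered
print(shot_boundaries(v1))               # a boundary at frame 2
print(detect_changes(v1, v2))            # a change logged at frame 3
```

A production system would match shots by content rather than by index, since a new version may insert, delete, or reorder shots; the shot correspondence of step (c) exists precisely to make the frame-level comparison of step (d) meaningful under such edits.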
  • the system further comprises at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature. It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
  • annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
  • each generated annotation of the detected change comprises at least a portion of detected changed information.
  • annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
  • annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
  • annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, edit annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
  • annotation module is further configured to allow a user by means of a user interface, to edit at least one of the annotations.
  • annotation module is configured to generate differential graphical representation of annotations relating to their version source and/ or their place in the changes hierarchy.
  • annotation module is configured to generate a task list comprising one or more of the annotations.
  • annotation module is configured to be updated following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
  • annotation module is configured to forward the task list to at least one third party.
  • RDMFs: final reviewed digital media files
  • annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
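The annotation indexing described above can be pictured as a small in-memory index keyed on version, timing, and tag. The sketch below is purely illustrative; the class and field names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of annotation indexing and task-list generation
# as described above; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Annotation:
    dmf_version: int          # which DMF version the annotation belongs to
    timecode: float           # timing within the DMF, in seconds
    tag: str                  # tag associated by a user
    content: str              # comment, logged change, link, etc.

class AnnotationIndex:
    def __init__(self):
        self._annotations = []

    def add(self, ann):
        self._annotations.append(ann)

    def by_version(self, version):
        return [a for a in self._annotations if a.dmf_version == version]

    def by_tag(self, tag):
        return [a for a in self._annotations if a.tag == tag]

    def task_list(self):
        """A simple task list of annotations, ordered by timecode."""
        return sorted(self._annotations, key=lambda a: a.timecode)

index = AnnotationIndex()
index.add(Annotation(2, 12.5, 'color', 'background hue shifted'))
index.add(Annotation(2, 3.0, 'audio', 'new narration track'))
print([a.tag for a in index.task_list()])  # ordered by timecode
```

A forwarded task list, as in the claim above, would simply serialize `task_list()` for a third party; further index keys (generation date, editing date, changed feature) extend the same pattern.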
  • the present invention provides a computer-readable storage medium having stored therein a computer program loadable into a processor of a communication system; the communication system comprising a communication network attached to one or more end users; wherein the computer program comprises code adapted to perform a method for reviewing a plurality of sequential Digital Media Files (DMFs); the method comprising: (a) receiving at least one first and at least one second DMFs from at least one user; (b) comparing the visual and/or audio content of each of the DMFs and detecting visual and/or audio changes between the DMFs by means of the analysis module; (c) recognizing one or more features comprising the detected changes by means of the recognition module; (d) logging the detected changes information and the recognition information; (e) generating at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the feature recognition in comparison to at least one first DMF; and, (f) forwarding the at least one RDMF to at least one recipient; (g) wherein the
  • the step of detecting changes additionally comprises the steps of: (a) distinguishing between the different shots to determine shot separators in each DMF; (b) determining shot correspondence between the shots in the first DMF and second DMF, to determine matched shots; (c) determining frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames; and, (d) detecting differences between the DMFs in level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, according to the correspondence.
  • annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
  • the present invention provides a computer implemented method for reviewing of Digital Media Files (DMF) received from at least one user-end, via a communication system, to at least one server computer comprising the steps of: (a) providing: (i) the server comprising at least one memory storage operatively coupled with at least one processor; (ii) at least one receiving module operatively in communication with the server configured to receive first and at least one second DMFs from at least one user; (iii) at least one analysis module operatively in communication with the server configured to detect visual and/or audio changes between the DMFs; and, (iv) at least one recognition module operatively in communication with the server configured to recognize features comprising changes between the DMFs; (b) receiving at least one first and at least one second DMFs by means of the receiving module; (c) comparing the visual and/ or audio content of each of the DMFs and detect visual and/or sound changes between the DMFs by means of the analysis module; (d) recognizing features comprising the detected changes by means of the recognition module;
  • said step of detecting changes by said analysis module additionally comprising the steps of: further providing at least one matching module, operatively in communication with said server, configured to match corresponding portions of said DMFs; further wherein said server is configured to perform said instructions comprising: (a) receive at least one first and at least one second DMFs by said receiving module; (b) distinguish between said different shots to determine shot separators by said matching module in each said DMF; (c) determine shot correspondence between said shots in said first DMF and said second DMF, to determine matched shots by said matching module; (d) determine frame correspondence between said frames in said first DMF and said second DMF, within said matched shots to determine matched frames by said matching module; (e) detect differences between said DMFs in a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, by said analysis module; and, (f) log said detected changes and/or detected features comprising said changes.
  • annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
  • the present invention provides a computer implemented method for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), the DMFs each comprising one or more shots, each shot comprising one or more frames, the method characterized by the steps of: (a) providing: (i) at least one receiving module configured to receive first and at least one second DMFs; (ii) at least one matching module configured to match corresponding portions of the DMFs; (iii) at least one analysis module configured to detect visual and/or audio changes between the DMFs; (iv) at least one recognition module configured to recognize features comprising changes between the DMFs; (v) at least one annotation module configured to generate at least one annotation; and, (vi) a non-transitory computer readable storage medium (CRM), operatively in communication with the receiving module, the analysis module, the matching module, the annotation module and the recognition module, the CRM having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (1) receive at least one first and at least one second DMF
  • correspondence of a shot and /or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof,
  • annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
  • the present invention provides a detection and annotation system, useful for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), the DMFs each comprising one or more shots, each shot comprising one or more frames, the system comprises: (a) at least one receiving module operatively in communication with the server configured to receive first and at least one second DMFs; (b) at least one matching module configured to match corresponding portions of the DMFs; (c) at least one analysis module configured to detect visual and/or audio changes between the DMFs; (d) at least one recognition module configured to recognize features comprising changes between the DMFs; (e) at least one annotation module configured to generate at least one annotation; and, (f) a non-transitory computer readable storage medium (CRM) in communication with the receiving module, the matching module, the analysis module, the recognition module, and the annotation module; the CRM, operatively coupled to at least one processor, having computer executable instructions that configure one or more processors to perform the instructions comprising: (i) receiving at least one first and at least one second DMF
  • correspondence of a shot and /or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
  • the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
  • recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
  • the analysis module is configured to analyze the visual and/or audio content of each of the DMFs by performing at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (l) determine visual characteristics correspondence between at least one first DMF and at least one second DMF frames, shots, or both.
  • system further comprises at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
  • annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
  • It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
  • each generated annotation of the detected change comprises at least a portion of detected changed information.
  • annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
  • annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
  • annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, edit annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
  • annotation module is further configured to allow a user by means of a user interface, to edit at least one of the annotations. It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate differential graphical representation of annotations relating to their version source and/ or their place in the changes hierarchy.
  • annotation module is configured to forward the task list to at least one third party.
  • annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
  • Fig. 1A is a schematic diagram of an embodiment of the digital media reviewing system
  • Fig. 1B is a schematic diagram of an embodiment of the digital media reviewing system
  • Fig. 2 is a schematic diagram of an embodiment of a method of digital media reviewing
  • Fig. 3 is a schematic diagram of an embodiment of a system for implementing a method of digital media reviewing
  • Fig. 4 is a schematic diagram of an embodiment of the annotating and change detecting method.
  • Fig. 5 is a schematic diagram of an embodiment of the annotating and change detecting system.
  • the present invention provides a system and methods configured to create a flowing production process that interacts effectively with multiple parties performing editing and/or reviewing of digital media, resulting in a professional media editing and/or reviewing system. This enables each party to use its preferred media production software to pursue its artistic and technological interests without limitations on sharing and re-editing the media.
  • the essence of the present invention is to provide a system and a service that support the production process while providing highly effective versioning and reviewing capabilities.
  • the present invention complements existing automation tools by supporting the specialized content creation process, facilitating an easy to manage, integrate and collaborate system and methods, between the different users contributing to the creation of a digital media file, such as between the participating artists, supervisor and approval factors.
  • the term 'digital media file' interchangeably refers hereinafter to media encoded digitally including, but not limited to: film, video, digital video, movie, cinema, picture, motion pictures, moving image, television programs, radio programs, advertisements, audio recordings and /or compilations, photography, digital art, clip(s), music clips, animation, record, tape, digital presentation, and any combination thereof.
  • the digital media is preferably a film.
  • a film comprises at least one 'shot', each shot comprises at least one 'frame', typically a sequential series of frames.
  • a shot is a continuous piece of video or film footage (e.g. between pressing "record" and "stop", or between two cuts).
  • a frame is one of the many still images which compose the complete moving picture, typically in digital media it is a rectangular raster of pixels, either in an RGB color space or other color space.
  • the means and methods of the present invention are aimed at managing and reviewing the post production stage.
  • Each contributor of a collaborative group of people can generate or contribute to at least a portion of the post production stage.
  • Further within the scope of the present invention, these portions and portion versions can be integrated, reviewed, and their tasks managed in the process of creating the final digital media file.
  • post production refers hereinafter to process and means directed at editing and/ or reviewing digital media. These can include:
  • a filter, e.g. artistic, blur, sharpen, emboss, sketch, etc.
  • visual special effects - mainly computer-generated imagery
  • these can be, for example, 3D and/or 2D animations, 2D and/or 3D images, and/or image portions and/or image layers.
  • This can also include image processing means such as applying filters that enhance/distort/manipulate/crop/resize/zoom at least a portion of the image.
  • - Editing the soundtrack of the video: composing, recording, trimming, cutting, splitting, merging, compressing, delaying, de-noising, de-spiking, applying parametric gain, reverb, invert loop, normalizing, pitch shifting, resampling, time-based interpolation, time stretching, and mixing different tracks.
  • - composing the final timeline of the film - including but not limited to: deciding the ordering and placement of the different shots, and determining the beginning and the ending of the film.
  • 'feature' in reference to film/video/digital media refers hereinafter to any portion of the visual and/or audio content of a film, such as, but not limited to: at least a portion of the background image, at least a portion of the foreground image, animation, object, 3D animated object, 2D object, 3D object, character, titles, subtitles, actor, color, timespan, special effect, audio, shot, background audio, main track audio, noise, speaking sound, music, narration audio, lighting, artwork, angle of view, timing (of shots, objects, frames, image manipulations, etc.), and any combination thereof.
  • the term 'source material' refers hereinafter to the originating source data from which the film is made, such as the 3D object files, the original layering and the relation of each object separately, the source video material, the original sound track file, and the special effects scripts, including the metadata files of specifications providing unique access to different portions and different parameters of each feature/object, individually and/or together, and of the frame as defined and composed by the artist (such as layers, object effects, background, audio track, audio effects, etc.).
  • the 'rendered version' of a file refers hereinafter to a version of a digital media file "flattened" into a final representation of the material, containing the final appearance and adjustments made up to that editing stage. In a rendered format the separation between different objects is no longer present, and the file is treated in its entirety.
  • 'Metadata' refers hereinafter to "data about data": structural metadata about the design and specification of data structures ("data about the containers of data"), and descriptive metadata about individual instances of application data or of the data content.
  • the digital media file e.g. video files
  • this can refer to, for example: timecode, localization, take number, name of the clip/object, layer, coordinates, track, object properties, etc.
  • metadata are attached to the clip.
  • the metadata can be attached automatically and/or manually.
  • detection of changes, recognition of changes, annotations, or any other data relating to a digital media file can be configured or supported with a metadata file.
  • the metadata file comprises annotation(s) in association with the digital media file.
  • the annotations can include, but are not limited to: change information, comments, notification of changes, indexes of changes, location of changes, detected features of detected changes, one or more characteristics in which the change is manifested, change date, change provider, a link to another location in the digital media, a link to an external program and/or memory, and any combination thereof.
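As an illustration of how such an annotation might be carried in a metadata file, the following Python sketch serializes one hypothetical annotation record; all field names and values here are illustrative assumptions, not a schema defined by the invention.

```python
import json

# One hypothetical annotation record associated with a digital media file;
# every key below is an illustrative assumption.
annotation = {
    "comment": "Background hue shifted in this shot",
    "change_location": {"shot": 3, "frame": 120, "region": [40, 60, 200, 180]},
    "changed_characteristic": "color",
    "change_date": "2016-06-12",
    "change_provider": "artist_A",
    "links": ["dmf://v2/shot/3/frame/118"],
}

# The metadata file travels alongside the digital media file it annotates.
metadata_file = json.dumps({"dmf": "project_v2.mov", "annotations": [annotation]})
parsed = json.loads(metadata_file)
print(len(parsed["annotations"]))  # 1
```

A real metadata file would carry many such records, one per detected change or reviewer comment.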
  • 'in communication' interchangeably refers hereinafter to any form of exchanging information between components of the present invention and/or between components of the present invention and external devices and/or systems.
  • a computer interchangeably refers herein to any electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations. It responds to a specific set of instructions in a well-defined manner and it can execute a prerecorded list of instructions (a program).
  • a computer typically comprises a CPU, a memory (computer readable media), and a user interface (input device, e.g. keyboard, buttons, joystick, touch screen, control panel; output device, e.g. screen, indicator, etc.), and can include a transmitting/receiving module.
  • one or more of the methods of the present invention is embodied as at least one of: a locally installed application, a web application, a hosted service, any remote access communication system, (utilizing a local and/or remote memory system) and the computing device on which it is installed is one of: a server, a desktop computer, a laptop computer, a tablet, a smart whiteboard, a smart phone, a smart watch, a video camera, a dedicated video editing device, a hand held device, microprocessor based or programmable consumer electronics, and any combination thereof.
  • Embodiments can be implemented such as a computer implemented process, a computing system, a communication system, and/ or as an article of manufacture as a computer program product, and/ or computer readable media. Additionally or alternatively, the means and methods of the present invention can be implemented in hardware, software and any combination thereof. Additionally or alternatively, the means and methods of the present invention can be implemented on a single computing device and/ or on a plurality of computing devices, utilizing memory locally and/ or remotely. Additionally or alternatively, a server can be implemented in a network environment or as a virtual server in a computing system and/ or device.
  • a computer can have more than one CPU; this is called multiprocessing.
  • the processor can be, for example, a microprocessor, a multi-core processor, a system on a chip (SoC), an array processor, a vector processor, etc.
  • the CPU is typically connected to a memory unit (storage unit, a unit of a computer or an independent device designed to record, store, and reproduce information) configured to store and retrieve information in various forms (e.g. a database).
  • Computer readable media (CRM), interchangeably refers hereinafter to a medium capable of storing data in a format readable by a mechanical device (automated data medium rather than human readable).
  • machine-readable media include magnetic media such as magnetic disks, cards, flash memory, tapes, and drums, punched cards and paper tapes, optical disks, barcodes and magnetic ink characters.
  • Common machine-readable technologies include magnetic recording, processing waveforms, and barcodes.
  • Optical character recognition (OCR) can be used to enable machines to read information available to humans. Any information retrievable by any form of energy can be machine-readable.
  • Communication system interchangeably refers hereinafter to any system providing the passage of information from one end to at least one second end and vice versa.
  • third party interchangeably refers herein to any system, user, device, or individual. For example, this includes a system or device logging the different task lists for statistical analysis or work efficiency checkups, a back-up system, a user (e.g. artist, editor, manager, producer, stakeholder, expert, commissioner, client, etc.), a printing device, an external database, etc.
  • the third party is typically in communication with said system and/ or any modules, portions or parts thereof, preferably said annotation module of the present invention.
  • the term 'sound' refers hereinafter to any portion of a digital media file triggering a sensation produced by stimulation of the organs of hearing by vibrations transmitted through the air or other medium.
  • This further includes the particular auditory effect produced by a given cause: such as: music, speech, synthetic sound, vocal utterance, animal sound, representation of a naturally occurring sound, beat, rhythm, tone, noise, any auditory effect; any audible vibrational disturbance, digitally recorded signal, non-digital recorded signal, audio relating to, or employed in the transmission, reception, or reproduction of sound; relating to frequencies or signals in the audible range, and any combination thereof.
  • Locating changes between two production versions is not a simple task as often the difference between versions may occur in different forms and with different side-effects both temporal and spatial. It is further in the scope of the present invention to locate and identify changes including: changes in the time level, changes in the shot level, changes in the frame level, changes in the sound level, changes in the visual level, changes in the transition between shots (transition level), changes in the layers, special effect changes, 2D animation changes, 3D animation changes, textual changes, and any combination thereof.
  • Changes in the time level include changes in time-related characteristics such as, but not limited to: the overall time span of a shot, a scene, a user-defined portion of the film, or the entire film; the timing in the film and duration of: a transition between two shots, a transition between two animations, a shot, an appearance of at least one feature, a defined musical track, a defined sound segment, a user-defined frame point of appearance, a user-defined audio segment, the duration of an audio segment, and the frame rate per second.
  • This change is manifested by at least one of: trimming at least a portion of the film, shot and/or transition; extending at least a portion of the film, shot and/or transition; removing at least one frame; adding at least one frame; duplicating at least one frame; insertion of longer and/or shorter transitions; changes in the audio content in reference to the timeline (e.g. insertion, cutting, expanding or compressing at least a segment of audio, addition of a track); and any combination thereof.
  • Changes in the shot level include changes in any characteristic of the shot such as, but not limited to: a change in the timing of the shot (the time at which the shot starts and/or ends; trimming and/or extending at least a portion of a shot by, for example, adding and/or removing at least one frame, resulting in changing the footage duration by spreading/compacting the frames in it and/or ending with the same final film length), visual changes in the entire shot or in at least a portion of the shot, sound level changes in the entire shot or in at least a portion of the shot, the swapping of shots between locations along the film timeline (changing the order of the shots), deleting at least one shot, adding at least one shot, special effect changes applied to at least a portion of a shot or an entire shot, changes in the sound of at least a portion of a shot, changes in 2D and/or 3D objects within at least a portion of a shot or an entire shot, changes in the FPS (Frames Per Second, the number of video or film frames displayed each second), and any combination thereof.
  • Changes between shots can be further identified by characterizing shots into shot types following composition analysis, such as: EWS (Extreme Wide Shot), VWS (Very Wide Shot), WS (Wide Shot), MS (Mid Shot), MCU (Medium Close Up), CU (Close Up), ECU (Extreme Close Up), Choker, Cut-In, CA (Cutaway), Two-Shot, OSS (Over-the-Shoulder Shot), Noddy Shot, Point-of-View Shot (POV), Weather Shot, aerial shot, bird's-eye shot, low-angle shot, reverse shot, freeze frame shot, insert shot, and other shots as known in the art of film making.
  • Changes in the frame level include changes in any frame characteristic when compared to a different media version, such as, but not limited to: changes in the number of times a frame appears, frame deletion, frame addition, swapping in the location of a frame, changes in the visual content of at least a portion of the frame, changes in the audio (sound) content of the frame, frame duplication, special effects on at least a portion of the frame, layer changes within a frame, appearance/disappearance of a 2D or 3D object within a frame, a visual change of a 2D and/or 3D object within a frame, text changes, and any combination thereof.
  • Changes in the sound content include changes in any sound-related characteristic such as, but not limited to: the sound track, changes in the sound pitch, changes in the sound tone, changes in the sound transitions, changes in the music, changes in the volume, changes in the relation between the sound tracks (for example dominance of music over speech), changes in at least one sound track duration, changes in special effects of sound (e.g. an applied sound filter), changes in the timing of a specific sound and/or specific sound track and/or at least one specific sound portion, deletion of at least a portion of a sound track, changing the volume of at least a portion of a sound track (decibel change and/or gain of a shot audio), timing of a sound peak within a defined portion, reverberation change, identifying a flanging effect, amplified or attenuated signal change, stretching sound by addition and/or duplication of, for example, notes, rhythms, transitions, and loops, mixing at least a portion of at least one sound track, changes in noise ratio, insertion of synthetic sound, and any combination thereof.
  • Changes in the visual content include changes of any visual characteristic of at least a portion of a frame, such as, but not limited to, changes in: color (hue, RGB or CMYK content and/or ratio, saturation), resolution, contrast, luminance, an applied filter, zoom, and texture; a change of a predetermined amount of pixels in a predefined area can be considered a change; trimming of at least a portion of an object (2D, 3D), of at least a portion of a frame, or of at least a portion of a special effect addition; transformation of at least a portion of the frame image (e.g. crop, rotate, scale, skew, flip, duplicate, and others as known in the art); superimposing at least two images and/or image portions to generate one new image; images produced using bracketing (e.g. exposure, depth of field, ISO, white balance, etc.) combined to create a high dynamic range image that unites different portions of the image having different parameters into one image; and any combination thereof.
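As a minimal sketch of the pixel-count criterion mentioned above (a predetermined amount of changed pixels in a predefined area counting as a change), the following Python function compares two grayscale frames; the tolerance and count thresholds are illustrative assumptions.

```python
def region_changed(frame_a, frame_b, region, pixel_tol=10, min_changed=5):
    """Return True if at least `min_changed` pixels inside `region`
    differ by more than `pixel_tol` between two grayscale frames
    (frames are lists of rows of ints; region is (x0, y0, x1, y1))."""
    x0, y0, x1, y1 = region
    changed = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            if abs(frame_a[y][x] - frame_b[y][x]) > pixel_tol:
                changed += 1
    return changed >= min_changed

a = [[100] * 8 for _ in range(8)]
b = [row[:] for row in a]
for x in range(2, 8):          # brighten six pixels in one row
    b[3][x] = 200
print(region_changed(a, b, (0, 0, 8, 8)))  # True
print(region_changed(a, a, (0, 0, 8, 8)))  # False
```

A production implementation would operate on full-color frames and tune both thresholds per resolution and compression level.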
  • Changes in at least one transition between shots include changes in any transition characteristic such as, but not limited to: the mixing of the sound between shots, the blending of images between shots, the occurrence of frames that are transition-defined frames, the transition timing (length along the timeline, start point and end point), the sound of at least one transition, special effects, location in the timeline, swapping between transitions, deletion of a transition, trimming or extending a transition, visual change of at least a portion of a transition, sound content change of at least a portion of a transition, and any combination thereof.
  • Most of the videos created today, whether 3D animation, live footage, motion graphics, or a combination of these, consist of a composite of two or more layers (usually the number of layers is significantly higher than two).
  • a basic composite will always include a Foreground Layer blended over a Background Layer through the foreground's alpha channel. It is very common to change one or more of these layers during the post-production process.
  • each of these layers presents objects/entities such as people, cars, sets, and many others; a change in the appearance of these entities often generates a change in one or more layers.
  • Changes in the layers include but are not limited to, background changes, foreground changes, 3D viewpoint changes, blending changes, opacity changes, swapping of background and foreground, swapping between the order of at least two features (e.g. 2D object, 3D object, background, text, and etc.), view point changes, visual changes to at least one layer, and any combination thereof.
  • special effects refers herein to digital illusions used in a film to simulate events. Changes in special effects include changes in any special effect characteristic, such as, but not limited to: the appearance of at least one frame or any predetermined number of frames, or sound in reference to at least one frame or any predetermined number of frames.
  • changes include, for example: split screen, zoom, applying a filter, quality enhancing (color correction, adjust, sharpen, blur, and 'auto-filter'), adding a feature (in the background/foreground and/or blended into a layer, such as explosions, smoke, laser lighting), special effects for video (chroma key for changing the video background, mosaic, old movie/sepia filter, diffuse, adding/removing noise), transformation (crop, rotate, distort, flip, Picture in Picture), superimposition of two or more frames and/or sound tracks and/or videos and/or shots to create a single frame and/or image and/or film and/or shot, morphing, bullet time, dolly zoom, perspective change, optical effects, and others as known in the art of special effects, and any combination thereof.
  • Textual changes include changes in any text related characteristic such as but not limited to: the change of at least a portion of a text content, color, size, position, timing, font, and any combination thereof, as part of a frame, as subtitles, as graphical 2D or 3D object, as part of a moving object, and any combination thereof. This can be accomplished by further employing any text recognition module as known in the art in combination with visual comparison between different video versions.
  • change information refers hereinafter to any data associated with a change between different digital media versions. This information includes, but is not limited to: the location of the changes (at the level of timing, shot, frame, pixel), a representation of the appearance before the change, the characteristic changed, the user related to the change, annotation related to the change, feature details, recognition-derived feature parameters, feature fingerprint, and any combination thereof.
  • the recognition process significantly improves the overall system by providing capabilities such as detecting a certain feature that was changed. This occurs when a certain scene undergoes a significant change, such as a 3D model/animation/texture/lighting change, a camera angle/animation change, or a matte painting change.
  • a lineage to the source material can be created by using plugins to the editing software, which examine the composite and mask details for each layer during the compositing. This makes it possible to create a reasonably dense pixel assignment to originating elements, and thus store the lineage relation.
  • a video production is a process of content transformations: taking 3D objects, animation scripts, camera footage, titles, and other elements, and rendering them together to generate the final video content. Therefore, the transformation from the final video content back to the original elements is not trivial.
  • the present invention further provides generation of an especially designed lineage from the elements to their final rendition, so that any reference to an area in the final frame can be traced back to its original elements. This lineage relation between elements and final frame is indifferent to the originating encoding, video production system, format, and source material.
  • the lineage generation process begins after locating and recognizing changes between at least two media versions.
  • the changes are for example logged in a retrievable manner (in such as a dedicated database or file) and referenced to the specific reviewed version generated.
  • the changes logged include, for example: the change itself, the feature changed before application of the change (as in the first digital media file), the timing on the timeline of the changed portion, the coordinates of the changed portion, the date and time of generation of the reviewed digital media file, the date and time of each received digital media file, the user details of each file, a small icon graphically representing the change (such as a portion of the changed image in its previous appearance), and any comments or annotations previously added to the change and supplied in connection with the file.
  • the newly located and recognized changed features are logged in the same database or file. Importantly, a matching is made between the different features in the previous logging and the new logging, such that they are indexed or tagged sequentially as belonging to the same feature in a specific order.
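The sequential indexing of matched features across loggings could be sketched as follows; the record fields and the matching rule (a shared feature id) are illustrative assumptions rather than the invention's defined format.

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    feature_id: int    # the same id across versions marks the same feature
    sequence: int      # order of this change within that feature's history
    version: str
    description: str

class ChangeLog:
    """Retrievable log that tags changes to one feature sequentially."""
    def __init__(self):
        self.records = []
        self._next_seq = {}

    def log(self, feature_id, version, description):
        seq = self._next_seq.get(feature_id, 0)
        self._next_seq[feature_id] = seq + 1
        self.records.append(ChangeRecord(feature_id, seq, version, description))

    def history(self, feature_id):
        return sorted((r for r in self.records if r.feature_id == feature_id),
                      key=lambda r: r.sequence)

log = ChangeLog()
log.log(7, "v1", "car recolored red")
log.log(7, "v2", "car recolored blue")
print([r.version for r in log.history(7)])  # ['v1', 'v2']
```

In practice such records would live in a database keyed by the digital media file, with the extra fields (coordinates, icon, user details) listed above.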
  • the accumulating logged changes can be presented, for example, in a separate file, as a task list, or as a visual representation on a digital media file or on the reviewed media file. Additionally or alternatively, the hierarchy of changes can be presented in layers on the actual related frames, and/or as links to a presentation of a change.
  • the changes can be presented in a dedicated user interface in connection with the presentation of the media file (for example as an on-screen tile/menu/widget/icon/link/still image configuration). Additionally or alternatively, the changes can be presented layered on top of at least a portion of the related frame. Additionally or alternatively, the changes are configured to be moved, for example by dragging and dropping, to another portion of the digital media file (whether in the specific frame and/or into another frame(s) and/or into another file).
  • location and recognition capabilities provide the following functionality during the version comparison:
  • the solution of the present invention is based on a two-stage approach: the first stage locates the changes, and the second stage recognizes them.
  • Professional editing software programs known in the art provide for recording the editor's decisions in an edit decision list (EDL) that is exportable to other editing tools.
  • the present invention additionally provides generation of a list of annotations as a task list, as a metadata file transferred with the digital media file. This list can provide features including: editing of the task list; assigning different priorities to the tasks (which can be manifested, for example, by different graphical representations); deleting/adding an annotation; and searching for an annotation by feature type, time of annotation, portion of the timeline in which it appears, author, etc.
  • the annotations can further include, but are not limited to: comments (in any combination comprising at least one of: text, image, icon, link), links to other locations in the same or different digital media files, links to other annotations, links to external programs and/or websites, and images, icons, links and/or text representing a recognized feature and/or a detected change.
  • Locating the changes relies on creating a correspondence between frames (and areas within the frame) in two sequential versions. This is accomplished by including for example the following steps:
  • the difference between the frames is measured using a pixel-wise distance and a GIST descriptor, to speed up the comparison and to provide more robust detection of changes despite moving objects, lighting and viewpoint variations in the video content. A large difference between two frames indicates the transition to a new shot.
  • a dynamic threshold algorithm adjusts the threshold according to other close-by frame differences. This method both localizes the shot transition, and is sufficiently robust to withstand different shot transitions and compression-originated differences.
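A minimal sketch of shot-boundary detection with such a locally adaptive threshold might look as follows; the distance metric here is a plain pixel-wise mean rather than the GIST-assisted descriptor, and the window and factor values are illustrative assumptions.

```python
def frame_distance(f1, f2):
    """Mean absolute pixel-wise distance between two grayscale frames
    (flattened lists of ints); a stand-in for the GIST-assisted metric."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def detect_shot_boundaries(frames, window=2, factor=3.0):
    """Mark a boundary where the frame-to-frame distance exceeds a
    dynamic threshold derived from nearby frame differences."""
    diffs = [frame_distance(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    boundaries = []
    for i, d in enumerate(diffs):
        nearby = diffs[max(0, i - window):i] + diffs[i + 1:i + 1 + window]
        local = sum(nearby) / len(nearby) if nearby else 0.0
        if d > factor * local + 1.0:   # +1.0 guards against near-zero noise
            boundaries.append(i + 1)   # new shot starts at frame i+1
    return boundaries

# two synthetic shots: dark frames followed by bright frames
frames = [[10] * 16] * 5 + [[200] * 16] * 5
print(detect_shot_boundaries(frames))  # [5]
```

Because the threshold adapts to neighboring differences, a gradual dissolve raises the local baseline and is less likely to be flagged as a hard cut.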
  • Determining shot correspondence is done for example by the following steps: defining a shot as a sequence of frames, between two consecutive detected shot separators; characterizing each shot according to a descriptor of its first and last frame.
  • the descriptor is comprised of the GIST descriptor of these frames, together with their pixel details (pixel values and color histograms).
  • a bipartite matching algorithm (known in the art as the Hungarian algorithm) is used to match between the different shots.
  • An iterative hierarchical approach is applied, which first determines matches only where there is a good fit of the shot signatures, and after that stage encourages shot matches that follow the same linear order in the two revisions. For example, assume revision A with shots A1, A2, A3, and revision B with shots B1, B2, B3, and assume there is a good match between A1 and B1, and between A3 and B3, on the first iteration. This encourages the match of A2 to B2 on the following iterations, even with a worse matching distance between the two.
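For a small number of shots, minimum-cost bipartite matching of shot signatures can be sketched by brute force over permutations; a production system would use the Hungarian algorithm proper, and the scalar signatures here are illustrative assumptions.

```python
from itertools import permutations

def match_shots(sigs_a, sigs_b):
    """Exhaustive minimum-cost bipartite matching between two equally
    sized sets of shot signatures (here 1-D descriptors compared by
    absolute distance); a brute-force stand-in for the Hungarian
    algorithm, feasible only for a handful of shots."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(sigs_b))):
        cost = sum(abs(sigs_a[i] - sigs_b[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(enumerate(best)), best_cost

# revision A shots A1..A3 and revision B shots B1..B3, by signature value
pairs, cost = match_shots([0.1, 0.5, 0.9], [0.12, 0.48, 0.95])
print(pairs)  # [(0, 0), (1, 1), (2, 2)]
```

The iterative hierarchical refinement described above would then re-weight the cost of out-of-order pairs so that matches preserving the shots' linear order are preferred.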
  • the matching further considers the shot frame lengths and the shots' current locations within the final content, together with additional shot frame statistics, such as color histograms.
  • Determining frame correspondence within matched shots is accomplished by examining the matching shots to find an exact correspondence.
  • the set of shot separators is used as a basis for an adaptation of an iterative dynamic time warping (DTW) algorithm.
  • This algorithm finds the best frame correspondence which considers also their sequence.
  • a Sakoe-Chiba band is optionally used during the calculation.
  • This method also makes it possible to detect and match cases where frames have been added/deleted, or their speed has changed.
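The DTW-based frame correspondence with a Sakoe-Chiba band can be sketched as follows; the 1-D frame descriptors and the band half-width are illustrative assumptions.

```python
def dtw(seq_a, seq_b, band=2):
    """Dynamic time warping distance between two 1-D frame-descriptor
    sequences, restricted to a Sakoe-Chiba band of the given half-width."""
    INF = float("inf")
    n, m = len(seq_a), len(seq_b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(m, i + band) + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # frame deleted
                                 D[i][j - 1],      # frame added
                                 D[i - 1][j - 1])  # frames aligned
    return D[n][m]

# the second sequence has one duplicated frame (a speed change)
print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0
```

A zero distance despite the extra frame is exactly the added/deleted-frame robustness noted above; the band keeps the alignment close to the diagonal and bounds the computation.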
  • Determining area correspondence within matched frames is accomplished, for example, by methods using inter- and intra-frame patch distances to build a dynamic threshold which is both robust and sensitive enough to changes in the signatures.
  • invariant feature detectors and descriptors (such as SIFT or HOG) are used to extract both area features and point features; the combination of these features within a rectangular patch, often referred to as a bag-of-features, serves as the signature of the patch.
  • the signature enables the detection of similar patches in the corresponding frames (of the other version), and the matching of similar patches within the following/preceding frames (of the same version).
  • To detect the matching area of a given patch, a set of matching methods is applied in order of increasing computational effort; when a match is found, the process stops. When no match is found, the next method is used:
  • CV trackers such as Median-flow, or similar
  • Matching sound correspondence is used in parallel and/or at least partially sequentially with the processes described above. For example, a DTW on the audio track frequency spectrum is applied to define a matching between audio signals. According to user specifications, it is possible either to enforce matching of the frame matching to the audio matching, or to allow both to co-exist in cases where they do not agree on a match. Audio matching can begin by matching the sound fingerprint of at least a portion of the soundtrack (e.g. a fingerprint based on time, frequency, noise and intensity of the sound signal), while allowing at least one free parameter, such as a range in time, to allow location flexibility in changed items. The parameters can be kept in a hash table and compared.
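A sketch of the hash-table fingerprint matching with a free time parameter: spectral peaks are quantized in time before hashing, so slightly shifted occurrences still collide in the table. The peak lists and quantization step are illustrative toy assumptions.

```python
def fingerprint(spectrum_peaks, time_quant=2):
    """Hash quantized (time, frequency) peaks into a lookup table;
    quantizing time provides the 'free parameter' of location
    flexibility described above."""
    table = {}
    for t, f in spectrum_peaks:
        table.setdefault((t // time_quant, f), []).append(t)
    return table

def match_count(table_a, peaks_b, time_quant=2):
    """Count peaks of a second track whose quantized key appears
    in the first track's fingerprint table."""
    return sum(1 for t, f in peaks_b
               if (t // time_quant, f) in table_a)

ref = [(0, 440), (3, 660), (7, 880)]
shifted = [(1, 440), (2, 660), (7, 900)]   # moved in time; last peak retuned
table = fingerprint(ref)
print(match_count(table, shifted))  # 2
```

The first two peaks still match despite the time shift, while the frequency change on the last peak breaks its key, which is the behavior a changed audio item should produce.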
  • Another option is to allow no free parameters, but to find areas that match above a certain predefined threshold of matching points of the fingerprint.
  • Other means known in the art include: overlap-analysis, which detects overlap in two audio files; waveform comparison; sound-match, which detects occurrences of a smaller audio file within a larger audio file; and signal processing techniques such as cross-correlation (a measure of similarity of two series as a function of the lag of one relative to the other), matching the dynamic range of the decibel and/or frequency of at least a portion of the audio as outputted by the editing and rendering audio system, comparison of a defined harmony or a defined rhythm or beat, and identifying and comparing soundless frames and/or shots.
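The cross-correlation measure named above can be sketched in a few lines: a short audio snippet is located inside a longer track by maximizing the correlation over candidate lags. The signals below are illustrative toy data.

```python
def cross_correlation(x, y, lag):
    """Correlation of x with y shifted by `lag` samples
    (computed over the overlapping region only)."""
    if lag >= 0:
        pairs = zip(x[lag:], y)
    else:
        pairs = zip(x, y[-lag:])
    return sum(a * b for a, b in pairs)

def best_lag(x, y, max_lag):
    """Lag at which the smaller signal y best aligns inside x."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: cross_correlation(x, y, lag))

signal = [0, 0, 0, 1, 2, 1, 0, 0]   # longer track with a bump at sample 3
snippet = [1, 2, 1]                 # the bump to locate
print(best_lag(signal, snippet, 5))  # 3
```

Real audio comparison would normalize the signals and work on windowed spectra, but the lag-search structure is the same.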
  • a system useful for reviewing a plurality of sequential Digital Media Files, (DMF), comprising: (a) a receiving module (110), configured to receive at least one first DMF and at least one second DMF; (b) an analysis module (120), configured to detect changes between at least one first DMF and one second DMF; (c) a recognition module (130), configured to recognize at least one feature comprising one or more the detected changes along the DMF; and, (d) a non-transitory computer readable storage medium (CRM) (150) operatively in communication with the receiving module (110), the analysis module (120) and the recognition module (130), the CRM (150) having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (i) receive at least one first and at least one second DMFs by means of the receiving module; (ii) compare the visual and/ or audio content
  • a system as described above wherein the system is configured to present at least a portion of the changed features hierarchy and/or history in a selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as task list, and any combination thereof.
  • a system as described above wherein the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
  • a system as described above wherein the recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
  • a system as described above wherein the processor is configured to relate at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
  • a system as described above wherein the processor is configured to search and optionally associate an annotation and/ or at least a portion of the changed information, to a feature in the DMF the RDMF, or both.
  • a system as described above wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/ or history of detected changes between a plurality of sequential DMFs.
  • a system as described above wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/ or history of recognized features comprising changes between a plurality of sequential DMFs.
  • a system as described above wherein the system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
• a system as described above wherein the analysis module is configured to perform at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determine visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
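Instructions (a) through (c) can be sketched, hypothetically, as thresholded frame-to-frame differencing: a shot separator is declared wherever consecutive frames differ strongly, and each shot's timeline location and length follow from the separator positions. The stand-in histogram frames, the frame rate, and the cut threshold below are assumptions for illustration only:

```python
# Hypothetical sketch of instructions (a)-(c): find shot separators from
# frame-to-frame differences, then report each shot's timeline location
# and length. Frames are stand-in brightness histograms; the threshold
# is an assumption, not a value from the specification.

def frame_distance(h1, h2):
    """L1 distance between two frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def find_shots(frames, fps=25, cut_threshold=100):
    """Return (start_seconds, length_seconds) for each detected shot."""
    cuts = [0]
    for i in range(1, len(frames)):
        if frame_distance(frames[i - 1], frames[i]) > cut_threshold:
            cuts.append(i)  # shot separator detected here
    cuts.append(len(frames))
    return [(start / fps, (end - start) / fps)
            for start, end in zip(cuts, cuts[1:])]

# Two visually distinct shots of 50 frames each:
frames = [[200, 10, 10]] * 50 + [[10, 10, 200]] * 50
print(find_shots(frames))  # [(0.0, 2.0), (2.0, 2.0)]
```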
  • the analysis module further comprises at least one matching module operatively in communication with the CRM, configured to match corresponding portions of the DMFs;
• the CRM further comprising computer executable instructions that configure one or more processors to perform the instructions comprising: (a) receiving at least one first and at least one second DMFs by the receiving module; (b) distinguishing between the different shots to determine shot separators by the matching module in each DMF; (c) determining shot correspondence between the shots in the first DMF and the second DMF, to determine matched shots by the matching module; (d) determining frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames by the matching module; (e) detecting differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, by the analysis module; and, (f) logging the detected changes and/or detected features comprising the changes.
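Step (c), determining shot correspondence between two versions, resembles a sequence-alignment problem. One possible sketch, under the assumption that each shot has already been reduced to a comparable signature, aligns the two shot sequences and classifies each shot as matched, deleted, or added; the use of `difflib` and the string signatures are illustrative choices, not part of the specification:

```python
# Sketch of shot correspondence between two DMF versions: align the two
# shot-signature sequences and classify each shot. Signatures and the
# choice of difflib are assumptions for the example.
import difflib

def diff_shots(shots_v1, shots_v2):
    """Classify shots as 'matched', 'deleted' (v1 only) or 'added' (v2 only)."""
    sm = difflib.SequenceMatcher(a=shots_v1, b=shots_v2, autojunk=False)
    changes = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            changes += [("matched", s) for s in shots_v1[i1:i2]]
        else:
            changes += [("deleted", s) for s in shots_v1[i1:i2]]
            changes += [("added", s) for s in shots_v2[j1:j2]]
    return changes

v1 = ["intro", "interview", "b-roll", "credits"]
v2 = ["intro", "interview", "new-graphic", "credits"]
for status, shot in diff_shots(v1, v2):
    print(status, shot)
```

Within each matched shot, the same alignment idea can be repeated at the frame level, as step (d) describes.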
• a system as described above wherein the processor is configured to graphically present at least a portion of the detected changes information in the RDMF;
  • a system as described above wherein the processor is configured to present the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
  • a system as described above wherein the system further comprises at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
  • a system as described above wherein the annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
  • a system as described above wherein the annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
  • a system as described above further comprising an annotation module configured to associate at least one annotation with at least one portion of the RDMF.
  • a system as described above wherein the annotation module is configured to generate annotations for one or more detected changes in the RDMF.
  • each generated annotation of the detected change comprises at least a portion of detected changed information.
• a system as described above wherein the annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
  • a system as described above wherein the annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
• a system as described above wherein the annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
• a system as described above wherein the annotation module is further configured to allow a user, by means of a user interface, to edit at least one of the annotations.
• a system as described above wherein the annotation module is configured to generate differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
  • a system as described above wherein the annotation module is configured to generate a task list comprising one or more of the annotations;
• a system as described above wherein the annotation module is configured to update the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
  • a system as described above wherein the annotation module is configured to forward the task list to at least one third party;
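A minimal, illustrative sketch of how such a forwardable task list might be rendered from annotations is shown below; the field names and the plain-text format are assumptions for the example, since the specification does not fix a task-list format:

```python
# Illustrative only: render annotations as a numbered, forwardable task
# list. Field names and the text layout are assumptions.
def make_task_list(annotations):
    """One numbered task per annotation, tagged with its timecode."""
    return "\n".join(
        f"{i}. [{a['timecode']:>7.2f}s] {a['comment']}"
        for i, a in enumerate(annotations, start=1))

tasks = make_task_list([
    {"timecode": 12.5, "comment": "fix color grade on interview shot"},
    {"timecode": 40.0, "comment": "replace old logo"},
])
print(tasks)
```

The resulting string could then be forwarded to a third party by whatever transport the deployment uses (e-mail, messaging, an issue tracker).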
• a system as described above wherein the annotation module is configured to distribute a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
• RDMFs: final reviewed digital media files
  • a system as described above wherein the annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
  • a system as described above wherein the processor is configured to search annotations according to at least one annotation indexing.
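The multi-key indexing and searching of annotations described above could be sketched, hypothetically, with a small in-memory index; every field name below (version, timecode, tags) is an assumption chosen for the example:

```python
# Illustrative annotation index supporting indexing by version and tag,
# and searching by any combination of criteria. Field names are
# assumptions, not taken from the specification.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    comment: str
    dmf_version: int
    timecode: float                      # seconds within the DMF
    tags: list = field(default_factory=list)

class AnnotationIndex:
    def __init__(self):
        self._annotations = []

    def add(self, ann):
        self._annotations.append(ann)

    def search(self, version=None, tag=None):
        """Return annotations matching every given criterion."""
        hits = self._annotations
        if version is not None:
            hits = [a for a in hits if a.dmf_version == version]
        if tag is not None:
            hits = [a for a in hits if tag in a.tags]
        return hits

index = AnnotationIndex()
index.add(Annotation("fix color grade", 2, 12.5, tags=["color"]))
index.add(Annotation("replace logo", 3, 40.0, tags=["graphics"]))
print([a.comment for a in index.search(tag="color")])  # ['fix color grade']
```

Additional criteria from the claim (generation date, editing date, user tags) would follow the same pattern as extra optional filters.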
  • a system as described above wherein the system is configured to generate a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
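One plausible shape for such a history database, offered only as a sketch, is an append-only log keyed by the affected feature, so a reviewer can replay a feature's changes across DMF versions; the record fields are assumptions for illustration:

```python
# A minimal change-history log: every detected change is appended under
# the feature it affects. Record fields are illustrative assumptions.
from collections import defaultdict

class ChangeHistory:
    def __init__(self):
        self._log = defaultdict(list)   # feature id -> ordered change records

    def record(self, feature_id, from_version, to_version, description):
        self._log[feature_id].append(
            {"from": from_version, "to": to_version, "what": description})

    def history(self, feature_id):
        """Ordered change records for one feature (empty if none)."""
        return list(self._log[feature_id])

db = ChangeHistory()
db.record("logo", 1, 2, "moved to top-right corner")
db.record("logo", 2, 3, "color changed to blue")
print(len(db.history("logo")))  # 2
```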
• a system as described above wherein the processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
• the processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
  • a system as described above wherein the system further comprises at least one rendering module configured to render the RDMF according to user defined criteria.
  • a system as described above wherein the receiving module is configured to accept the DMF in a rendered form.
  • a system as described above wherein the receiving module is configured to accept the DMF in a compressed form and extract the rendered form.
• Fig. 1B schematically represents, in an out-of-scale manner, an embodiment of the invention (105).
• the system (100) of Fig. 1A further comprising at least one reviewed media file generating module (140) configured to render a file of the latest version including a representation of at least one of: one or more detected changes, one or more features comprising at least one detected change, an annotation with a comment, text, image and/or link.
  • the reviewed file is associated with a metafile comprising annotation data.
• the digital media file is associated with an add-in (plug-in) to any media editing program, providing access to the annotated information and optionally to a metadata file within another program.
• how to export the data obtained by the system can be determined by an export module (180).
• the export module can optionally render the file in any of the file formats known in the art, and/or compress the file.
• the system (105) further comprises an annotation module (160) configured to provide an annotation to at least one change or feature comprising a change. Additionally or alternatively, one or more annotations can be imported from an external source and implemented in the file. Additionally or alternatively, the annotations can be edited, added to, deleted, or distributed together or separately from the resulting file.
  • the system is further configured to accept a metafile comprising at least one annotation by said receiving module.
• the collaborative application is executed on a server, and accessed through a client application such as a browser on a computing device or hand-held smart device.
• the data received from the server based application includes at least one of: a metadata file comprising detected changes, a rendered version of the digital media file associated with a metadata file, a rendered version of the digital media file with the metadata displayed within the digital media file, a rendered version of the digital media file comprising a plug-in and/or add-on that enables viewing and/or editing of the metadata; a rendered version of the digital media file comprising annotations and/or association to an annotation database or file, and any combination thereof.
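One plausible shape for the metadata file exchanged with the server, serialized as JSON, is sketched below. The specification does not fix a schema, so every field name here is an assumption for illustration:

```python
# Illustrative metadata-file shape for detected changes. All field names
# are assumptions; the specification defines no concrete schema.
import json

metadata = {
    "dmf_version": 3,
    "detected_changes": [
        {"shot": 4, "frames": [120, 180], "kind": "visual",
         "annotation": "background replaced"},
        {"shot": 7, "frames": [300, 310], "kind": "audio",
         "annotation": "music ducked under dialogue"},
    ],
}

payload = json.dumps(metadata, indent=2)   # what the server would send
restored = json.loads(payload)             # what the client would parse
print(restored["detected_changes"][0]["kind"])  # visual
```

Such a file can travel alongside the rendered media, be embedded via a plug-in, or back an annotation database, matching the delivery options listed above.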
• annotations comprising at least one of: comments, notification of changes, indexes of changes, location of changes, detected features of detected changes, one or more characteristics in which the change is manifested, change date, change provider, link to another location in the digital media, link to an external program and/or memory, and any combination thereof.
• a computer-readable storage medium having stored therein a computer program loadable into a processor of a communication system; the communication system comprising a communication network attached to one or more end users; wherein the computer program comprises code adapted to perform a method for reviewing a plurality of sequential Digital Media Files (DMFs); the method comprising: (a) receiving at least one first and at least one second DMFs from at least one user (210); (b) comparing the visual and/or audio content of each of the DMFs and detecting visual and/or audio changes between the DMFs by means of the analysis module (220); (c) recognizing one or more features comprising the detected changes by means of the recognition module (230); (d) logging the detected changes information and the recognition information (240); (e) generating at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the recognition information.
• DMFs: Digital Media Files
• a method as described above additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
• a method as described above is disclosed, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
• a method as described above is disclosed, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
• a method as described above is disclosed, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.
• a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
• a method as described above additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
  • a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
• a method as described above is disclosed, further wherein the step of comparing the visual and/or audio content of each of the DMFs follows performing at least one of: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determining visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both.
  • a method as described above wherein the step of detecting changes additionally comprises the steps of: (a) distinguishing between the different shots to determine shot separators in each DMF; (b) determining shot correspondence between the shots in the first DMF and second DMF, to determine matched shots; (c) determining frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames; and, (d) detecting differences between the DMFs in level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, according to the correspondence.
• a method as described above is disclosed, additionally comprising the step of graphically presenting at least a portion of the detected changes information in the RDMF.
  • a method as described above additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
• a method as described above additionally comprising the step of further providing the system with at least one annotation module operatively in communication with the processor, the annotation module being configured to generate at least one annotation in association with at least one DMF feature.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.
  • a method as described above is disclosed, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.
• a method as described above is disclosed, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
• a method as described above is disclosed, wherein each generated annotation of the detected change comprises at least a portion of the detected changed information.
• a method as described above is disclosed, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
  • the annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
• a method as described above additionally comprising the step of: the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module enabling a user by means of a user interface, to edit at least one of the annotations.
• a method as described above is disclosed, additionally comprising the step of the annotation module generating differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations;
  • a method as described above additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
  • a method as described above additionally comprising the step of the annotation module forwarding the task list to at least one third party;
• a method as described above is disclosed, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
  • a method as described above additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the processor searching the annotations according to at least one annotation indexing.
  • a method as described above additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
• a method as described above additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
• a method as described above additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
• a method as described above additionally comprising the step of providing the system with at least one rendering module configured to render the RDMF according to user defined criteria.
  • a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a rendered form.
  • a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
• a computer implemented method for reviewing Digital Media Files (DMF) received from at least one user-end (365, 360), via a communication system (370), to at least one server computer (310) comprising the steps of: (a) providing: (i) the server comprising at least one memory storage (325) operatively coupled with at least one processor (320); (ii) at least one receiving module (330) operatively in communication with the server (310) configured to receive first and at least one second DMFs from at least one user; (iii) at least one analysis module (340) operatively in communication with the server configured to detect visual and/or audio changes between the DMFs; and, (iv) at least one recognition module (350) operatively in communication with the server configured to recognize features comprising changes between the DMFs; (b) receiving at least one first and at least one second DMFs by means of the receiving module from at least one user.
• a method as described above additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
• a method as described above is disclosed, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
• a method as described above is disclosed, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
• a method as described above is disclosed, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.
• a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
• a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
  • a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
• a method as described above additionally comprising the step of the analysis module further performing at least one of the following instructions: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determining visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both.
• the step of detecting changes by the analysis module additionally comprising the steps of: further providing at least one matching module, operatively in communication with the server, configured to match corresponding portions of the DMFs; further wherein the server is configured to perform the instructions comprising: (a) receive at least one first and at least one second DMFs by the receiving module; (b) distinguish between the different shots to determine shot separators by the matching module in each DMF; (c) determine shot correspondence between the shots in the first DMF and second DMF, to determine matched shots by the matching module; (d) determine frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames by the matching module; (e) detect differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, by the analysis module; and, (f) log the detected changes and/or detected features comprising the changes.
• a method as described above is disclosed, additionally comprising the step of graphically presenting at least a portion of the detected changes information in the RDMF.
  • a method as described above additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
• a method as described above additionally comprising the step of further providing the system with at least one annotation module operatively in communication with the processor, the annotation module being configured to generate at least one annotation in association with at least one DMF feature.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.
  • a method as described above is disclosed, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.
• a method as described above is disclosed, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
• a method as described above, wherein each generated annotation of the detected change comprises at least a portion of the detected changed information.
• a method as described above is disclosed, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
  • the annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
• a method as described above additionally comprising the step of: the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module enabling a user by means of a user interface, to edit at least one of the annotations.
• a method as described above is disclosed, additionally comprising the step of the annotation module generating differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations;
  • a method as described above additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module forwarding the task list to at least one third party;
  • a method as described above is disclosed, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
  • a method as described above additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the processor searching the annotations according to at least one annotation indexing.
  • a method as described above additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
  • a method as described above additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
  • a method as described above additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
  • a method as described above additionally comprising the step of providing the system further comprising at least one rendering module configured to render the RDMF according to user defined criteria.
  • a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a rendered form.
  • a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
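The annotation-module behaviour listed above — generating an annotation per detected change that carries a portion of the change information, indexing annotations by user-defined criteria, and deriving a task list from them — can be illustrated with a minimal sketch. This is an illustrative sketch only; the `Annotation` fields and `AnnotationModule` methods below are hypothetical names, not part of the disclosed system:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    # Hypothetical annotation record: the feature it refers to, the DMF
    # version it was raised against, its timing within the DMF, a comment
    # carrying a portion of the detected-change information, and user tags.
    feature: str
    version: int
    time_s: float
    comment: str = ""
    tags: tuple = ()

class AnnotationModule:
    """Illustrative sketch of annotation generation, indexing and a task list."""

    def __init__(self):
        self.annotations = []

    def annotate_change(self, feature, version, time_s, change_info):
        # Generate one annotation per detected change; the annotation
        # carries at least a portion of the detected-change information.
        ann = Annotation(feature, version, time_s, comment=change_info)
        self.annotations.append(ann)
        return ann

    def index(self, key):
        # Index annotations by a chosen criterion (feature, version, ...)
        # so they can later be searched by that index.
        idx = {}
        for ann in self.annotations:
            idx.setdefault(getattr(ann, key), []).append(ann)
        return idx

    def task_list(self):
        # The task list is the set of annotations ordered by version,
        # then by position along the timeline.
        return sorted(self.annotations, key=lambda a: (a.version, a.time_s))

mod = AnnotationModule()
mod.annotate_change("logo", version=2, time_s=12.0, change_info="colour changed")
mod.annotate_change("logo", version=3, time_s=12.0, change_info="position changed")
mod.annotate_change("title", version=3, time_s=1.5, change_info="font changed")
by_feature = mod.index("feature")
print(len(by_feature["logo"]))     # both "logo" changes indexed together
print(mod.task_list()[0].comment)  # earliest outstanding task
```

A real implementation would persist the annotations and re-run `task_list` whenever a new change is detected or a user edits an annotation, which is the update behaviour the bullets above describe.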
  • Fig. 4 schematically represents, in an out-of-scale manner, an embodiment of the invention.
  • Each of the DMFs comprises one or more shots, each shot comprising one or more frames.
  • Fig. 5 represents an example of an embodiment of a system for implementing the method of Fig. 4.
  • the method characterized by the steps of: (a) providing: (i) at least one receiving module (Fig. 5, 510) configured to receive first and at least one second DMFs; (ii) at least one matching module (Fig. 5, 540) configured to match corresponding portions of the DMFs; (iii) at least one analysis module (Fig. 5, 520) configured to detect visual and/or audio changes between the DMFs;
  • a non-transitory computer readable storage medium (CRM) (Fig. 5, 505), operatively in communication with the receiving module (Fig. 5, 510), the analysis module (Fig. 5, 520), the matching module (Fig. 5, 540), the annotation module (Fig. 5, 550) and the recognition module (Fig. 5, 530), the CRM (Fig. 5, 505) having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (1) receive the first and at least one second DMFs; … (6) recognize (Fig. 4, 460) one or more features associated with a detected difference, by the recognition module; (7) log the detected changes (Fig. 4, 470) and/or the detected features comprising the changes; (8) generate (Fig. 4, 480) at least one reviewed digital media file (RDMF) comprising the second DMF, configured to present at least a portion of the detected changes information and the recognized features in comparison to at least one first DMF; and, (9) annotate (Fig. 4, 490) each detected change by the annotation module; further wherein the annotation of a detected changed feature is transferred to all of the changed feature's derivatives in the RDMF and in sequential derivative RDMFs.
  • a method as described above wherein the correspondence of a shot and/or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of determining area correspondence within matched frames.
  • a method as described above is disclosed, additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
  • a method as described above is disclosed, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
  • a method as described above is disclosed, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information to a feature in the DMF, the RDMF, or both.
  • a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between pluralities of sequential DMFs.
  • a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between pluralities of sequential DMFs.
  • a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
  • a method as described above additionally comprising the step of the analysis module performing analysis of the visual and/or audio content of each DMF by performing at least one of the following instructions: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both;
  • a method as described above is disclosed, additionally comprising the step of graphically presenting at least a portion of the detected changes information visually in the RDMF;
  • a method as described above additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
  • a method as described above additionally comprising the step of the annotation module generating at least one annotation in association with at least one DMF feature.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.
  • a method as described above is disclosed, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
  • a method as described above is disclosed, wherein each generated annotation of a detected change comprises at least a portion of the detected change information.
  • a method as described above is disclosed, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
  • a method as described above wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
  • a method as described above additionally comprising the step of: the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module enabling a user by means of a user interface, to edit at least one of the annotations.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
  • a method as described above is disclosed, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations;
  • a method as described above additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the annotation module forwarding the task list to at least one third party;
  • a method as described above is disclosed, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
  • a method as described above additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
  • a method as described above is disclosed, additionally comprising the step of the processor searching the annotations according to at least one annotation indexing.
  • a method as described above additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
  • a method as described above additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
  • a method as described above additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
  • a method as described above additionally comprising the step of providing the system further comprising at least one rendering module configured to render the RDMF according to user defined criteria.
  • a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a rendered form.
  • a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
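The core of the claimed method — segmenting each DMF into shots, comparing corresponding frames of two versions, and logging frames whose visual difference exceeds a tolerance — can be illustrated with a minimal sketch. Frames are represented here as flat lists of grayscale pixel values and the threshold values are illustrative assumptions; a production system would operate on decoded video:

```python
def shot_boundaries(frames, threshold=40.0):
    """Split a frame sequence into shots wherever the mean absolute
    pixel difference between consecutive frames exceeds a threshold."""
    cuts = [0]
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i - 1], frames[i])) / len(frames[i])
        if diff > threshold:
            cuts.append(i)
    return cuts

def detect_changes(dmf_v1, dmf_v2, tolerance=5.0):
    """Compare corresponding frames of two DMF versions and log the
    indices whose mean pixel difference exceeds the tolerance."""
    changes = []
    for i, (f1, f2) in enumerate(zip(dmf_v1, dmf_v2)):
        diff = sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)
        if diff > tolerance:
            changes.append({"frame": i, "mean_diff": diff})
    return changes

# Two tiny 4-pixel "DMFs": version 2 brightens frame 1 only.
v1 = [[10, 10, 10, 10], [10, 10, 10, 10], [200, 200, 200, 200]]
v2 = [[10, 10, 10, 10], [90, 90, 90, 90], [200, 200, 200, 200]]
print(shot_boundaries(v1))     # a cut is detected before the bright third frame
print(detect_changes(v1, v2))  # only frame 1 is logged as changed
```

The logged change records would then feed the annotation and RDMF-generation steps described above; in practice the comparison would use perceptual measures (histograms, feature descriptors) rather than raw pixel differences.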
  • Fig. 5 schematically represents, in an out-of-scale manner, an embodiment of the invention.
  • a detection and annotation system useful for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), the DMFs each comprising one or more shots, each shot comprising one or more frames.
  • the system comprises: (a) at least one receiving module (510) operatively in communication with the server configured to receive first and at least one second DMFs; (b) at least one matching module (540) configured to match corresponding portions of the DMFs; (c) at least one analysis module (520) configured to detect visual and/or audio changes between the DMFs; (d) at least one recognition module (530) configured to recognize features comprising changes between the DMFs; (e) at least one annotation module (550) configured to generate at least one annotation; and, (f) a non-transitory computer readable storage medium (CRM) (505) in communication with the receiving module (510), matching module (540), the analysis module (520), the recognition module (530), and the annotation module (550); the CRM operatively coupled to at least one processor.
  • a system as described above wherein the correspondence of a shot and/or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
  • a system as described above is disclosed, additionally comprising an instruction to determine area correspondence within matched frames.
  • a system as described above wherein the system is configured to present at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
  • a system as described above wherein the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
  • a system as described above wherein the recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
  • a system as described above wherein the processor is configured to relate at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
  • a system as described above wherein the processor is configured to search and optionally associate an annotation and/or at least a portion of the changed information to a feature in the DMF, the RDMF, or both.
  • a system as described above wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
  • a system as described above wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
  • a system as described above wherein the system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
  • a system as described above wherein the analysis module is configured to analyze the visual and/or audio content of each of the DMFs by performing at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both;
  • a system as described above wherein the processor is configured to graphically present at least a portion of the detected changes information visually in the RDMF;
  • a system as described above wherein the processor is configured to present the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
  • a system as described above wherein the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
  • a system as described above wherein the annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
  • a system as described above wherein the annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
  • a system as described above further comprising an annotation module configured to associate at least one annotation with at least one portion of the RDMF.
  • a system as described above wherein the annotation module is configured to generate annotations for one or more detected changes in the RDMF.
  • a system as described above wherein each generated annotation of a detected change comprises at least a portion of the detected change information.
  • a system as described above wherein the annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
  • a system as described above wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
  • a system as described above wherein the annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
  • a system as described above wherein the annotation module is further configured to allow a user by means of a user interface, to edit at least one of the annotations.
  • a system as described above wherein the annotation module is configured to generate differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
  • a system as described above wherein the annotation module is configured to generate a task list comprising one or more of the annotations;
  • a system as described above wherein the annotation module is configured to update the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
  • a system as described above wherein the annotation module is configured to forward the task list to at least one third party;
  • a system as described above wherein the annotation module is configured to distribute a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
  • a system as described above wherein the annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
  • a system as described above wherein the processor is configured to search annotations according to at least one annotation indexing.
  • a system as described above wherein the system is configured to generate a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
  • a system as described above wherein the processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
  • a system as described above wherein the processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
  • a system as described above wherein the system further comprises at least one rendering module configured to render the RDMF according to user defined criteria.
  • a system as described above wherein the receiving module is configured to accept the DMF in a rendered form.
  • a system as described above wherein the receiving module is configured to accept the DMF in a compressed form and extract the rendered form.
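The recognition module's fingerprint "comprising tolerances of feature characteristics" can be read as: a compact signature of a feature, compared within per-characteristic tolerance bands rather than for exact equality, so a feature is still recognized after it has been changed. The following is a minimal sketch under that reading; the histogram signature and the tolerance value are illustrative assumptions, not the disclosed implementation:

```python
def fingerprint(pixels, bins=4):
    """Coarse normalized-histogram signature of a feature's pixels (0-255)."""
    hist = [0] * bins
    for p in pixels:
        # Clamp into the last bin so a value of 255 stays in range.
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return tuple(round(h / total, 3) for h in hist)

def matches(fp_a, fp_b, tolerance=0.1):
    """Two fingerprints match if every histogram bin agrees within the
    tolerance, so a slightly edited feature is still recognized."""
    return all(abs(a - b) <= tolerance for a, b in zip(fp_a, fp_b))

logo_v1 = [20, 25, 30, 200, 210, 220, 215, 30]
logo_v2 = [22, 27, 28, 205, 208, 222, 217, 33]   # slightly re-colored logo
other   = [120, 130, 125, 128, 122, 131, 127, 129]  # an unrelated feature

fp1, fp2, fp3 = fingerprint(logo_v1), fingerprint(logo_v2), fingerprint(other)
print(matches(fp1, fp2))  # the edited logo still matches within tolerance
print(matches(fp1, fp3))  # the unrelated feature does not
```

This tolerance-band comparison is what lets the system relate a detected change to all occurrences of the same feature across frames and versions, as the bullets above require.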

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present invention provides a system useful for reviewing a plurality of sequential Digital Media Files (DMFs), comprising: a receiving module; an analysis module; a recognition module; and a non-transitory computer readable storage medium (CRM) having computer executable instructions comprising: receive at least one first and at least one second DMFs; compare the visual and/or audio content of each of said DMFs and detect visual and/or audio changes between said DMFs; recognize features comprising said detected changes by means of said recognition module; log said detected changes information and said recognized features comprising changes information; generate at least one reviewed digital media file (RDMF); and, forward said at least one RDMF to at least one recipient.

Description

A DIGITAL MEDIA REVIEWING SYSTEM AND METHODS THEREOF
FIELD OF THE INVENTION
The present invention generally relates to the field of editing, producing and reviewing digital media, preferably digital video, by providing means and methods for facilitating the management, review and integration of a plurality of digital media files provided by collaborating parties.
BACKGROUND OF THE INVENTION
The film and clip production industry has been growing in recent years, especially in light of advances in digital media transfer capabilities and internet activity. The relative ease of transferring clips and video online is supported by a plurality of video editing software solutions, such as Premiere® provided by Adobe Systems Incorporated, and the like.
Typically, video and audio data are first captured to hard disk-based systems, or other digital storage devices. The data are either directed to disk recording or are imported from another source (transcoding, digitizing, and transferring). Once imported, the source material can be edited on a computer using any of a wide range of video editing software.
The process of film or clip production usually includes editing and reviewing in what is generally named the post-production process. Post-production is the digital phase of a video production, and received its name from the tradition of beginning this new workflow at the end of the feature film's principal photography. Today the 'post production' stage can comprise an entire film production or be performed at any time during the production process, or even before shooting begins.
The process of media creation is frequently a repetitive process encompassing a large number of participants, including several levels of animators, graphic designers, video FX editors, director, producer, art director, script writers, audio editors, agency (for commercials), etc. The process revolves around the daily snapshot of the resulting media. These snapshots are then presented for internal and external review by the directors, managers and stakeholders. The instructions and comments received are then processed by the visual artists toward the next day's composition. The process continues with such daily iterations until the approval of the final result.
There are several shortcomings to the current process of collaborative video editing. It is very difficult to coordinate the different aspects of the editing process and reviews performed by different experts at different times. Many of the experts working on a film reside in different geographical locations. In addition, their inputs may be manifested in different portions of the film. Consequently, the daily version changes are not easily identified and the tasks are not well managed. Comments made by a director, for example, need to be applied and orchestrated across multiple users working concurrently on the project and, in many cases, using multiple specialized systems. Presently, there is no simple solution providing division of tasks between the different groups and an intuitive interface to manage these tasks, nor are there means for continuous review of (re)generated files.
Another obstacle is sharing and distributing the film media file between different people, in many cases from different companies or subcontractors, often using different operating systems and editing programs. Source files are complicated to transfer due to bandwidth limitations. Moreover, in many cases the participating parties wish to limit the exposure of their source material or internal artifacts, and to share only the final result.
While many version control systems, and some digital asset management (DAM) systems, can also store binary data, the range of services provided for such files, such as comparison, patching, and shelving, is highly limited. Moreover, the support decreases even further in the case of very large media files, since existing text-based difference algorithms are not suitable for these tasks.
Many examples of collaborative video editing are known in the art, but all fail to provide an economical, straightforward and complete solution to the limitations stated above. Some of these examples are:
US 20020113803 Al, titled: COLLABORATIVE COMPUTER-BASED PRODUCTION SYSTEM INCLUDING ANNOTATION, VERSIONING AND REMOTE INTERACTION, filed: Aug 13, 2001, discloses a system providing a user interface to annotate different items in a media production system, such as a digital non-linear post production system. Parts of the production, such as clips, frames and layers that have an associated annotation, are provided with a visual annotation marker. Annotations can be text, freehand drawing, audio, or other. Annotations can be automatically generated. Annotations can be compiled into records, searched and transferred. A state of an application program can be stored and transferred to a remote system. The remote system attempts to recreate the original state of the application program. If the remote system is unable to do so, an image of the state of the application program is obtained instead. This disclosure fails to describe locating and recognizing changes between versions, nor does it provide means to edit and review using separate systems. The invention recreates the original state of the application program, and does not perform comparisons between versions or create supportive metadata.
US2014208220A, titled: SYSTEM AND METHOD FOR CONTEXTUAL AND COLLABORATIVE KNOWLEDGE GENERATION AND MANAGEMENT THROUGH AN INTEGRATED ONLINE-OFFLINE WORKSPACE, filed Mar 1, 2012, discloses a computer-implemented online-offline workspace and method for creating, developing, storing, and managing digital content within a contextual and shared knowledge network. The invention includes a central service facility that provides an online platform for the users to work in a context-based and shared knowledge environment through a user interface on a wide range of user access devices. The online platform is embedded with a plurality of applications to allow the user to capture, create, develop, store, process, share, distribute, retrieve, reuse, and manage digital contents containing any one or a combination of the following: text, graphics, audio, video, whole or portions of web-pages and web-links. The invention further includes an end-user facility providing an offline platform that gets synchronized with the online platform upon detection of a secured communication network. This disclosure does not provide comparison tools between versions, nor does it disclose annotating specific features of an image and following their changed parameters throughout the frames in which they appear. Further, the system is based on multiple users working either online in an online editing-specific application, or offline and updating online, rather than allowing each user to operate an independent software tool while the system locates and recognizes the differences between the latest versions.
US2014289645A, titled: TRACKING CHANGES IN COLLABORATIVE AUTHORING ENVIRONMENT, filed: Mar 20, 2013, discloses change tracking and collaborative communication for authoring content in a collaborative environment. Monitored changes, comments, and similar input by the collaborating authors may be presented on demand or automatically to each author based on changes and/or comments that affect a particular author, that author's portion of collaborated content, type of changes/comments, or similar criteria. Change and/or comments notification may be provided in a complementary user interface of the collaborative authoring application or through a separate communication application such as email or text messaging. However, this disclosure comprises receiving one or more of an edit and a comment associated with collaboratively created content, by different authors having access to a currently edited file, and does not perform analysis to detect the changes between sequential versions once finalized.
A method to be executed at least in part in a computing device for tracking changes in a collaborative authoring environment, the method comprising: receiving one or more of an edit and a comment associated with a collaboratively created content; receiving a request for viewing changes associated with the collaboratively created content; displaying a summary of edits and comments associated with the collaboratively created content; in response to selection of one of the edits and comments, displaying details of the selected one of the edits and comments; and enabling an author viewing the selected one of the edits and comments to communicate with a co-author responsible for the selected one of the edits and comments.
Although this invention includes tracking changes on a video, it does not disclose any specific means of doing so. This invention does not disclose automatically locating and recognizing changes between features of a frame or in a sequence of video frames. In addition, this invention does not disclose means of detecting changes following comparison of source material to a derivative version or between rendered versions.
US 20140115476 Al, titled: WEB-BASED SYSTEM FOR DIGITAL VIDEOS, discloses systems and methods for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. This disclosure does not include change tracking capabilities for videos, nor does it disclose locating and recognizing changes. Further, the disclosure does not include following a specific feature throughout the film, but only the image in relation to the duration of the annotation, as specified by the user.
US 8826117 Bl, titled: WEB-BASED SYSTEM FOR VIDEO EDITING, filed: Mar 25, 2009, discloses web-based systems and methods for editing digital videos. A graphical editing interface allows designating one or more videos to assemble into a video compilation. The graphical editing interface additionally allows the association of annotations (specifying, for example, slides, people, and highlights) with portions of the video. The associated annotations alter the appearance of the video compilation when it is played, such as by displaying slides, or text associated with the annotations, along with the video at times associated with the annotations. The associated annotations also enhance the interactivity of the video compilation, such as by allowing playback to begin at points of interest, such as portions of the video for which there is an associated annotation.
US 20130145269 Al, titled: MULTI-MODAL COLLABORATIVE WEB-BASED VIDEO ANNOTATION SYSTEM, filed: Sep 26, 2012, expressly discloses a video annotation interface that includes a video pane configured to display a video, a video timeline bar including a video play-head indicating a current point of the video which is being played, a segment timeline bar including initial and final handles configured to define a segment of the video for playing, and a plurality of color-coded comment markers displayed in connection with the video timeline bar. Each of the comment markers is associated with a frame or segment of the video and corresponds to one or more annotations for that frame or segment made by one of a plurality of users. Each of the users can make annotations and view annotations made by other users. The annotations can include annotations corresponding to a plurality of modalities, including text, drawing, video, and audio modalities.
However, none of the prior art documents provide a solution that simplifies the post production stage by sharing only the output files and preserving a hierarchy of version changes, while maintaining simplicity in file transfer and in the management of annotations and tasks. Therefore, there is a long felt need for a system and methods providing solutions to the above mentioned shortcomings.

SUMMARY OF THE INVENTION
The present invention provides a system, useful for reviewing a plurality of sequential Digital Media Files (DMF), comprising: (a) a receiving module, configured to receive at least one first DMF and at least one second DMF; (b) an analysis module configured to detect changes between at least one first DMF and one second DMF; (c) a recognition module configured to recognize at least one feature comprising one or more of the detected changes along the DMF; and, (d) a non-transitory computer readable storage medium (CRM) operatively in communication with the receiving module, the analysis module and the recognition module, the CRM having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (i) receive at least one first and at least one second DMFs by means of the receiving module; (ii) compare the visual and/or audio content of each of the DMFs and detect visual and/or audio changes between the DMFs by means of the analysis module; (iii) recognize features comprising the detected changes by means of the recognition module; (iv) log the detected changes information and the recognized features comprising changes information; (v) generate at least one reviewed digital media file (RDMF) of the second DMF configured to present at least a portion of the detected changes and/or changed feature recognition in comparison to at least one first DMF; and, (vi) forward the at least one RDMF to at least one recipient; wherein the system is configured to generate at least one RDMF for every DMF version received in comparison to at least one previous RDMF; further wherein the system is configured to generate at least one file associated with the RDMF comprising sequential hierarchy and/or history of detected changes and/or the recognized features between different sequential DMFs.
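By way of a non-limiting illustration, the receive, compare, recognize and log flow of instructions (i) to (iv) may be sketched as follows in Python. All names are illustrative and not part of the disclosure, and a DMF is reduced here to a dict of per-frame signatures; a real analysis module would compare decoded visual and audio content.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewLog:
    """Retrievable memory log: hierarchy/history of changes across versions."""
    history: list = field(default_factory=list)

    def log_version(self, version_id, entries):
        self.history.append({"version": version_id, "entries": entries})

def detect_changes(first, second):
    """Analysis module stand-in: naive frame-level diff over frame signatures."""
    return [{"frame": i, "old": a, "new": b}
            for i, (a, b) in enumerate(zip(first["frames"], second["frames"]))
            if a != b]

def review(first, second, recognize, log):
    """Receiving + analysis + recognition + logging, yielding an RDMF dict."""
    changes = detect_changes(first, second)
    features = [recognize(c) for c in changes]       # recognition module
    log.log_version(second["version"], list(zip(changes, features)))
    return {"base": second["version"], "marks": features}
```

The returned dict plays the role of the RDMF: the second version plus the marks to be presented, while the log accumulates the version-to-version history.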
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to present at least a portion of the changed features hierarchy and/or history in a manner selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.

It is another object of the present invention to disclose the system as defined in any of the above, wherein the recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
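A minimal sketch of such a tolerance-bearing fingerprint, assuming a feature is summarized as a numeric characteristic vector (the representation, metric and names are assumptions of the sketch, not part of the disclosure):

```python
import math

def make_fingerprint(characteristics, tolerance=0.5):
    """Fingerprint = characteristic vector plus the tolerance within which a
    candidate feature is still recognized as the same feature."""
    return {"vector": list(characteristics), "tolerance": tolerance}

def recognizes(fp, candidate):
    """Recognition succeeds if the candidate lies within the stored tolerance
    (Euclidean distance here; any suitable metric could be substituted)."""
    return math.dist(fp["vector"], candidate) <= fp["tolerance"]
```

Storing the tolerance inside the fingerprint lets each feature carry its own matching leniency, so a slightly recolored or shifted feature is still recognized as the same feature across versions.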
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to relate at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to search and optionally associate an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.

It is another object of the present invention to disclose the system as defined in any of the above, wherein the analysis module is configured to perform at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (l) determine visual characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (m) determine shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determine timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
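Instructions (a) to (c) above (shot separation, shot location and shot length) can be illustrated by a simple cut detector that thresholds the difference between consecutive frame signatures; the scalar signatures and the threshold value are assumptions of this sketch:

```python
def shot_boundaries(frame_signatures, threshold=30):
    """Mark a shot cut wherever consecutive frame signatures differ sharply.
    Returns the index of the first frame of each shot."""
    cuts = [0]
    for i in range(1, len(frame_signatures)):
        if abs(frame_signatures[i] - frame_signatures[i - 1]) > threshold:
            cuts.append(i)
    return cuts

def shots(frame_signatures, threshold=30):
    """Split the timeline into (start_frame, length) pairs, one per shot."""
    cuts = shot_boundaries(frame_signatures, threshold) + [len(frame_signatures)]
    return [(cuts[i], cuts[i + 1] - cuts[i]) for i in range(len(cuts) - 1)]
```

Production shot detectors use richer per-frame descriptors (histograms, audio energy) and adaptive thresholds, but the structure of the output, a list of located, measured shots, is the same.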
It is another object of the present invention to disclose the system as defined in any of the above, wherein the analysis module further comprises at least one matching module operatively in communication with the CRM, configured to match corresponding portions of the DMFs; the CRM further comprising computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (a) receiving at least one first and at least one second DMFs by the receiving module; (b) distinguishing between the different shots to determine shot separators by the matching module in each DMF; (c) determining shot correspondence between the shots in the first DMF and the second DMF, to determine matched shots by the matching module; (d) determining frame correspondence between the frames in the first DMF and the second DMF, within the matched shots, to determine matched frames by the matching module; (e) detecting differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, by the analysis module; and, (f) logging the detected changes and/or detected features comprising the changes.
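The shot-correspondence step can be sketched with Python's standard difflib, assuming each shot has already been reduced to a hashable signature; only identical signatures align in this sketch, whereas the disclosed matching module would also tolerate near matches:

```python
from difflib import SequenceMatcher

def match_shots(shots_a, shots_b):
    """Align two shot lists (each shot reduced to a hashable signature) and
    return (index_a, index_b) pairs for shots judged to correspond."""
    sm = SequenceMatcher(a=shots_a, b=shots_b, autojunk=False)
    pairs = []
    for block in sm.get_matching_blocks():
        for k in range(block.size):
            pairs.append((block.a + k, block.b + k))
    return pairs
```

Shots left unmatched on either side are exactly the insertions and deletions between the two versions, which is what the analysis module then logs as shot-level differences.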
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to graphically present at least a portion of the detected changes information visually in the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to present the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
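The occurrence-selection options enumerated above admit a compact illustration; frame indices stand in for feature occurrences, and the mode names are illustrative rather than part of the disclosure:

```python
def occurrences_to_mark(occurrences, change_frame, mode="onward"):
    """Choose which occurrences of a changed feature to mark graphically."""
    if mode == "all":                      # every occurrence of the feature
        return list(occurrences)
    if mode == "onward":                   # from the change location onward
        return [f for f in occurrences if f >= change_frame]
    raise ValueError("unknown mode: %s" % mode)
```

A user-defined-occurrences mode would simply pass an explicit subset, and derivative-version marking would apply the same selection to the occurrence lists of the derived RDMFs.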
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system further comprises at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.

It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, further comprising an annotation module configured to associate at least one annotation with at least one portion of the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate annotations for one or more detected changes in the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein each generated annotation of the detected change comprises at least a portion of detected changed information.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
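Most of the editing operations listed above amount to mutating named fields of an annotation record. A hypothetical sketch, where the field names are assumptions of the illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    content: str
    screen_pos: tuple = (0, 0)          # location on screen
    time_range: tuple = (0.0, 0.0)      # location along the timeline (seconds)
    linked_features: list = field(default_factory=list)

    def edit(self, **changes):
        """Apply editing operations by field name; unknown fields are rejected."""
        for name, value in changes.items():
            if name not in self.__dataclass_fields__:
                raise AttributeError("unknown annotation field: " + name)
            setattr(self, name, value)
```

Connecting an annotation to a second feature then reduces to appending to `linked_features`, and adding/deleting annotations to inserting/removing records from the annotation store.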
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is further configured to allow a user by means of a user interface, to edit at least one of the annotations.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate a task list comprising one or more of the annotations.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to update the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to forward the task list to at least one third party.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to distribute different sets of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to search annotations according to at least one annotation indexing.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system further comprises at least one rendering module configured to render the RDMF according to user defined criteria.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the receiving module is configured to accept the DMF in a rendered form.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the receiving module is configured to accept the DMF in a compressed form and extract the rendered form.
The present invention provides a computer-readable storage medium having stored therein a computer program loadable into a processor of a communication system; the communication system comprising a communication network attached to one or more end users; wherein the computer program comprises code adapted to perform a method for reviewing a plurality of sequential Digital Media Files (DMFs); the method comprising: (a) receiving at least one first and at least one second DMFs from at least one user; (b) comparing the visual and/or audio content of each of the DMFs and detecting visual and/or audio changes between the DMFs by means of the analysis module; (c) recognizing one or more features comprising the detected changes by means of the recognition module; (d) logging the detected changes information and the recognition information; (e) generating at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the feature recognition in comparison to at least one first DMF; and, (f) forwarding the at least one RDMF to at least one recipient; wherein step (e) additionally comprises the steps of: (i) generating at least one RDMF for every DMF version received in comparison to at least one previous RDMF; and, (ii) adding the newly generated change information to the previous changes information, thereby generating a RDMF comprising sequential hierarchy and/or history of detected changes and/or the changed feature recognition between different sequential DMFs.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in a manner selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
It is another object of the present invention to disclose the method as defined in any of the above, further wherein the step of comparing the visual and/or audio content of each of the DMFs follows performing at least one of: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (l) determining visual characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (m) determining shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determining timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, wherein the step of detecting changes additionally comprises the steps of: (a) distinguishing between the different shots to determine shot separators in each DMF; (b) determining shot correspondence between the shots in the first DMF and second DMF, to determine matched shots; (c) determining frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames; and, (d) detecting differences between the DMFs in level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, according to the correspondence.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of graphically presenting at least a portion of the detected changes information visually in the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of further providing the system with at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, wherein each generated annotation of the detected change comprises at least a portion of the detected changed information.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
It is another object of the present invention to disclose the method as defined in any of the above, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of: the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module enabling a user, by means of a user interface, to edit at least one of the annotations.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating a differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
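The task-list update described above can be sketched as an event-driven refresh. This is only an illustrative sketch: the event names, the `resolved` flag, and the dictionary schema are assumptions, not the disclosed design.

```python
# Minimal sketch: a task list derived from annotations, rebuilt whenever a
# change-detection or user-edit event occurs.
class TaskList:
    def __init__(self):
        self.tasks = []

    def rebuild(self, annotations):
        # One open task per annotation that is not yet resolved.
        self.tasks = [a["text"] for a in annotations if not a.get("resolved")]

    def on_event(self, event, annotations):
        # The two triggering events named in the text: a detected change in
        # the DMF, or a user edit of an annotation.
        if event in ("change_detected", "annotation_edited"):
            self.rebuild(annotations)

anns = [{"text": "fix shot 3 color", "resolved": False},
        {"text": "shorten intro", "resolved": True}]
tl = TaskList()
tl.on_event("change_detected", anns)
print(tl.tasks)   # only unresolved annotations become open tasks
```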
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module forwarding the task list to at least one third party.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising the change, DMF version, timing within the DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor searching the annotations according to at least one annotation index.
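The multi-key indexing and searching described in the two preceding paragraphs can be sketched as below. The key names (`dmf_version`, `feature`, `tag`) are a small illustrative subset of the criteria listed above, and the dictionary-based schema is an assumption.

```python
from collections import defaultdict

# Sketch of a multi-key annotation index: each annotation is filed under every
# criterion it carries, so the processor can later search by any of them.
class AnnotationIndex:
    def __init__(self):
        self.by_key = defaultdict(list)

    def index(self, ann):
        for key in ("dmf_version", "feature", "tag"):
            if key in ann:
                self.by_key[(key, ann[key])].append(ann)

    def search(self, key, value):
        # Returns all annotations indexed under the given criterion/value pair.
        return self.by_key.get((key, value), [])

idx = AnnotationIndex()
idx.index({"text": "logo moved", "dmf_version": 2, "feature": "logo"})
idx.index({"text": "audio ducked", "dmf_version": 2, "tag": "sound"})
print([a["text"] for a in idx.search("dmf_version", 2)])
```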
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
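The history database of detected changes can be sketched as a per-version store that is queryable by feature or by version. This is a minimal sketch under assumed record fields (version number, feature name, description); the disclosed system would store the richer change information described elsewhere.

```python
# Sketch of a change-history store: each received DMF version contributes
# change records, and the accumulated history is retrievable per feature
# (how one feature evolved across versions) or per version (what changed
# in one revision).
class ChangeHistory:
    def __init__(self):
        self.records = []   # (version, feature, description) tuples

    def log(self, version, feature, description):
        self.records.append((version, feature, description))

    def for_feature(self, feature):
        return [(v, d) for v, f, d in self.records if f == feature]

    def for_version(self, version):
        return [(f, d) for v, f, d in self.records if v == version]

hist = ChangeHistory()
hist.log(2, "logo", "moved to top-right")
hist.log(3, "logo", "recolored")
hist.log(3, "audio", "music replaced")
print(hist.for_feature("logo"))
```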
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of providing the system further comprising at least one rendering module configured to render the RDMF according to user defined criteria.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the receiving module accepting the DMF in a rendered form.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
The present invention provides a computer implemented method for reviewing Digital Media Files (DMF) received from at least one user-end, via a communication system, to at least one server computer, comprising the steps of: (a) providing: (i) the server comprising at least one memory storage operatively coupled with at least one processor; (ii) at least one receiving module operatively in communication with the server, configured to receive first and at least one second DMFs from at least one user; (iii) at least one analysis module operatively in communication with the server, configured to detect visual and/or audio changes between the DMFs; and, (iv) at least one recognition module operatively in communication with the server, configured to recognize features comprising changes between the DMFs; (b) receiving at least one first and at least one second DMFs by means of the receiving module; (c) comparing the visual and/or audio content of each of the DMFs and detecting visual and/or sound changes between the DMFs by means of the analysis module; (d) recognizing features comprising the detected changes by means of the recognition module; (e) logging the detected changes information and the recognition information; (f) generating at least one reviewed digital media file (RDMF) comprising the second DMF, configured to present at least a portion of the detected changes information and the changed feature recognition in comparison to at least one first DMF; and, (g) forwarding the at least one RDMF to at least one recipient; wherein step (f) additionally comprises the steps of: (a) generating at least one RDMF for every DMF version received, in comparison to at least one previous RDMF; and, (b) adding the newly generated change information to the previous changes information, thereby generating an RDMF comprising a sequential hierarchy and/or history of detected changes and/or recognized features between different sequential DMFs.
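Steps (b) through (f) of the method above can be sketched end to end as follows. This is only an illustrative sketch: frames are represented as plain strings standing in for decoded video, and the per-frame comparison and "recognition" are trivial placeholders for the visual/audio analysis and feature recognition the method actually performs.

```python
# End-to-end sketch of steps (b)-(f): receive two versions, diff them,
# recognize what changed, log the change information, and emit an RDMF
# that carries that information alongside the second version.
def detect_changes(first, second):
    # Step (c): naive per-frame comparison (placeholder for visual/audio analysis).
    return [i for i, (a, b) in enumerate(zip(first, second)) if a != b]

def recognize_features(second, changed):
    # Step (d): placeholder recognition -- here the "feature" is just the
    # new frame content at each changed position.
    return {i: second[i] for i in changed}

def review(first, second):
    changed = detect_changes(first, second)
    features = recognize_features(second, changed)
    log = [{"frame": i, "was": first[i], "now": second[i]} for i in changed]  # step (e)
    rdmf = {"frames": second, "changes": log, "features": features}           # step (f)
    return rdmf

v1 = ["title", "logo:red", "outro"]
v2 = ["title", "logo:blue", "outro"]
rdmf = review(v1, v2)
print(rdmf["changes"])
```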
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in a manner selected from a group consisting of: in at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
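The tolerance-based fingerprint described above can be sketched as below. The characteristic names (`hue`, `area`) and tolerance values are purely illustrative assumptions; the disclosed recognition module would fingerprint whatever visual or audio characteristics it extracts.

```python
# Sketch of a feature "fingerprint" carrying per-characteristic tolerances:
# a candidate feature is recognized as the same feature if every stored
# characteristic falls within its tolerance band.
def make_fingerprint(characteristics, tolerances):
    return {"chars": characteristics, "tol": tolerances}

def matches(fingerprint, candidate):
    return all(
        abs(candidate.get(k, float("inf")) - v) <= fingerprint["tol"].get(k, 0)
        for k, v in fingerprint["chars"].items()
    )

# A "logo" feature: hue around 210 (+/- 10), ~4% of frame area (+/- 1%).
logo = make_fingerprint({"hue": 210.0, "area": 0.04}, {"hue": 10.0, "area": 0.01})
print(matches(logo, {"hue": 215.0, "area": 0.045}))   # within tolerance
print(matches(logo, {"hue": 5.0, "area": 0.04}))      # hue far outside tolerance
```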
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information with a feature in the DMF, the RDMF, or both.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising the hierarchy and/or history of detected changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising the hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of said analysis module further performing at least one of the following instructions: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (l) determining visual characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (m) determining shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determining timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, said step of detecting changes by said analysis module additionally comprising the steps of: further providing at least one matching module, operatively in communication with said server, configured to match corresponding portions of said DMFs; further wherein said server performs instructions comprising: (a) receiving at least one first and at least one second DMFs by said receiving module; (b) distinguishing between different shots to determine shot separators by said matching module in each said DMF; (c) determining shot correspondence between said shots in said first DMF and said second DMF, to determine matched shots by said matching module; (d) determining frame correspondence between said frames in said first DMF and said second DMF, within said matched shots, to determine matched frames by said matching module; (e) detecting differences between said DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by said analysis module; and, (f) logging said detected changes and/or detected features comprising said changes.
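The shot-splitting, shot-matching, and frame-diffing steps above can be sketched as follows. This is only an illustrative sketch: frames are strings, a shot boundary is marked by a literal `"CUT"` token, and shots are matched by position; the disclosed matching module would compare actual visual and audio signatures to find correspondences.

```python
# Sketch of steps (b)-(e): split each version into shots, match corresponding
# shots, then diff frames within each matched shot pair.
def split_shots(frames):
    # Placeholder boundary rule: a frame tagged "CUT" separates shots.
    shots, current = [], []
    for f in frames:
        if f == "CUT":
            shots.append(current)
            current = []
        else:
            current.append(f)
    shots.append(current)
    return shots

def match_shots(shots_a, shots_b):
    # Greedy positional matching; a real matcher would score shot similarity
    # to handle inserted, deleted, or reordered shots.
    return list(zip(shots_a, shots_b))

def diff_frames(shot_a, shot_b):
    return [i for i, (a, b) in enumerate(zip(shot_a, shot_b)) if a != b]

v1 = ["a1", "a2", "CUT", "b1", "b2"]
v2 = ["a1", "a2", "CUT", "b1", "b2x"]
pairs = match_shots(split_shots(v1), split_shots(v2))
changes = [(n, diff_frames(sa, sb)) for n, (sa, sb) in enumerate(pairs)]
print(changes)   # per-shot list of changed frame indices
```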
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of graphically presenting at least a portion of the detected changes information in the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of further providing the system with at least one annotation module operatively in communication with the processor, the annotation module being configured to generate at least one annotation in association with at least one DMF feature.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, wherein each generated annotation of a detected change comprises at least a portion of the detected change information.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
It is another object of the present invention to disclose the method as defined in any of the above, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding an annotation, deleting an annotation, changing annotation details, manipulating the graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along the timeline, editing the annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module enabling a user, by means of a user interface, to edit at least one of the annotations.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating a differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module forwarding the task list to at least one third party.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising the change, DMF version, timing within the DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor searching the annotations according to at least one annotation index.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of providing the system further comprising at least one rendering module configured to render the RDMF according to user defined criteria.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the receiving module accepting the DMF in a rendered form.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
The present invention provides a computer implemented method for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), the DMFs each comprising one or more shots, each shot comprising one or more frames, the method characterized by the steps of: (a) providing: (i) at least one receiving module configured to receive first and at least one second DMFs; (ii) at least one matching module configured to match corresponding portions of the DMFs; (iii) at least one analysis module configured to detect visual and/or audio changes between the DMFs; (iv) at least one recognition module configured to recognize features comprising changes between the DMFs; (v) at least one annotation module configured to generate at least one annotation; and, (vi) a non-transitory computer readable storage medium (CRM), operatively in communication with the receiving module, the analysis module, the matching module, the annotation module and the recognition module, the CRM having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (1) receive at least one first and at least one second DMFs by the receiving module; (2) distinguish between the different shots to determine shot separators by the matching module in each DMF; (3) determine shot correspondence between the shots in the first DMF and the second DMF, to determine matched shots by the matching module; (4) determine frame correspondence between the frames in the first DMF and the second DMF, within the matched shots, to determine matched frames by the matching module; (5) detect differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by the analysis module; (6) recognize one or more features associated with a detected difference, by the recognition module; (7) log the detected changes and/or detected features comprising the changes; (8) generate at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the feature recognition in comparison to at least one first DMF; and, (9) annotate each detected change by the annotation module; further wherein the annotation of a detected change feature is transferred to all of the changed feature derivatives in the RDMF and in sequential derivative RDMFs.
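The final clause, transferring an annotation of a changed feature to all of that feature's derivatives in later RDMFs, can be sketched as follows. The feature identifiers and the derivative map are illustrative assumptions; the disclosed system would track feature identity through the recognition module's fingerprints rather than explicit ids.

```python
# Sketch of annotation propagation: notes attached to a changed feature are
# carried forward onto every feature derived from it in subsequent RDMF
# versions.
def propagate(annotations, derivatives):
    """annotations maps feature id -> list of notes;
    derivatives maps a feature id -> the feature ids derived from it."""
    carried = dict(annotations)
    for src, notes in annotations.items():
        for derived in derivatives.get(src, []):
            carried.setdefault(derived, []).extend(notes)
    return carried

anns = {"logo_v2": ["recolor approved"]}
derived = {"logo_v2": ["logo_v3", "logo_v3_closeup"]}
print(propagate(anns, derived))
```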
It is another object of the present invention to disclose the method as defined in any of the above, wherein the correspondence of a shot and/or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of determining area correspondence within matched frames.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in a manner selected from a group consisting of: in at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information with a feature in the DMF, the RDMF, or both.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising the hierarchy and/or history of detected changes between a plurality of sequential DMFs.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising the hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the analysis module performing analysis of the visual and/or audio content of each DMF by performing at least one of the following instructions: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (l) determining visual characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (m) determining shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determining timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of graphically presenting at least a portion of the detected changes information in the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating at least one annotation in association with at least one DMF feature.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
It is another object of the present invention to disclose the method as defined in any of the above, wherein each generated annotation of a detected change comprises at least a portion of the detected change information.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
It is another object of the present invention to disclose the method as defined in any of the above, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding an annotation, deleting an annotation, changing annotation details, manipulating the graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along the timeline, editing the annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module enabling a user, by means of a user interface, to edit at least one of the annotations.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating a differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module forwarding the task list to at least one third party.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor searching the annotations according to at least one annotation indexing.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of providing the system further comprising at least one rendering module configured to render the RDMF according to user defined criteria.
It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the receiving module accepting the DMF in a rendered form.

It is another object of the present invention to disclose the method as defined in any of the above, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
The present invention provides a detection and annotation system, useful for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), the DMFs each comprising one or more shots, each shot comprising one or more frames, the system comprising: (a) at least one receiving module operatively in communication with a server, configured to receive a first and at least one second DMF; (b) at least one matching module configured to match corresponding portions of the DMFs; (c) at least one analysis module configured to detect visual and/or audio changes between the DMFs; (d) at least one recognition module configured to recognize features comprising changes between the DMFs; (e) at least one annotation module configured to generate at least one annotation; and, (f) a non-transitory computer readable storage medium (CRM) in communication with the receiving module, the matching module, the analysis module, the recognition module, and the annotation module; the CRM, operatively coupled to at least one processor, having computer executable instructions that configure the at least one processor to perform steps comprising: (i) receiving at least one first and at least one second DMF by the receiving module; (ii) distinguishing between the different shots in each DMF to determine shot separators by the matching module; (iii) determining shot correspondence between the shots in the first DMF and the second DMF, to determine matched shots by the matching module; (iv) determining frame correspondence between the frames in the first DMF and the second DMF, within the matched shots, to determine matched frames by the matching module; (v) detecting differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by the analysis module; (vi) recognizing one or more features associated with a detected difference, by the recognition module; (vii) logging the detected changes and/or detected features comprising the changes; (viii) generating at least one reviewed digital media file (RDMF) comprising the second DMF, configured to present at least a portion of the detected changes information and the feature recognition in comparison to at least one first DMF; and, (ix) annotating each detected change by the annotation module; further wherein the annotation of a detected change feature is transferred to all of the changed feature derivative files in the DMF and sequential DMFs.
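By way of non-limiting illustration, steps (ii)–(ix) above can be sketched as follows, assuming a DMF is modeled as a list of shots and each frame is reduced to an integer signature. All function names and data shapes are hypothetical and not part of the disclosed system:

```python
# Minimal sketch: match shots between two DMF versions, then log frame-level
# differences within matched shots as annotation records.

def match_shots(dmf_a, dmf_b):
    """Step (iii): pair each shot of the first DMF with the second-DMF shot
    sharing the most frame signatures."""
    pairs = []
    for i, shot_a in enumerate(dmf_a):
        best_j, best_overlap = None, 0
        for j, shot_b in enumerate(dmf_b):
            overlap = len(set(shot_a) & set(shot_b))
            if overlap > best_overlap:
                best_j, best_overlap = j, overlap
        if best_j is not None:
            pairs.append((i, best_j))
    return pairs

def detect_changes(dmf_a, dmf_b):
    """Steps (iv)-(vii): within matched shots, log frames whose
    signatures differ between the two versions."""
    annotations = []
    for i, j in match_shots(dmf_a, dmf_b):
        for k, (fa, fb) in enumerate(zip(dmf_a[i], dmf_b[j])):
            if fa != fb:
                annotations.append(
                    {"shot": j, "frame": k, "old": fa, "new": fb})
    return annotations

v1 = [[1, 2, 3], [4, 5, 6]]   # first DMF: two shots of three frames each
v2 = [[1, 2, 9], [4, 5, 6]]   # second DMF: third frame of first shot changed
changes = detect_changes(v1, v2)
```

In practice the frame signatures would be visual fingerprints (e.g. histograms) and the matching tolerant rather than exact equality.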
It is another object of the present invention to disclose the system as defined in any of the above, wherein the correspondence of a shot and /or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, additionally comprising instructions to determine area correspondence within matched frames.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to present at least a portion of the changed features hierarchy and/or history in at least one location selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
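As a non-limiting sketch, a fingerprint comprising tolerances of feature characteristics may be represented as a set of (value, tolerance) bands, a candidate feature being recognized when each of its characteristics falls within the corresponding band. The characteristic names and tolerance value here are assumptions for illustration only:

```python
# Illustrative feature fingerprint with per-characteristic tolerances.

def make_fingerprint(characteristics, tolerance=0.1):
    """Store each characteristic value together with its tolerance band."""
    return {name: (value, tolerance)
            for name, value in characteristics.items()}

def matches(fingerprint, candidate):
    """Recognize the candidate if every fingerprinted characteristic is
    present and within tolerance."""
    return all(
        name in candidate and abs(candidate[name] - value) <= tol
        for name, (value, tol) in fingerprint.items())

fp = make_fingerprint({"hue": 0.55, "area": 0.20})
```

The tolerance bands are what allow the recognition module to re-identify a feature even after minor edits between versions.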
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to relate at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to search and optionally associate an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/ or history of detected changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/ or history of recognized features comprising changes between a plurality of sequential DMFs.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the analysis module is configured to analyze the visual and/or audio content of each of the DMFs by performing at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determine visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both; (m) determine shot transition characteristics correspondence between at least one first DMF and at least one second DMF; (n) determine timing characteristics correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
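Instruction (a) above — determining shot separation — can be illustrated, purely as a sketch, by thresholding the difference between consecutive frame signatures (single floats standing in for richer visual features; the threshold value is an arbitrary assumption):

```python
# Sketch of shot-boundary detection by thresholding frame-to-frame change.

def shot_separators(frames, threshold=0.5):
    """Return indices where the difference between consecutive frame
    signatures exceeds the threshold, i.e. likely hard cuts."""
    return [i for i in range(1, len(frames))
            if abs(frames[i] - frames[i - 1]) > threshold]

def split_into_shots(frames, threshold=0.5):
    """Split the frame sequence into shots at the detected separators."""
    cuts = [0] + shot_separators(frames, threshold) + [len(frames)]
    return [frames[a:b] for a, b in zip(cuts, cuts[1:])]

timeline = [0.1, 0.12, 0.11, 0.9, 0.92, 0.91]   # one hard cut mid-sequence
```

Production systems would use histogram or feature-based distances and handle gradual transitions (fades, dissolves) separately.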
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to graphically present at least a portion of the detected changes information in the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to present the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system further comprises at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, further comprising an annotation module configured to associate at least one annotation with at least one portion of the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate annotations for one or more detected changes in the RDMF.
It is another object of the present invention to disclose the system as defined in any of the above, wherein each generated annotation of the detected change comprises at least a portion of detected changed information.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is further configured to allow a user, by means of a user interface, to edit at least one of the annotations.

It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to generate differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
It is another object of the present invention to disclose the system as defined in any of the above, wherein said annotation module is configured to be updated following at least one event selected from a group consisting of: detecting a change in said DMF, editing of said annotation by at least one said user, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein said annotation module is configured to generate a task list comprising one or more of the annotations.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to forward the task list to at least one third party.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to distribute different sets of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is configured to search annotations according to at least one annotation indexing.
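The indexing and searching of annotations described above can be illustrated by a minimal in-memory index in which each annotation is registered under every (field, value) pair it carries; the field names are hypothetical and stand in for the criteria enumerated above (DMF version, feature, user tag, etc.):

```python
# Sketch of an annotation index supporting lookup by any indexed field.
from collections import defaultdict

class AnnotationIndex:
    def __init__(self):
        self._by_key = defaultdict(list)

    def add(self, annotation):
        # Index the annotation under every (field, value) pair it carries.
        for field, value in annotation.items():
            self._by_key[(field, value)].append(annotation)

    def search(self, field, value):
        # Return all annotations indexed under the given field/value.
        return list(self._by_key[(field, value)])

idx = AnnotationIndex()
idx.add({"version": 2, "feature": "logo", "tag": "color"})
idx.add({"version": 3, "feature": "logo", "tag": "timing"})
```

A persistent implementation would back this with a database, but the lookup contract — any indexed criterion retrieves its annotations — is the same.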
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system is configured to generate a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.

It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extract at least one portion from the RDMF, and any combination thereof.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the system further comprises at least one rendering module configured to render the RDMF according to user defined criteria.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the receiving module is configured to accept the DMF in a rendered form.
It is another object of the present invention to disclose the system as defined in any of the above, wherein the receiving module is configured to accept the DMF in a compressed form and extract the rendered form.
BRIEF DESCRIPTION OF THE FIGURES
In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured. In the accompanying drawing:
Fig. 1A is a schematic diagram of an embodiment of the digital media reviewing system; Fig. IB is a schematic diagram of an embodiment of the digital media reviewing system;
Fig. 2 is a schematic diagram of an embodiment of a method of digital media reviewing;
Fig. 3 is a schematic diagram of an embodiment of a system for implementing a method of digital media reviewing;
Fig. 4 is a schematic diagram of an embodiment of the annotating and change detecting method; and,
Fig. 5 is a schematic diagram of an embodiment of the annotating and change detecting system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention provides a system and methods configured to create a flowing production process that interacts effectively with multiple parties performing editing and/or reviewing of digital media, resulting in a professional media editing and/or reviewing system. This enables each party to use their preferred media production software in order to pursue their artistic and technological interests without limitations on sharing and re-editing the media.
The essence of the present invention is to provide a system and a service that support the production process while providing highly effective versioning and reviewing capabilities.
Further in the scope of the present invention is a novel computer-vision-based algorithm to track down changes and match between different progressive versions of the daily media. This matching allows tracking down changes, realizing performed tasks, promoting missing comments from earlier versions, and creating lineages of changed features along different versions on a timeline, thus supporting the creation process without the necessity of sharing the detailed source files.
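The promotion of missing comments from earlier versions can be illustrated, under the simplifying assumption that frames are matched by exact signature equality (real matching would be visual and tolerant), as follows:

```python
# Sketch: carry comments forward onto the next version wherever the
# commented frame's signature still appears; comments anchored to changed
# or removed frames are left behind for review.

def promote_comments(comments, old_frames, new_frames):
    """comments: {frame_index_in_old_version: text}. Returns the comments
    re-anchored onto new_frames, keyed by the matching new index."""
    promoted = {}
    for old_idx, text in comments.items():
        signature = old_frames[old_idx]
        if signature in new_frames:
            promoted[new_frames.index(signature)] = text
    return promoted

old = ["a", "b", "c"]
new = ["b", "c", "d"]        # first frame cut, one new frame appended
carried = promote_comments({0: "fix sky", 2: "good take"}, old, new)
```

Comments that cannot be promoted (here, the one on the cut frame) are exactly the "missing comments" the reviewer should revisit.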
The present invention complements existing automation tools by supporting the specialized content creation process, facilitating an easy to manage, integrate and collaborate system and methods, between the different users contributing to the creation of a digital media file, such as between the participating artists, supervisor and approval factors.
Further in the scope of the present invention is locating and characterizing changes made since the last version, with the option to recognize the exact changes and to update/generate comments accordingly.
The term 'digital media file' interchangeably refers hereinafter to media encoded digitally including, but not limited to: film, video, digital video, movie, cinema, picture, motion pictures, moving image, television programs, radio programs, advertisements, audio recordings and/or compilations, photography, digital art, clip(s), music clips, animation, record, tape, digital presentation, and any combination thereof. The digital media is preferably a film. A film comprises at least one 'shot'; each shot comprises at least one 'frame', typically a sequential series of frames. A shot is a continuous piece of video or film footage (e.g. between pressing "record" and "stop", or between two cuts). A frame is one of the many still images which compose the complete moving picture; typically in digital media it is a rectangular raster of pixels, either in an RGB color space or another color space.
The means and methods of the present invention are aimed at managing and reviewing the post production stage. Each contributor of a collaborative group of people can generate or contribute at least a portion of the post production stage. Further in the scope of the present invention, these portions and portion versions can be integrated, reviewed, and their tasks managed in the process of creating the final digital media file.
The term 'post production' refers hereinafter to processes and means directed at editing and/or reviewing digital media. These can include:
- Editing the visual image, at the frame level (picture), including image editing, color editing, changing at least one feature appearance, changing at least a portion of an image including but not limited to trimming, cutting, splitting, merging, rotating, inserting a new portion, applying a filter (e.g. artistic, blur, sharpen, emboss, sketch, etc.), adding a layer, moving a portion, deleting, zooming, and others as known in the art of image processing.
- Editing the shots of a video - changing the length of a shot, cutting one or more portions of a shot, changing the visual parameters of an entire shot (e.g. color, background feature, image appearance), changing the location of the shot within the film timeframe, combining different frames and frame changes within the shot, trim, cut, split, connect portions, merge, delete, rotate and mix.
- Editing shot(s) transitions
- Adding visual special effects - mainly computer-generated imagery, such as 3D and/or 2D animations, 2D and/or 3D images, and/or image portions and/or image layers. This can also include image processing means such as applying filters that enhance/distort/manipulate/crop/resize/zoom, etc., at least a portion of the image.
- Editing the soundtrack of the video: composing, recording, trimming, cutting, splitting, merging, compressing, delaying, de-noising, de-spiking, parametric gain, reverb, inverting, looping, normalizing, pitch shifting, resampling, time-based interpolation, time stretching, and mixing different tracks.
- Adding special effects to the sound track including: sound design, sound effects, sound re- recording and/or mixing.
- Composing the final timeline of the film - including but not limited to: deciding the ordering and placement of the different shots, and determining the beginning and the ending of the film.
The term 'feature' in reference to film/video/digital media refers hereinafter to any portion of the visual and/or audio content of a film, such as, but not limited to: at least a portion of the background image, at least a portion of the foreground image, animation, object, 3D animated object, 2D object, 3D object, character, titles, subtitles, actor, color, timespan, special effect, audio, shot, background audio, main track audio, noise, speaking sound, music, narration audio, lighting, artwork, angle of view, timing (of shots, objects, frames, image manipulations, etc.), and any combination thereof.
The term 'source material' refers hereinafter to the originating source data from which the film is made, such as the 3D object files, the original layering, the relation to each object separately, the source video material, the original sound track file, and the special effects scripts, including the metadata files of specifications, providing unique access to different portions and different parameters of each feature/object individually and/or together, and of the frame as they were defined and composed by the artist, such as layers, object effects, background, audio track, audio effects, etc. Alternatively, the 'rendered version' of a file refers hereinafter to a version of a digital media file "flattened" into a final representation of the material, containing the final appearance and adjustments made up to that editing stage. In a rendered format the separation of the different objects is no longer present, and the file is related to in its entirety.
The term 'Metadata' refers hereinafter to "data about data": structural metadata, about the design and specification of data structures ("data about the containers of data"); and descriptive metadata, about individual instances of application data or the data content. In reference to a digital media file (e.g. video files), this can refer to such as timecode, localization, take number, name of the clip/object, layer, coordinates, track, object properties, etc. When ingesting audio or video feeds, metadata are attached to the clip. The metadata can be attached automatically and/or manually. In an embodiment, detection of changes, recognition of changes, annotations, or any other data relating to a digital media file can be configured or supported with a metadata file. Additionally or alternatively, the metadata file comprises annotation(s) in association with the digital media file. The annotations can include, but are not limited to: changes information, comments, notification of changes, indexes of changes, location of changes, detected features of detected changes, one or more characteristics in which the change is manifested, change date, change provider, link to another location in the digital media, link to an external program and/or memory, and any combination thereof.
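A metadata file carrying such annotations might, purely as an illustrative sketch of an undisclosed file format, serialize records along these lines (all field names are assumptions):

```python
# Sketch of a serializable annotation record attached to a DMF's metadata.
import json

def make_annotation(change_type, timecode, feature, comment, link=None):
    return {
        "change": change_type,    # e.g. "color", "timing", "sound"
        "timecode": timecode,     # location of the change within the DMF
        "feature": feature,       # recognized feature carrying the change
        "comment": comment,       # user or auto-generated note
        "link": link,             # optional pointer to another location
    }

record = make_annotation("color", "00:01:12:05", "logo", "logo hue shifted")
serialized = json.dumps(record)
```

Serializing annotations alongside the clip lets the detection, recognition, and annotation data travel with the digital media file rather than with its source material.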
The term "plurality" interchangeably refers hereinafter to an integer a, where a ≥ 1.
The term "in communication", interchangeably refers hereinafter to any form of exchanging information between components of present invention and/ or between components of the present invention and external devices and/ or systems.
The term "computer" interchangeably refers herein to any electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations. It responds to a specific set of instructions in a well-defined manner and it can execute a prerecorded list of instructions (a program). A computer typically comprises a CPU, a memory (computer readable media), and a user interface (input device, e.g. keyboard, buttons, joystick, touch screen, control panel; output device, e.g. screen, indicator, etc.), and can include a transmitting/receiving module.
It is further in the scope of the present invention that one or more of the methods of the present invention is embodied as at least one of: a locally installed application, a web application, a hosted service, any remote access communication system, (utilizing a local and/or remote memory system) and the computing device on which it is installed is one of: a server, a desktop computer, a laptop computer, a tablet, a smart whiteboard, a smart phone, a smart watch, a video camera, a dedicated video editing device, a hand held device, microprocessor based or programmable consumer electronics, and any combination thereof.
Embodiments can be implemented such as a computer implemented process, a computing system, a communication system, and/ or as an article of manufacture as a computer program product, and/ or computer readable media. Additionally or alternatively, the means and methods of the present invention can be implemented in hardware, software and any combination thereof. Additionally or alternatively, the means and methods of the present invention can be implemented on a single computing device and/ or on a plurality of computing devices, utilizing memory locally and/ or remotely. Additionally or alternatively, a server can be implemented in a network environment or as a virtual server in a computing system and/ or device.
The term "processor", or "CPU" (central processing unit), interchangeably refers hereinafter to any processor: the hardware that carries out the instructions of a program by performing the basic arithmetical, logical, and input/output operations of the system. A computer can have more than one CPU; this is called multiprocessing. The processor can be such as a microprocessor, a multi-core processor, a system on a chip (SoC), an array processor, a vector processor, etc. The CPU is typically connected to a memory unit (storage unit, a unit of a computer or an independent device designed to record, store, and reproduce information) configured to store and retrieve information in various forms (e.g. a database).
The term "Computer readable media", (CRM), interchangeably refers hereinafter to a medium capable of storing data in a format readable by a mechanical device (automated data medium rather than human readable). Examples of machine-readable media include magnetic media such as magnetic disks, cards, flash memory, tapes, and drums, punched cards and paper tapes, optical disks, barcodes and magnetic ink characters. Common machine-readable technologies include magnetic recording, processing waveforms, and barcodes. Optical character recognition (OCR) can be used to enable machines to read information available to humans. Any information retrievable by any form of energy can be machine-readable.
The term "Communication system" interchangeably refers hereinafter to any system providing the passage of information from one end to at least one second end and vice versa. This includes: the Internet system, a local network, a local network including server architecture, a server client communication system, computer telecommunication network, data network and as such as known in the art of data communication.
The term "third party" interchangeably refers herein to any system, user, device, or individual. For example, this includes: a system or device logging the different task lists for statistical analysis or work efficiency checkups, a back-up system, a user (e.g. artist, editor, manager, producer, stakeholder, expert, commissioner, client, etc.), a printing device, an external database, and the like. The third party is typically in communication with said system and/or any modules, portions, or parts thereof, preferably said annotation module of the present invention.
The term "sound" refers hereinafter to any portion of a digital media file triggering a sensation produced by stimulation of the organs of hearing by vibrations transmitted through the air or another medium. This further includes the particular auditory effect produced by a given cause, such as: music, speech, synthetic sound, vocal utterance, animal sound, representation of a naturally occurring sound, beat, rhythm, tone, noise, any auditory effect; any audible vibrational disturbance; a digitally recorded signal; a non-digitally recorded signal; audio relating to, or employed in, the transmission, reception, or reproduction of sound; anything relating to frequencies or signals in the audible range; and any combination thereof.
Locating changes between two production versions is not a simple task, as the differences between versions may take different forms and have side-effects that are both temporal and spatial. It is further in the scope of the present invention to locate and identify changes including: changes in the time level, changes in the shot level, changes in the frame level, changes in the sound level, changes in the visual level, changes in the transition between shots (transition level), changes in the layers, special effect changes, 2D animation changes, 3D animation changes, textual changes, and any combination thereof.
Changes in the time level include changes in time-related characteristics such as, but not limited to: the overall time span of a shot, a scene, a user-defined portion of the film, or the entire film; the timing in the film and the duration of: a transition between two shots, a transition between two animations, a shot, an appearance of at least one feature, a defined musical track, a defined sound segment, a user-defined frame point of appearance, a user-defined audio segment, the duration of an audio (e.g. musical, voice) transition, and any combination thereof; and the frame rate per second. Such a change is manifested by at least one of: trimming at least a portion of the film, shot and/or transition; extending at least a portion of the film, shot and/or transition; removing at least one frame; adding at least one frame; duplicating at least one frame; insertion of longer and/or shorter transitions; changes in the audio content in reference to the timeline (e.g. insertion, cutting, expanding, or compressing at least a segment of audio, or addition of a track); and any combination thereof.
Changes in the shot level include changes in any characteristic of the shot such as, but not limited to: changes in the timing of the shot (the timing at which the shot starts and/or ends; trimming and/or extending at least a portion of a shot by, for example, adding and/or removing at least one frame, resulting in changing the footage duration by spreading/compacting the frames in it, and/or ending with the same final film length); visual changes in the entire shot or in at least a portion of the shot; sound level changes in the entire shot or in at least a portion of the shot; the swapping of shots between locations along the film timeline (changing the order of the shots); deleting at least one shot; adding at least one shot; special effect changes applied to at least a portion of a shot or an entire shot; changes in the sound of at least a portion of a shot; changes in 2D and/or 3D objects within at least a portion of a shot or an entire shot; changes in the FPS (Frames Per Second: the number of video or film frames which are displayed each second); and any combination thereof.
Changes between shots can be further identified, as shots are characterized into shot types following composition analysis, such as: EWS (Extreme Wide Shot), VWS (Very Wide Shot), WS (Wide Shot), MS (Mid Shot), MCU (Medium Close Up), CU (Close Up), ECU (Extreme Close Up), Choker, Cut-In, CA (Cutaway), Two-Shot, OSS (Over-the-Shoulder Shot), Noddy Shot, POV (Point-of-View Shot), Weather Shot, aerial shot, bird's-eye shot, low-angle shot, reverse shot, freeze frame shot, insert shot, and other shots as known in the art of film making.
Changes in the frame level include changes in any frame characteristic when compared to a different media version, such as, but not limited to: changes in the number of times a frame appears, frame deletion, frame addition, swapping of the location of a frame, changes in the visual content of at least a portion of the frame, changes in the audio (sound) content of the frame, frame duplication, special effects on at least a portion of the frame, layer changes within a frame, appearance/disappearance of a 2D or 3D object within a frame, a visual change of a 2D and/or 3D object within a frame, text changes, and any combination thereof.
Changes in the sound content include changes in any sound-related characteristic such as, but not limited to: the sound track; changes in the sound pitch; changes in the sound tone; changes in the sound transitions; changes in the music; changes in the volume; changes in the relation between the sound tracks (for example, dominance of music over speech); changes in the duration of at least one sound track; changes in special effects of sound (e.g. an applied sound filter); changes in the timing of a specific sound and/or specific sound track and/or at least one specific sound portion; deletion of at least a portion of a sound track; changing the volume of at least a portion of a sound track (a decibel change and/or gain of a shot's audio); the timing of a sound peak within a defined portion; a reverberation change; identifying a flanging effect; an amplified or attenuated signal change; stretching sound by addition and/or duplication of notes, rhythms, transitions, loops, etc.; mixing at least a portion of at least one sound track; changes in the noise ratio; insertion of synthetic sound, music, recording, voice, beat, or noise, and/or blending any of them; fade changes in a shot or between shots; changes in the quality of the sound within at least a portion of the movie; editing the start time, stop time, and duration of any sound on the audio timeline; changes that result in compression or expansion of a sound portion; identification of a pattern in the appearance of a specific sound in relation to a specific feature (such as a 2D or 3D object, a visual component, a text component, a transition, a shot type, a character in the film, a layer, or a special effect) and identification of a change thereof; and any combination thereof. This further includes changes located in the sound at the level of the entire film, at least a portion of the film, at least one shot, at least one frame, and any combination thereof.
Changes in the visual content include changes of any visual characteristic of at least a portion of a frame, such as, but not limited to, changes in: color (hue, RGB or CMYK content and/or ratio, saturation); resolution; contrast; luminance; applied filter (e.g. sharpen, blur, greyscale, posterize, artistic effects such as sketch, photocopy, sepia, fish-eye, and any other filter or filter combination known in the art of image processing); zoom; texture; a change of a predetermined number of pixels in a predefined area (which can be considered a change); trimming of at least a portion of an object (2D, 3D), at least a portion of a frame, or at least a portion of a special effect addition; transformation of at least a portion of the frame image (e.g. crop, rotate, scale, skew, flip, duplicate, and others as known in the art); superimposing at least two images and/or image portions to generate one new image; images produced using bracketing (e.g. exposure, depth of field, ISO, white balance, etc.) combined to create a high dynamic range image, uniting different portions of the image having different parameters into one image; and any combination thereof.
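One of the visual-change criteria above, a predetermined number of changed pixels within a predefined area, can be sketched as a simple pixel-by-pixel comparison. This is an illustrative sketch only: the function name, the grayscale frame representation, and the tolerance values are assumptions, not taken from the specification.

```python
def region_changed(frame_a, frame_b, region, pixel_tol=8, min_changed=10):
    """Return True if the region (x0, y0, x1, y1) differs between the
    two frames in at least `min_changed` pixels.

    Frames are 2D lists of grayscale values; a pixel counts as changed
    when its absolute difference exceeds `pixel_tol`. (Illustrative
    sketch -- names and thresholds are assumptions.)
    """
    x0, y0, x1, y1 = region
    changed = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            if abs(frame_a[y][x] - frame_b[y][x]) > pixel_tol:
                changed += 1
    return changed >= min_changed
```

In a full system, the per-pixel tolerance would absorb compression noise, while the pixel-count threshold decides when a region is flagged as a detected visual change.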
Changes in at least one transition between shots include changes in any transition characteristic such as, but not limited to: the mixing of the sound between shots; the blending of images between shots; the occurrence of frames that are transition-defined frames; the transition timing (length along the timeline, start point, and end point); the sound of at least one transition; special effects; location in the timeline; swapping between transitions; deletion of a transition; trimming or extending a transition; a visual change of at least a portion of a transition; a sound content change of at least a portion of a transition; and any combination thereof.
Most of the videos created today, whether 3D animation, live footage, motion graphics, or a combination of these, consist of a composite of two or more layers (usually significantly more than two). A basic composite will always include a foreground layer blended over a background layer through the foreground's alpha channel. It is very common to change one or more of these layers during the post-production process. Moreover, as each of these layers presents objects/entities such as people, cars, sets, and many others, a change in the appearance of these entities often generates a change in one or more layers. Changes in the layers include, but are not limited to: background changes, foreground changes, 3D viewpoint changes, blending changes, opacity changes, swapping of background and foreground, swapping of the order of at least two features (e.g. a 2D object, a 3D object, background, text, etc.), viewpoint changes, visual changes to at least one layer, and any combination thereof.
The term "special effects" refers herein to digital illusions used in a film to simulate events. Changes in special effects include changes in any special effect characteristic, such as, but not limited to, the appearance of at least one frame or any predetermined number of frames, or sound in reference to at least one frame or any predetermined number of frames. These changes include, for example: split screen; zoom; applying a filter; quality enhancement (color correction, adjust, sharpen, blur, and 'auto-filter'); adding a feature (in the background/foreground and/or blended into a layer, such as explosions, smoke, or laser lighting); special effects for video (chroma key for changing the video background, mosaic, old movie/sepia filter, diffuse, adding/removing noise, transformation (crop, rotate, distort, flip, Picture in Picture)); superimposition of two or more frames and/or sound tracks and/or videos and/or shots to create a single frame and/or image and/or film and/or shot; morphing; bullet time; dolly zoom; perspective change; optical effects; and others as known in the art of special effects; and any combination thereof.
Textual changes include changes in any text-related characteristic such as, but not limited to: a change of at least a portion of the text content, color, size, position, timing, font, and any combination thereof, as part of a frame, as subtitles, as a graphical 2D or 3D object, as part of a moving object, and any combination thereof. This can be accomplished by further employing any text recognition module as known in the art, in combination with visual comparison between different video versions.
The term "changes information" refers hereinafter to any data associated with a change between different digital media versions. This information includes, but is not limited to: the location of the changes (at the level of timing, shot, frame, or pixel), a representation of the appearance before the change, the characteristic changed, the user related to the change, annotation related to the change, feature details, recognition-derived feature parameters, a feature fingerprint, and any combination thereof.
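The "changes information" fields enumerated above could, for instance, be logged as a structured record. The following sketch illustrates one possible shape; the field names and types are illustrative assumptions, not taken from the specification.

```python
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class ChangeRecord:
    """One logged change between two digital media file (DMF) versions.

    Fields mirror the "changes information" listed in the text; the
    exact names and types are illustrative assumptions.
    """
    change_type: str                      # e.g. "visual", "sound", "timing"
    timeline_position: float              # seconds into the DMF
    frame_index: Optional[int] = None     # frame-level location, if known
    region: Optional[Tuple[int, int, int, int]] = None  # pixel-level location
    previous_appearance: Optional[str] = None  # reference to pre-change thumbnail
    user: Optional[str] = None            # user related to the change
    annotation: Optional[str] = None      # comment attached to the change
    feature_fingerprint: Optional[str] = None  # recognition-derived fingerprint

    def to_log_entry(self) -> dict:
        """Serialize for a retrievable log (database row or JSON line)."""
        return asdict(self)
```

A retrievable memory log, as claimed later in the text, would then be a sequence of such entries keyed by the reviewed version that produced them.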
It is also in the scope of the present invention to recognize the exact change following its detection (as mentioned above), and the object/feature related to it. This allows generating and/or updating comments accordingly, and tracking the changed feature/object along the film by tagging it with a comment, optionally while pointing out the change points, so that the comment is transferred to all of its derivatives, if required, in the following revisions, optionally providing lineage to their source material.
The comparison of versions (and optionally inspection of the source material) provided by the present invention is employed to simplify the recognition process.
The recognition process would significantly improve the overall system by providing capabilities such as:
- Detecting a certain feature that was changed. This occurs when a certain scene undergoes a significant change such as a 3D model/animation/texture/lighting change, a camera angle/animation change, or a matte painting change.
- Detecting significant FX changes. This occurs when a certain scene undergoes a significant change such as a 2D motion graphics animation change, a color correction, or an overall composition change (the position of 2D elements in the scene). These changes can also include text changes such as subtitles.
- Recognizing the elements that are the source of every final footage pixel. Determining and storing such a lineage relation allows matching between different versions despite significant changes in lighting, color management, frame setup, frame ordering, and other significant visual changes to the final footage. For example, changing a hat to a different hat corresponds to a significantly different visual representation at the final footage pixel level; however, the lineage relation can track the pixel in the previous revision back to its originating element (the hat), and search for pixels which were rendered from this element in the next revision.
A lineage to the source material can be created by using plugins to the editing software, which examine the composite and mask details for each layer during compositing. This makes it possible to create a reasonably dense assignment of pixels to originating elements, and thus store the lineage relation.
A video production is a process of content transformations: taking 3D objects, animation scripts, camera footage, titles, and other elements, and rendering them together to generate the final video content. The reverse transformation, from the final video content back to the original elements, is therefore not trivial. The present invention further provides generation of a specially designed lineage from the elements to their final rendition, so that any reference to an area in the final frame can be traced back to its original elements. This lineage relation between elements and final frame is indifferent to the originating encoding, video production system, format, and source material.
The lineage generation process begins after locating and recognizing changes between at least two media versions. The changes are logged in a retrievable manner (in, for example, a dedicated database or file) and referenced to the specific reviewed version generated. The logged changes include, for example: the change itself; the feature changed before application of the change (as in the first digital media file); the timing of the changed portion on the timeline; the coordinates of the changed portion; the date and time of generation of the reviewed digital media file; the date and time of each received digital media file; the user details of each file; a small icon graphically representing the change (such as a portion of the changed image in its previous appearance); and any comments or annotations previously added to the change and supplied in connection with the file. Following this log, when a new version of a digital media file is received, a new reviewing process of detecting and recognizing changes is performed, consequently generating a new reviewed digital media file. The newly located and recognized changed features are logged in the same database or file. Importantly, a matching is made between the different features in the previous logging and the new logging, such that they are indexed or tagged sequentially as belonging to the same feature in a specific order. The accumulated logged changes can be presented, for example, in a separate file, as a task list, or as a visual representation on a digital media file or on the reviewed media file. Additionally or alternatively, the hierarchy of changes can be presented in layers on the actual related frames, and/or as links to presenting a change.
Additionally or alternatively, the changes, whether the updated changes or the entire hierarchy of changes, can be presented in a dedicated user interface in connection with the presentation of the media file (such as an on-screen tile/menu/widget/icon/link or a still image/configuration). Additionally or alternatively, the changes can be presented layered on top of at least a portion of the related frame. Additionally or alternatively, the changes are configured to be moved, for example by dragging and dropping, to another portion of the digital media file (whether in the specific frame and/or into another frame(s) and/or into another file).
It is further in the scope of the present invention that location and recognition capabilities provide the following functionality during the version comparison:
• Transferring comments from a previous version to a new one;
• Marking comments for review after the object of a comment is changed;
• Transferring comments on an object to its appearances within the shot;
• Presenting differences between arbitrary versions, allowing the user to realize what was changed. The present invention's solution is based on a two-stage approach: the first stage is locating the changes, followed by recognition of the changes. Professional editing software programs known in the art provide recording of the editor's decisions in an edit decision list (EDL) that is exportable to other editing tools. The present invention additionally provides generation of a list of annotations as a task list, as a metadata file transferred with the digital media file. This list can provide features including: editing of the task list; assigning different priorities to the tasks (which can be manifested, for example, by different graphical representations); deleting/adding an annotation; and searching for an annotation by feature type, time of annotation, portion of the timeline in which it appears, author, etc. The annotations can further include, but are not limited to: comments (in any combination comprising at least one of text, image, icon, and link); links to other locations in the same or different digital media files; links to other annotations; links to external programs and/or websites; and images, icons, links, and/or text representing a recognized feature and/or a detected change.
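The annotation search described above (by feature type, time of annotation, portion of the timeline, author, etc.) might be sketched as a simple filter over a task list. The dictionary keys used here are illustrative assumptions, not taken from the specification.

```python
def search_annotations(annotations, author=None, feature_type=None,
                       time_range=None):
    """Filter a task list of annotations by author, feature type, and/or
    the portion of the timeline in which they appear.

    Each annotation is a dict with (assumed) keys 'author',
    'feature_type', and 'time' (seconds on the timeline).
    """
    results = []
    for ann in annotations:
        if author is not None and ann.get("author") != author:
            continue
        if feature_type is not None and ann.get("feature_type") != feature_type:
            continue
        if time_range is not None:
            start, end = time_range
            if not (start <= ann.get("time", -1) <= end):
                continue
        results.append(ann)
    return results
```

A production task list would likely back this with a database query, but the filterable fields would be the same ones enumerated in the text.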
It is further in the scope of the present invention to provide a method for locating changes between two versions of a film. Locating the changes relies on creating a correspondence between frames (and areas within the frames) in two sequential versions. This is accomplished, for example, by the following steps:
a) Finding shot separators;
b) Determining shot correspondence;
c) Determining frame correspondence within matched shots;
d) Determining area correspondence within matched frames; and
e) Matching sound correspondence.
Examples of performing the above-mentioned steps:
Finding shot separators is done, for example, by using common methods for automatic shot detection, such as a shot detection mechanism based on examining two frames N apart (typically N=3; optionally N=4, N=5, N>3, N>4, N>5, or 5>N>3). The difference between the frames is measured using a pixel-wise distance and a GIST descriptor, to speed up the comparison and to provide more robust detection of changes despite moving objects, lighting, and viewpoints in the video content. A large difference between the two frames indicates the transition to a new shot. To remove false detections in shots with high camera motion, a dynamic threshold algorithm is used, which adjusts the threshold according to other close-by frame differences. This method both localizes the shot transition, and is sufficiently robust to withstand different shot transitions and compression-originated differences.
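The N-apart comparison with a dynamic threshold can be sketched as follows. This is a minimal illustration: frames are represented as flattened grayscale pixel lists, the GIST descriptor step is omitted, and all numeric parameters (window size, threshold factor) are assumptions, not values from the specification.

```python
def frame_distance(f1, f2):
    """Mean absolute pixel-wise distance between two flattened frames."""
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def detect_shot_separators(frames, n=3, window=5, factor=3.0):
    """Flag position i as a shot separator when the distance between
    frames i and i+n greatly exceeds the nearby distances (a dynamic
    threshold built from close-by frame differences).

    `frames` is a list of flattened grayscale frames. Parameters are
    illustrative assumptions; the text also pairs this comparison with
    a GIST descriptor, omitted here for brevity.
    """
    diffs = [frame_distance(frames[i], frames[i + n])
             for i in range(len(frames) - n)]
    separators = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
        neighbours = [diffs[j] for j in range(lo, hi) if j != i]
        if not neighbours:
            continue
        local_mean = sum(neighbours) / len(neighbours)
        # Dynamic threshold: scale the local mean difference.
        if d > factor * max(local_mean, 1.0):
            separators.append(i)
    return separators
```

Because the comparison spans N frames, a single hard cut is flagged at the N consecutive positions that straddle it; a real implementation would collapse such runs into one separator.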
Determining shot correspondence is done, for example, by the following steps: defining a shot as a sequence of frames between two consecutive detected shot separators; and characterizing each shot according to a descriptor of its first and last frames. The descriptor comprises the GIST descriptors of these frames, together with their pixel details (pixel values and color histograms).
In order to match shots, a bipartite matching algorithm (known in the art as the Hungarian algorithm) is used, which matches between the different shots. To discourage shot shuffling over time, an iterative hierarchical approach is applied, which first determines matches only where there is a good fit of the shot signatures, and after that stage encourages shot matches which follow the same linear order in the two revisions. For example, assume revision A with shots A1, A2, A3, and revision B with shots B1, B2, B3, and assume there is a good match between A1 and B1, and between A3 and B3, on the first iteration. This encourages the match of A2 to B2 on the following iterations, even with a worse matching distance between the two. To enrich the shot signature and provide a robust solution, the shot frame length and the shots' current location within the final content are also used, together with additional shot frame statistics, such as color histograms.
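The bipartite shot assignment can be illustrated with a brute-force minimum-cost matching over a shot-signature distance matrix. This permutation search stands in for the Hungarian algorithm named above and is practical only for small shot counts; the cost matrix is assumed to be precomputed from the shot signatures (GIST, pixel values, histograms).

```python
from itertools import permutations

def match_shots(cost):
    """Minimum-cost one-to-one assignment between the shots of
    revision A (rows) and revision B (columns) of an n-by-n cost
    matrix of signature distances.

    Brute force over permutations -- a stand-in for the Hungarian
    algorithm, adequate only for small n. Returns (assign, total)
    where shot i of A matches shot assign[i] of B.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return list(best_perm), best_cost
```

The iterative hierarchical refinement described in the text would then re-run the assignment with the cost matrix adjusted to reward matches that preserve the linear shot order of the two revisions.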
Determining frame correspondence within matched shots is accomplished by examining the matching shots to find an exact correspondence. The set of shot separators is used as a basis for an adaptation of an iterative dynamic time warping (DTW) algorithm. This algorithm finds the best frame correspondence, taking the frame sequence into account as well. To speed up the calculation of the DTW, a Sakoe-Chiba band is optionally used during the calculation.
This method also allows detecting and matching cases where frames have been added or deleted, or have changed their speed.
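The DTW adaptation, with the optional Sakoe-Chiba band, might be sketched as follows over a precomputed frame-distance matrix. This is an illustrative sketch of the standard technique, not the claimed implementation.

```python
def dtw_frame_correspondence(dist, band=None):
    """Dynamic time warping over a frame-distance matrix `dist`
    (rows: frames of version A, columns: frames of version B).

    Returns the warping path as (i, j) pairs, i.e. the frame
    correspondence. `band` optionally restricts |i - j| to a
    Sakoe-Chiba band, speeding up the calculation.
    """
    n, m = len(dist), len(dist[0])
    INF = float("inf")
    acc = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if band is not None and abs(i - j) > band:
                continue  # outside the Sakoe-Chiba band
            best_prev = 0.0 if i == j == 0 else min(
                acc[i - 1][j] if i > 0 else INF,                 # frame deleted
                acc[i][j - 1] if j > 0 else INF,                 # frame added
                acc[i - 1][j - 1] if i > 0 and j > 0 else INF,   # frame match
            )
            acc[i][j] = dist[i][j] + best_prev
    # Backtrack from the end to recover the correspondence path.
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while (i, j) != (0, 0):
        candidates = []
        if i > 0:
            candidates.append((acc[i - 1][j], i - 1, j))
        if j > 0:
            candidates.append((acc[i][j - 1], i, j - 1))
        if i > 0 and j > 0:
            candidates.append((acc[i - 1][j - 1], i - 1, j - 1))
        _, i, j = min(candidates)
        path.append((i, j))
    return path[::-1]
```

Added or deleted frames appear on the path as horizontal or vertical steps, which is how this step detects frame insertion, deletion, and speed changes.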
Determining area correspondence within matched frames is accomplished, for example, by methods using inter- and intra-frame patch distances to build a dynamic threshold which is both robust and sufficiently sensitive to changes in the signatures. Optionally, advanced data structures such as a quad-tree or MinHash are utilized for storing and comparing the feature descriptors, to provide fast and compact storage.
Specifically, invariant feature detectors and descriptors (both area features and point features), such as SIFT or HOG, are applied to each frame. The combination of these features within a rectangular patch, often referred to as a bag-of-features, serves as a signature of the patch. The signature enables the detection of similar patches in the corresponding frames (of the other version), and the matching of similar patches within the following/preceding frames (of the same version). To detect the matching area of a given patch, a set of matching methods is applied in order of increasing computational effort; when a match is found, the process stops. When no match is found, the next method in the following example can be used:
- This example refers to the frame in the current revision as frame A, and to the frame in the previous revision as frame B.
a) Attempt to match the rectangular patch in frame A to a rectangular patch in frame B, using a naive pixel distance.
b) Attempt to apply template matching of the pixel values (a convolution of the patch in frame A onto frame B). This method accommodates movements within the frame, with minimal change in appearance.
c) Attempt to apply bag-of-features matching. Search for similar regions in frame B which correspond to the signature of the patch in frame A. This method is more robust to changes in the appearance, slight rotation and movement of the patch.
d) Additionally or alternatively, CV trackers (such as Median-Flow or similar) are used to discover the patch match between frame A and frame B, by allowing the tracker to track the patch to the nth frame after frame A (or backward). Then the signature of the patch in that frame is extracted and located in the nth frame after frame B. If found, the tracker is applied to track the detected patch back n frames, and the resulting patch of this tracking process is the patch match.
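Stages (a) and (b) of this escalating strategy can be sketched as follows; the bag-of-features stage (c) and tracker stage (d) are omitted for brevity, and the tolerance value and grayscale frame representation are assumptions for the example.

```python
def patch_distance(frame, x, y, patch):
    """Mean absolute difference between `patch` and the same-sized
    region of `frame` whose top-left corner is (x, y)."""
    h, w = len(patch), len(patch[0])
    total = sum(abs(frame[y + r][x + c] - patch[r][c])
                for r in range(h) for c in range(w))
    return total / (h * w)

def match_patch(frame_a, frame_b, x, y, w, h, tol=5.0):
    """Locate the (x, y, w, h) patch of frame A inside frame B using
    stages (a) and (b) of the escalating strategy: same-position pixel
    distance first, then an exhaustive template search over frame B.
    Returns the matched top-left corner in frame B, or None.
    """
    patch = [row[x:x + w] for row in frame_a[y:y + h]]
    # Stage (a): naive pixel distance at the original location.
    if patch_distance(frame_b, x, y, patch) <= tol:
        return (x, y)
    # Stage (b): template matching -- slide the patch over frame B.
    best, best_d = None, float("inf")
    for by in range(len(frame_b) - h + 1):
        for bx in range(len(frame_b[0]) - w + 1):
            d = patch_distance(frame_b, bx, by, patch)
            if d < best_d:
                best, best_d = (bx, by), d
    return best if best_d <= tol else None
```

When stage (b) also fails, the text escalates to bag-of-features matching and finally to tracker-assisted matching, which tolerate larger appearance changes at a higher computational cost.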
Matching sound correspondence is used in parallel and/or at least partially sequentially to the processes described above. For example, a DTW on the audio track frequency spectrum is applied to define a matching between audio signals. According to user specifications, it is possible either to enforce agreement between the frame matching and the audio matching, or to allow both to co-exist in cases where they do not agree on a match. Audio matching can begin by matching the sound fingerprint of at least a portion of the soundtrack (e.g. a fingerprint based on time, frequency, noise, and intensity of the sound signal) while allowing at least one free parameter, such as a range in time, to provide location flexibility for changed items. The parameters can be kept in a hash table and compared. Another option is to allow no free parameters, but to find areas that match above a certain predefined threshold of matching points of the fingerprint. Other means known in the art include: overlap-analysis, which detects overlap in two audio files; waveform-compare; sound-match, which detects occurrences of a smaller audio file within a larger audio file; utilizing signal processing techniques such as cross-correlation (a measure of similarity of two series as a function of the lag of one relative to the other); matching the dynamic range of the decibel level and/or frequency of at least a portion of the audio as outputted by the editing and rendering audio system; comparison of a defined harmony, or a defined rhythm or beat; and identifying and comparing soundless frames and/or shots.
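The cross-correlation technique mentioned above, similarity of two series as a function of the lag of one relative to the other, can be sketched on raw sample lists. This is an illustrative sketch; a real system would typically correlate fingerprints or frequency spectra rather than raw samples.

```python
def best_lag(signal_a, signal_b, max_lag):
    """Find the lag of `signal_b` relative to `signal_a` that maximizes
    their cross-correlation. A positive result means signal_b is
    delayed relative to signal_a. Minimal sketch on raw sample lists.
    """
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, a in enumerate(signal_a):
            j = i + lag
            if 0 <= j < len(signal_b):
                score += a * signal_b[j]  # correlation at this lag
        if score > best_score:
            best, best_score = lag, score
    return best
```

The recovered lag is the free time parameter discussed above: it tells the matcher how far a sound segment has shifted along the timeline between versions.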
Reference is now made to Fig. 1A, schematically representing, in an out-of-scale manner, an embodiment of the invention. According to one embodiment of the present invention, a system (100), useful for reviewing a plurality of sequential Digital Media Files (DMF), comprises: (a) a receiving module (110), configured to receive at least one first DMF and at least one second DMF; (b) an analysis module (120), configured to detect changes between at least one first DMF and one second DMF; (c) a recognition module (130), configured to recognize at least one feature comprising one or more of the detected changes along the DMF; and (d) a non-transitory computer readable storage medium (CRM) (150) operatively in communication with the receiving module (110), the analysis module (120) and the recognition module (130), the CRM (150) having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (i) receive at least one first and at least one second DMF by means of the receiving module; (ii) compare the visual and/or audio content of each of the DMFs and detect visual and/or audio changes between the DMFs by means of the analysis module; (iii) recognize features comprising the detected changes by means of the recognition module; (iv) log the detected changes information and the recognized features comprising changes information; (v) generate at least one reviewed digital media file (RDMF) of the second DMF configured to present at least a portion of the detected changes and/or changed feature recognition in comparison to at least one first DMF; and (vi) forward the at least one RDMF to at least one recipient; wherein the system is configured to generate at least one RDMF for every DMF version received, in comparison to at least one previous RDMF; further wherein the system is configured to generate at least one file associated with the RDMF comprising a sequential hierarchy and/or history of the detected changes and/or the recognized features between different sequential DMFs.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to present at least a portion of the changed-features hierarchy and/or history in a manner selected from a group consisting of: in at least a portion of a user interface, in external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to relate at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to search for, and optionally associate, an annotation and/or at least a portion of the changes information with a feature in the DMF, the RDMF, or both.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate at least one retrievable memory log comprising a hierarchy and/or history of detected changes between a plurality of sequential DMFs.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate at least one retrievable memory log comprising a hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.

According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the analysis module is configured to perform at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (l) determine visual characteristic correspondence between at least one first DMF and at least one second DMF frames, shots, or both; (m) determine shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determine timing characteristic correspondence between at least one first DMF and at least one second DMF; and (o) any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, wherein the analysis module further comprises at least one matching module operatively in communication with the CRM, configured to match corresponding portions of the DMFs; the CRM further comprising computer executable instructions that configure one or more processors to perform the instructions comprising: (a) receiving at least one first and at least one second DMFs by the receiving module; (b) distinguishing between the different shots to determine shot separators by the matching module in each DMF; (c) determining shot correspondence between the shots in the first DMF and the second DMF, to determine matched shots by the matching module; (d) determining frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames by the matching module; (e) detecting differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, by the analysis module; and, (f) logging the detected changes and/or detected features comprising the changes.
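The matching workflow of steps (b) through (f) above can be sketched as follows. This is only an illustrative sketch: the per-frame luminance signature, the thresholds, and all function names are assumptions for the example, not details taken from the disclosure.

```python
# Illustrative sketch of shot separation, shot matching, and difference
# detection between two DMF versions. A "DMF" is simplified here to a
# list of per-frame luminance values; signatures and thresholds are
# hypothetical, not part of the disclosed system.

def detect_shots(frames, cut_threshold=50.0):
    """Split a frame sequence into shots at large inter-frame jumps."""
    boundaries = [0]
    for i in range(1, len(frames)):
        if abs(frames[i] - frames[i - 1]) > cut_threshold:
            boundaries.append(i)
    boundaries.append(len(frames))
    # Each shot is (start_index, end_index) along the timeline.
    return [(boundaries[i], boundaries[i + 1]) for i in range(len(boundaries) - 1)]

def shot_signature(frames, shot):
    """Average luminance of a shot, used as a coarse matching signature."""
    start, end = shot
    return sum(frames[start:end]) / (end - start)

def match_shots(frames_a, shots_a, frames_b, shots_b, tolerance=10.0):
    """Greedily pair shots whose signatures agree within a tolerance."""
    matches, used_b = [], set()
    for ia, sa in enumerate(shots_a):
        sig_a = shot_signature(frames_a, sa)
        for ib, sb in enumerate(shots_b):
            if ib in used_b:
                continue
            if abs(sig_a - shot_signature(frames_b, sb)) <= tolerance:
                matches.append((ia, ib))
                used_b.add(ib)
                break
    return matches

def detect_changes(frames_a, frames_b):
    """Log shot-level and timing-level differences between two versions."""
    shots_a, shots_b = detect_shots(frames_a), detect_shots(frames_b)
    log = []
    if len(shots_a) != len(shots_b):
        log.append(("shot_count", len(shots_a), len(shots_b)))
    for ia, ib in match_shots(frames_a, shots_a, frames_b, shots_b):
        len_a = shots_a[ia][1] - shots_a[ia][0]
        len_b = shots_b[ib][1] - shots_b[ib][0]
        if len_a != len_b:
            log.append(("shot_timing", ia, ib))
    return log
```

In practice the analysis module would use far richer frame descriptors (color histograms, audio features) than a single luminance value, but the pipeline shape — separate, match, diff, log — is the same.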
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to graphically present at least a portion of the detected changes information in the RDMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to present the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system further comprises at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
According to another embodiment of the present invention, a system as described above is disclosed, further comprising an annotation module configured to associate at least one annotation with at least one portion of the RDMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate annotations for one or more detected changes in the RDMF.

According to another embodiment of the present invention, a system as described above is disclosed, wherein each generated annotation of the detected change comprises at least a portion of the detected changes information.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
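An annotation carrying this kind of content could be modeled as a simple record. The field names below are illustrative assumptions for the example, not a schema taken from the disclosure.

```python
# Hypothetical annotation record holding the kinds of content the
# embodiment lists: logged changes, user comments, links, images, and
# a change history. All field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    feature_id: str            # DMF feature the annotation is attached to
    dmf_version: int           # version of the DMF that produced it
    timeline_position: float   # seconds from the start of the DMF
    logged_changes: list = field(default_factory=list)
    comments: list = field(default_factory=list)
    links: list = field(default_factory=list)
    images: list = field(default_factory=list)
    changes_history: list = field(default_factory=list)

    def add_comment(self, user, text):
        """Append a user comment and record the edit in the history."""
        self.comments.append((user, text))
        self.changes_history.append(("comment_added", user))
```

Such a record would let the annotation module edit, distribute, or index each annotation independently of the media file it describes.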
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is further configured to allow a user by means of a user interface, to edit at least one of the annotations.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate a task list comprising one or more of the annotations.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to update the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to forward the task list to at least one third party.
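The task-list behavior of the last three embodiments — generation from annotations, update on detected changes, and forwarding — can be sketched as follows. The task structure and the JSON forwarding format are assumptions for the example.

```python
# Illustrative sketch of deriving a task list from annotations and
# updating it when a new change is detected. Structure and forwarding
# format are hypothetical, not the patent's own design.
import json

def build_task_list(annotations):
    """One open task per annotation, keyed by the annotated feature."""
    return [{"feature": a["feature"], "note": a["note"], "done": False}
            for a in annotations]

def update_task_list(tasks, changed_feature, note):
    """Append a new task when a change is detected in the DMF."""
    tasks.append({"feature": changed_feature, "note": note, "done": False})
    return tasks

def forward_task_list(tasks):
    """Serialize the task list for forwarding to a third party."""
    return json.dumps(tasks)
```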
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to distribute a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to search annotations according to at least one annotation indexing.
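Indexing annotations under one of the listed keys and then searching by that index could look like the following sketch; the dictionary-based index is an illustrative assumption.

```python
# Hypothetical annotation index: annotations are grouped under one of
# the indexing keys the embodiment lists (DMF version, user tag,
# timing, ...), so the processor can later search by that key.
def index_annotations(annotations, key):
    """Group annotations by one indexing key, e.g. 'version' or 'tag'."""
    index = {}
    for a in annotations:
        index.setdefault(a[key], []).append(a)
    return index

def search_annotations(index, value):
    """Return every annotation filed under the given index value."""
    return index.get(value, [])
```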
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
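A minimal sketch of such a history database, keyed by the version being examined and the baseline it was compared against; the structure is an assumption for illustration.

```python
# Sketch of a history database recording, per DMF version, the changes
# detected relative to a chosen baseline version (e.g. the first
# received DMF or the most recent one). Keys are illustrative.
def record_changes(history, version, baseline, changes):
    """Store the changes detected between `version` and `baseline`."""
    history.setdefault(version, {})[baseline] = list(changes)
    return history

def changes_against(history, version, baseline):
    """Retrieve the logged changes of a version against a baseline."""
    return history.get(version, {}).get(baseline, [])
```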
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.

According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system further comprises at least one rendering module configured to render the RDMF according to user defined criteria.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the receiving module is configured to accept the DMF in a rendered form.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the receiving module is configured to accept the DMF in a compressed form and extract the rendered form.
Reference is now made to Fig. 1B, schematically representing in an out of scale manner an embodiment of the invention (105). According to this embodiment the system (100) of Fig. 1A further comprises at least one reviewed media file generating module (140) configured to render a file of the latest version including a representation of at least one of: one or more detected changes, one or more features comprising at least one detected change, an annotation with a comment, text, image and/or link. Additionally or alternatively, the reviewed file is associated with a metafile comprising annotation data. Additionally or alternatively, the digital media file is associated with an add-in (plug-in) to any media editing program providing access to the annotated information and optionally to a metadata file within another program. The decision on how to export the data obtained by the system (and/or logged by the system) can be made by an export module (180). The export module can optionally render the file in any of the file formats known in the art, and/or compress the file. The system (105) further comprises an annotation module (160) configured to provide an annotation to at least one change or feature comprising a change. Additionally or alternatively, one or more annotations can be imported from an external source and implemented in the file. Additionally or alternatively, the annotations can be edited, added to, deleted, or distributed together with or separately from the resulting file. The system is further configured to accept a metafile comprising at least one annotation by said receiving module. For example, the collaborative application is executed on a server and accessed through a client application such as a browser on a computing device or hand-held smart device.
The data received from the server based application includes at least one of: a metadata file comprising detected changes, a rendered version of the digital media file associated with a metadata file, a rendered version of the digital media file with the metadata displayed within the digital media file, a rendered version of the digital media file comprising a plug-in and/or add-on that enables viewing and/or editing of the metadata, a rendered version of the digital media file comprising annotations and/or an association to an annotation database or file, and any combination thereof.
The annotations comprise at least one of: comments, notification of changes, indexes of changes, location of changes, detected features of detected changes, one or more characteristics in which the change is manifested, change date, change provider, a link to another location in the digital media, a link to an external program and/or memory, and any combination thereof.
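A metafile bundling this change and annotation data for the client could be as simple as the following; the JSON schema here is purely an assumption for illustration.

```python
# Illustrative sketch of the metafile a client might receive alongside
# the rendered digital media file: detected changes plus their
# annotations, serialized as JSON. The schema is an assumption.
import json

def build_metafile(dmf_name, detected_changes, annotations):
    """Bundle change and annotation data for export with the RDMF."""
    return json.dumps({
        "dmf": dmf_name,
        "changes": detected_changes,   # e.g. indexed change records
        "annotations": annotations,    # comments, links, change dates
    })
```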
Reference is now made to Fig. 2, schematically representing in an out of scale manner an embodiment (200) of the invention. According to one embodiment of the present invention a computer-readable storage medium having stored therein a computer program loadable into a processor of a communication system is disclosed; the communication system comprising a communication network attached to one or more end users; wherein the computer program comprises code adapted to perform a method for reviewing a plurality of sequential Digital Media Files (DMFs); the method comprising: (a) receiving at least one first and at least one second DMFs from at least one user (210); (b) comparing the visual and/or audio content of each of the DMFs and detecting visual and/or audio changes between the DMFs by means of the analysis module (220); (c) recognizing one or more features comprising the detected changes by means of the recognition module (230); (d) logging the detected changes information and the recognition information (240); (e) generating at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the feature recognition in comparison to at least one first DMF (250); and, (f) forwarding the at least one RDMF to at least one recipient (260); wherein step (e) additionally comprises the steps of: (a) generating at least one RDMF for every DMF version received in comparison to at least one previous RDMF; and, (b) adding the newly generated changes information to the previous changes information, thereby generating a RDMF comprising sequential hierarchy and/or history of detected changes and/or the changed feature recognition between different sequential DMFs.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, an external editing software, a graphical presentation in the RDMF, a separate file, a file configured as a task list, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
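Such a fingerprint — a set of characteristic values, each with a tolerance within which the feature is still recognized — can be sketched as follows; the characteristics and tolerance values are illustrative assumptions.

```python
# Sketch of a feature "fingerprint" with per-characteristic tolerances:
# a feature is recognized again when every characteristic of a
# candidate falls within the fingerprint's tolerance band.
def make_fingerprint(characteristics, tolerances):
    """Pair each measured characteristic with its allowed deviation."""
    return {name: (value, tolerances[name])
            for name, value in characteristics.items()}

def matches(fingerprint, candidate):
    """True when the candidate stays within every tolerance band."""
    return all(abs(candidate[name] - value) <= tol
               for name, (value, tol) in fingerprint.items())
```

The tolerances let the recognition module re-identify a feature across versions even when it has been mildly altered (recolored, rescaled), which is what allows change information to be related to all occurrences of that feature.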
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
According to another embodiment of the present invention, a method as described above is disclosed, wherein the step of comparing the visual and/or audio content of each of the DMFs follows performing at least one of: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determining visual characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both; (m) determining shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determining timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, wherein the step of detecting changes additionally comprises the steps of: (a) distinguishing between the different shots to determine shot separators in each DMF; (b) determining shot correspondence between the shots in the first DMF and second DMF, to determine matched shots; (c) determining frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames; and, (d) detecting differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, according to the correspondence.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of graphically presenting at least a portion of the detected changes information in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of further providing the system with at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of each generated annotation of the detected change comprising at least a portion of the detected changes information.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
According to another embodiment of the present invention, a method as described above is disclosed, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module enabling a user by means of a user interface, to edit at least one of the annotations.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module forwarding the task list to at least one third party.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor searching the annotations according to at least one annotation indexing.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of providing the system with at least one rendering module configured to render the RDMF according to user defined criteria.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a rendered form.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
Reference is now made to Fig. 3, schematically representing in an out of scale manner an embodiment (300) of the invention. According to this embodiment of the present invention a computer implemented method for reviewing of Digital Media Files (DMF) received from at least one user-end (365, 360), via a communication system (370), to at least one server computer (310) comprises the steps of: (a) providing: (i) the server comprising at least one memory storage (325) operatively coupled with at least one processor (320); (ii) at least one receiving module (330) operatively in communication with the server (310) configured to receive at least one first and at least one second DMFs from at least one user; (iii) at least one analysis module (340) operatively in communication with the server configured to detect visual and/or audio changes between the DMFs; and, (iv) at least one recognition module (350) operatively in communication with the server configured to recognize features comprising changes between the DMFs; (b) receiving at least one first and at least one second DMFs by means of the receiving module from at least one user end (365, 360); (c) comparing the visual and/or audio content of each of the DMFs and detecting visual and/or sound changes between the DMFs by means of the analysis module; (d) recognizing features comprising the detected changes by means of the recognition module; (e) logging the detected changes information and the recognition information; (f) generating at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the changed feature recognition in comparison to at least one first DMF; and, (g) forwarding the at least one RDMF to at least one recipient; wherein step (f) additionally comprises the steps of: (a) generating at least one RDMF for every DMF version received in comparison to at least one previous RDMF; and, (b) adding the newly generated changes information to the previous changes information, thereby generating a RDMF comprising sequential hierarchy and/or history of detected changes and/or the recognized features between different sequential DMFs.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, an external editing software, a graphical presentation in the RDMF, a separate file, a file configured as a task list, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the analysis module further performing at least one of the following instructions: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determining visual characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both; (m) determining shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determining timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
According to another embodiment of the present invention, the step of detecting changes by the analysis module additionally comprises the steps of: further providing at least one matching module, operatively in communication with the server, configured to match corresponding portions of the DMFs; further wherein the server is configured to perform the instructions comprising: (a) receive at least one first and at least one second DMFs by the receiving module; (b) distinguish between the different shots to determine shot separators by the matching module in each DMF; (c) determine shot correspondence between the shots in the first DMF and second DMF, to determine matched shots by the matching module; (d) determine frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames by the matching module; (e) detect differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by the analysis module; and, (f) log the detected changes and/or detected features comprising the changes.
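The matching-then-diffing flow of steps (b) through (f) can be sketched in Python as below. This is an illustrative assumption-laden toy: shots are lists of frames (flat pixel lists), shot correspondence is decided by greedy nearest-signature matching, and both the signature (mean pixel value) and the tolerance are invented for the example.

```python
def shot_signature(shot):
    """Coarse signature: mean pixel value over all frames in the shot."""
    pixels = [p for frame in shot for p in frame]
    return sum(pixels) / len(pixels)

def match_shots(shots_a, shots_b, tolerance=20.0):
    """Pair each first-version shot with the closest unused second-version
    shot whose signature falls within `tolerance` (steps (b)-(c))."""
    pairs, used = [], set()
    for i, sa in enumerate(shots_a):
        best = None
        for j, sb in enumerate(shots_b):
            if j in used:
                continue
            d = abs(shot_signature(sa) - shot_signature(sb))
            if d <= tolerance and (best is None or d < best[1]):
                best = (j, d)
        if best:
            used.add(best[0])
            pairs.append((i, best[0]))
    return pairs

def log_frame_changes(shots_a, shots_b, pairs):
    """Steps (d)-(f): frame-level differences within matched shots."""
    log = []
    for i, j in pairs:
        for k, (fa, fb) in enumerate(zip(shots_a[i], shots_b[j])):
            if fa != fb:
                log.append({"shot_a": i, "shot_b": j, "frame": k})
    return log

v1 = [[[10, 10], [10, 10]], [[200, 200]]]   # two shots, version 1
v2 = [[[10, 10], [10, 90]], [[200, 200]]]   # frame 1 of shot 0 was edited
pairs = match_shots(v1, v2)
print(log_frame_changes(v1, v2, pairs))  # [{'shot_a': 0, 'shot_b': 0, 'frame': 1}]
```

The point of the sketch is the ordering the claim imposes: shots are matched before frames, so frame-level differences are only computed within corresponding shots, and every detected difference is logged as a change record.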
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of graphically presenting at least a portion of the detected changes information in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
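The occurrence-selection modes above ("all occurrences", "from the location of the change and onward", etc.) can be illustrated with a small hypothetical helper. The timeline representation, feature labels, and mode names here are invented for the sketch and are not taken from the specification.

```python
def mark_occurrences(timeline, feature, change_note, mode="all", from_t=None):
    """Attach a change marker at occurrences of `feature` along an RDMF
    timeline, per a selection mode: 'all' marks every occurrence,
    'onward' marks only occurrences at or after time `from_t`."""
    marked = []
    for t, feat in timeline:
        hit = feat == feature
        if mode == "onward" and from_t is not None:
            hit = hit and t >= from_t
        marked.append((t, feat, change_note if hit else None))
    return marked

timeline = [(0, "logo"), (5, "title"), (9, "logo")]
print(mark_occurrences(timeline, "logo", "color changed"))
# [(0, 'logo', 'color changed'), (5, 'title', None), (9, 'logo', 'color changed')]
```

With `mode="onward", from_t=5` only the occurrence at t=9 would carry the marker, mirroring the "from the location of the change and onward along the RDMF timeline" option.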
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of further providing the system with at least one annotation module operatively in communication with the processor, the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, wherein each generated annotation of a detected change comprises at least a portion of the detected change information.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
According to another embodiment of the present invention, a method as described above is disclosed, wherein the annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
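A minimal sketch of an annotation record carrying the kinds of content listed above might look as follows. The field names and the `Annotation` class are illustrative assumptions; the specification does not prescribe a data layout.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One annotation record: logged change details, user comments,
    links, and a changes-history trail, tied to a recognized feature."""
    feature_id: str               # recognized feature the annotation is tied to
    change_info: dict             # logged change details (e.g. level, timing)
    comments: list = field(default_factory=list)   # user-inserted comment(s)
    links: list = field(default_factory=list)
    history: list = field(default_factory=list)    # changes-history entries

    def add_comment(self, user, text):
        self.comments.append({"user": user, "text": text})
        self.history.append(("comment_added", user))

note = Annotation("logo_shot3", {"level": "frame", "frame": 12})
note.add_comment("reviewer1", "logo color shifted in v2")
print(len(note.comments), note.history[0][0])  # 1 comment_added
```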
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of: the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along the timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module enabling a user by means of a user interface, to edit at least one of the annotations.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating differential graphical representation of annotations relating to their version source and/or their place in the changes hierarchy.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
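The two task-list update events above — a newly detected DMF change and a user edit of an annotation — can be sketched as a tiny event-driven structure. The statuses and method names are hypothetical, chosen only to make the behavior concrete.

```python
class TaskList:
    """Task list built from annotations; updated on the two event
    types named above (change detected, annotation edited)."""
    def __init__(self):
        self.tasks = {}   # annotation id -> status

    def on_change_detected(self, annotation_id):
        # a detected DMF change (re)opens the corresponding task
        self.tasks[annotation_id] = "open"

    def on_annotation_edited(self, annotation_id, resolved=False):
        # a user edit either closes the task or keeps it open
        self.tasks[annotation_id] = "done" if resolved else "open"

tasks = TaskList()
tasks.on_change_detected("ann-1")
tasks.on_annotation_edited("ann-1", resolved=True)
print(tasks.tasks)  # {'ann-1': 'done'}
```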
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module forwarding the task list to at least one third party.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising the change, DMF version, timing within the DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor searching the annotations according to at least one annotation indexing.
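Indexing annotations under several of the criteria listed above, and then searching by any indexed key, can be illustrated with a simple inverted index. The key names (`version`, `feature`, `tag`) and dict-based annotation records are assumptions made for the sketch.

```python
from collections import defaultdict

def build_index(annotations, keys=("version", "feature", "tag")):
    """Inverted index: (criterion, value) -> list of annotation ids."""
    index = defaultdict(list)
    for ann in annotations:
        for key in keys:
            if key in ann:                      # index only criteria present
                index[(key, ann[key])].append(ann["id"])
    return index

anns = [
    {"id": "a1", "version": 2, "feature": "logo", "tag": "color"},
    {"id": "a2", "version": 3, "feature": "logo"},
]
index = build_index(anns)
print(index[("feature", "logo")])  # ['a1', 'a2']
print(index[("version", 3)])       # ['a2']
```

Searching "according to at least one annotation indexing", as the embodiment puts it, then reduces to a lookup on one or more `(criterion, value)` keys.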
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of providing the system further comprising at least one rendering module configured to render the RDMF according to user defined criteria.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a rendered form.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.
Reference is now made to Fig. 4, schematically representing, in an out-of-scale manner, an embodiment of the invention. According to one embodiment of the present invention, a computer implemented method (400) for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs) is disclosed. Each of the DMFs comprises one or more shots, each shot comprising one or more frames. Fig. 5 represents an example of an embodiment of a system for implementing the method of Fig. 4. The method is characterized by the steps of: (a) providing: (i) at least one receiving module (Fig. 5, 510) configured to receive at least one first and at least one second DMFs; (ii) at least one matching module (Fig. 5, 540) configured to match corresponding portions of the DMFs; (iii) at least one analysis module (Fig. 5, 520) configured to detect visual and/or audio changes between the DMFs; (iv) at least one recognition module (Fig. 5, 530) configured to recognize features comprising changes between the DMFs; (v) at least one annotation module (Fig. 5, 550) configured to generate at least one annotation; and, (vi) a non-transitory computer readable storage medium (CRM) (Fig. 5, 505), operatively in communication with the receiving module (Fig. 5, 510), the analysis module (Fig. 5, 520), the matching module (Fig. 5, 540), the annotation module (Fig. 5, 550), and the recognition module (Fig. 5, 530), the CRM (Fig. 5, 505) having computer executable instructions that configure one or more operatively coupled processors to perform the instructions comprising: (1) receive (Fig. 4, 410) at least one first and at least one second DMFs by the receiving module; (2) distinguish (Fig. 4, 420) between the different shots to determine shot separators by the matching module in each DMF; (3) determine (Fig. 4, 430) shot correspondence between the shots in the first DMF and second DMF, to determine matched shots by the matching module; (4) determine (Fig. 
4, 440) frame correspondence between the frames in the first DMF and second DMF, within the matched shots to determine matched frames by the matching module; (5) detect differences (Fig. 4, 450) between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by the analysis module; (6) recognize (Fig. 4, 460) one or more features associated with a detected difference, by the recognition module; (7) log the detected changes (Fig. 4, 470) and/or detected features comprising the changes; (8) generate (Fig. 4, 480) at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the feature recognition in comparison to at least one first DMF; and, (9) annotate (Fig. 4, 490) each detected change by the annotation module; further wherein the annotation of a detected change feature is transferred to all of the changed feature derivatives in the RDMF and in sequential derivative RDMFs.
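The overall flow of method (400) — compare two versions, log the differences, and emit a reviewed file with one annotation per change — can be sketched end to end. This is a deliberately simplified, hypothetical driver: shots are assumed pre-separated and aligned by index, so the matching steps collapse to a positional zip, and the RDMF is a plain dict.

```python
def review_pipeline(dmf_v1, dmf_v2):
    """Minimal driver for steps (1)-(9): diff two shot lists, log
    changes (step 7), and emit an RDMF dict carrying the second
    version plus one annotation per detected change (steps 8-9)."""
    changes = []
    # steps (2)-(5) simplified: shots pre-separated, aligned by index
    for s, (shot_a, shot_b) in enumerate(zip(dmf_v1, dmf_v2)):
        for f, (fa, fb) in enumerate(zip(shot_a, shot_b)):
            if fa != fb:
                changes.append({"shot": s, "frame": f})   # step (7): log
    annotations = [  # step (9): one annotation per detected change
        {"id": n, "target": c, "text": "frame changed"}
        for n, c in enumerate(changes)
    ]
    return {"media": dmf_v2, "changes": changes, "annotations": annotations}

v1 = [[[1, 1], [2, 2]]]
v2 = [[[1, 1], [2, 9]]]
rdmf = review_pipeline(v1, v2)
print(rdmf["changes"], len(rdmf["annotations"]))  # [{'shot': 0, 'frame': 1}] 1
```

The real method additionally recognizes the feature behind each difference and propagates the annotation to the feature's derivatives in later RDMF versions, which the dict sketch omits.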
According to another embodiment of the present invention, a method as described above is disclosed, wherein the correspondence of a shot and /or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of determining area correspondence within matched frames.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of presenting at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the analysis module detecting visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
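A fingerprint "comprising tolerances of feature characteristics enabling recognition" can be illustrated with a perceptual-hash-style sketch: a bit pattern derived from a frame, matched under a bit-distance tolerance so that slightly re-rendered occurrences of the same feature are still recognized. The fingerprint scheme and tolerance value are assumptions for the example, not the disclosed method.

```python
def fingerprint(frame):
    """Bit-tuple fingerprint: 1 where a pixel is above the frame mean."""
    mean = sum(frame) / len(frame)
    return tuple(1 if p > mean else 0 for p in frame)

def same_feature(fp_a, fp_b, tolerance=1):
    """Recognize the same feature if the fingerprints differ in at most
    `tolerance` bit positions (the characteristic tolerance above)."""
    return sum(a != b for a, b in zip(fp_a, fp_b)) <= tolerance

base    = [10, 10, 200, 200]
shifted = [12, 11, 205, 198]   # same feature, slightly re-rendered
other   = [200, 200, 10, 10]
print(same_feature(fingerprint(base), fingerprint(shifted)))  # True
print(same_feature(fingerprint(base), fingerprint(other)))    # False
```

Thresholding on relative structure rather than exact pixel values is what lets the recognition module track a feature across versions where color grading or compression has changed the raw data.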
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of relating at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/ or history of detected changes between pluralities of sequential DMFs.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising the hierarchy and/or history of recognized features comprising changes between pluralities of sequential DMFs.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the analysis module performing analysis of the visual and/or audio content of each of the DMFs by performing at least one of the following instructions: (a) determining shot separation within each DMF; (b) determining the location along the timeline of each shot; (c) determining the time length of each shot; (d) determining the visual characteristics of each DMF frame; (e) determining the sound characteristics of each DMF frame; (f) determining at least one shot transition characteristic of each DMF; (g) determining the time length of the entire DMF for each DMF; (h) matching the frames, shots, or both between at least one first DMF and one second DMF; (i) determining frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determining shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determining sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determining visual characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both; (m) determining shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determining timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of graphically presenting at least a portion of the detected changes information in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of presenting the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.

According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating at least one annotation in association with at least one DMF feature.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module accepting one or more annotations from at least one external source and associating them with defined features in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module associating at least one annotation with at least one portion of the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating annotations for one or more detected changes in the RDMF.
According to another embodiment of the present invention, a method as described above is disclosed, wherein each generated annotation of a detected change comprises at least a portion of the detected change information.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module indexing at least one detected change, indexing at least one feature comprising detected changes, or both.
According to another embodiment of the present invention, a method as described above is disclosed, wherein the annotations comprising at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of: the annotation module editing at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along the timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module enabling a user by means of a user interface, to edit at least one of the annotations.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating differential graphical representation of annotations relating to their version source and/ or their place in the changes hierarchy.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module generating a task list comprising one or more of the annotations.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module updating the task list following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module forwarding the task list to at least one third party.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module distributing a different set of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the annotation module indexing the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor searching the annotations according to at least one annotation indexing.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the system generating a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of providing the system further comprising at least one rendering module configured to render the RDMF according to user defined criteria.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a rendered form.
According to another embodiment of the present invention, a method as described above is disclosed, additionally comprising the step of the receiving module accepting the DMF in a compressed form and extracting the rendered form.

Reference is now made to Fig. 5, schematically representing, in an out-of-scale manner, an embodiment of the invention. According to one embodiment of the present invention, a detection and annotation system (500) is disclosed, useful for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), the DMFs each comprising one or more shots, each shot comprising one or more frames. The system comprises: (a) at least one receiving module (510), operatively in communication with the server, configured to receive at least one first and at least one second DMFs; (b) at least one matching module (540) configured to match corresponding portions of the DMFs; (c) at least one analysis module (520) configured to detect visual and/or audio changes between the DMFs; (d) at least one recognition module (530) configured to recognize features comprising changes between the DMFs; (e) at least one annotation module (550) configured to generate at least one annotation; and, (f) a non-transitory computer readable storage medium (CRM) (505) in communication with the receiving module (510), the matching module (540), the analysis module (520), the recognition module (530), and the annotation module (550); the CRM, operatively coupled to at least one processor, having computer executable instructions that configure the one or more processors to perform the instructions comprising: (i) receiving at least one first and at least one second DMFs by the receiving module; (ii) distinguishing between the different shots to determine shot separators by the matching module in each DMF; (iii) determining shot correspondence between the shots in the first DMF and second DMF, to determine matched shots by the matching module; (iv) determining frame correspondence between the frames in 
the first DMF and second DMF, within the matched shots to determine matched frames by the matching module; (v) detecting differences between the DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by the analysis module; (vi) recognizing one or more features associated with a detected difference, by the recognition module; (vii) logging the detected changes and/or detected features comprising the changes; (viii) generating at least one reviewed digital media file (RDMF) comprising the second DMF configured to present at least a portion of the detected changes information and the feature recognition in comparison to at least one first DMF; (ix) annotating each detected change by the annotation module; further wherein the annotation of a detected change feature is transferred to all of the changed feature derivative files in the DMF and sequential DMFs.

According to another embodiment of the present invention, a system as described above is disclosed, wherein the correspondence of a shot and/or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, additionally comprising an instruction to determine area correspondence within matched frames.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to present at least a portion of the changed features hierarchy and/or history in at least one selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in the RDMF, in a separate file, in a file configured as a task list, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the analysis module is configured to detect visual and/or sound changes between the DMFs at the level of: the entire DMFs, at least a portion of the DMFs, the DMF shot level, the DMF frame level, the DMF shot and/or audio track transition level, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to relate at least a portion of the information of at least one detected change to all of the changed feature occurrences recognized by the recognition module in the same DMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to search and optionally associate an annotation and/or at least a portion of the changed information, to a feature in the DMF, the RDMF, or both.

According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate at least one retrievable memory log comprising the hierarchy and/or history of detected changes between a plurality of sequential DMFs.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate at least one retrievable memory log comprising hierarchy and/ or history of recognized features comprising changes between a plurality of sequential DMFs.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the analysis module is configured to analyze the visual and/or audio content of each of the DMFs by performing at least one of the following instructions: (a) determine shot separation within each DMF; (b) determine the location along the timeline of each shot; (c) determine the time length of each shot; (d) determine the visual characteristics of each DMF frame; (e) determine the sound characteristics of each DMF frame; (f) determine at least one shot transition characteristic of each DMF; (g) determine the time length of the entire DMF for each DMF; (h) match the frames, shots, or both between at least one first DMF and one second DMF; (i) determine frame correspondence between at least one first DMF frame and at least one second DMF frame; (j) determine shot correspondence between at least one first DMF shot and at least one second DMF shot; (k) determine sound characteristic correspondence between at least one first DMF and at least one second said DMF frames, shots, or both; (l) determine visual characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both; (m) determine shot transition characteristic correspondence between at least one first DMF and at least one second DMF; (n) determine timing characteristic correspondence between at least one first DMF and at least one second DMF; and, (o) any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to graphically present at least a portion of the detected changes information visually in the RDMF.

According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to present the changes graphically in the RDMF at occurrences of the changed feature selected from a group consisting of: all occurrences of the changed feature, user defined occurrences of the changed feature, from the location of the change and onward along the RDMF timeline, on occurrences of the changed feature in derivative versions of the RDMF, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate at least one annotation in association with at least one DMF feature.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate one or more of the annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in the RDMF.
According to another embodiment of the present invention, a system as described above is disclosed, further comprising an annotation module configured to associate at least one annotation with at least one portion of the RDMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate annotations for one or more detected changes in the RDMF.
According to another embodiment of the present invention, a system as described above is disclosed, wherein each generated annotation of the detected change comprises at least a portion of detected changed information.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is further configured to index at least one detected change, index at least one feature comprising detected changes, or both.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotations comprise at least one selected from a group consisting of: the logged changes, the recognized features comprising changes, the changes information, indexing of the annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to edit at least one annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of the annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting the annotation(s) to at least one second feature, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is further configured to allow a user by means of a user interface, to edit at least one of the annotations.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate differential graphical representation of annotations relating to their version source and/ or their place in the changes hierarchy.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to generate a task list comprising one or more of the annotations.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to be updated following at least one event selected from a group consisting of: detecting a change in the DMF, editing of the annotation by at least one user, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to forward the task list to at least one third party.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to distribute different sets of the annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the annotation module is configured to index the annotations according to at least one selected from a group consisting of: user defined information, the changes information, the feature comprising change, DMF version, timing within DMF, the annotation generation date, the annotation editing date, the annotation tag associated by a user, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is configured to search annotations according to at least one annotation indexing.
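As an illustrative sketch only, the indexing and searching of annotations described in the two preceding embodiments could be organized as an inverted index keyed by the indexing fields named above (DMF version, changed feature, user tag); the `Annotation` fields and key names below are assumptions for the example, not the disclosed schema.

```python
# Hypothetical sketch of indexing annotations by several fields and
# searching on any one of them (field names are assumptions).
from dataclasses import dataclass, field

@dataclass
class Annotation:
    text: str
    dmf_version: int
    feature: str
    tags: list = field(default_factory=list)

class AnnotationIndex:
    def __init__(self):
        self._by_key = {}  # (field_name, value) -> list of annotations

    def add(self, ann):
        keys = [("version", ann.dmf_version), ("feature", ann.feature)]
        keys += [("tag", t) for t in ann.tags]
        for key in keys:
            self._by_key.setdefault(key, []).append(ann)

    def search(self, field_name, value):
        """All annotations indexed under (field_name, value), oldest first."""
        return self._by_key.get((field_name, value), [])

idx = AnnotationIndex()
idx.add(Annotation("logo recolored", 3, "title-card", ["branding"]))
idx.add(Annotation("shot trimmed", 3, "intro-shot"))
print([a.text for a in idx.search("version", 3)])
print([a.text for a in idx.search("tag", "branding")])
```

The same structure extends naturally to the other criteria listed above (generation date, editing date, timing within the DMF) by adding further keys per annotation.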
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system is configured to generate a history database of the detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, the first received DMF, at least a portion of a selected DMF, and any combination thereof.
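One possible shape for the history database of detected changes described above, given purely as an illustrative sketch: each received DMF version is logged against the version it was compared with, so the full change chain per feature can be retrieved. The record fields are assumptions chosen for the example.

```python
# Illustrative sketch of a per-version change-history database
# (record layout is an assumption, not the disclosed format).
class ChangeHistory:
    def __init__(self):
        self._log = []  # chronological change records

    def record(self, version, compared_to, feature, change):
        self._log.append({"version": version, "compared_to": compared_to,
                          "feature": feature, "change": change})

    def history_for(self, feature):
        """All logged changes to one feature, oldest first."""
        return [r for r in self._log if r["feature"] == feature]

    def latest_version(self):
        return max((r["version"] for r in self._log), default=None)

db = ChangeHistory()
db.record(2, 1, "intro-shot", "shortened by 12 frames")
db.record(3, 2, "intro-shot", "color graded")
db.record(3, 2, "end-card", "text replaced")
print(len(db.history_for("intro-shot")), db.latest_version())  # 2 3
```

Comparing against "a most recently received DMF" or "the first received DMF", as the embodiment allows, only changes which `compared_to` value is recorded; the retrieval logic is unchanged.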
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from the DMF, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from the RDMF, and any combination thereof.
According to another embodiment of the present invention, a system as described above is disclosed, wherein the system further comprises at least one rendering module configured to render the RDMF according to user defined criteria.

According to another embodiment of the present invention, a system as described above is disclosed, wherein the receiving module is configured to accept the DMF in a rendered form.

According to another embodiment of the present invention, a system as described above is disclosed, wherein the receiving module is configured to accept the DMF in a compressed form and extract the rendered form.

Claims

1. A system, useful for reviewing a plurality of sequential Digital Media Files (DMFs), comprising:
a. a receiving module, configured to receive at least one first said DMF and at least one second said DMF;
b. an analysis module configured to detect changes between at least one first said DMF and one second said DMF;
c. a recognition module configured to recognize at least one feature comprising one or more said detected changes along said DMF; and,
d. a non-transitory computer readable storage medium (CRM) operatively in communication with said receiving module, said analysis module and said recognition module, said CRM having computer executable instructions that configure one or more operatively coupled processors to perform said instructions comprising:
i. receive at least one first and at least one second DMFs by means of said receiving module;
ii. compare the visual and/or audio content of each of said DMFs and detect visual and/or audio changes between said DMFs by means of said analysis module;
iii. recognize features comprising said detected changes by means of said recognition module;
iv. log said detected changes information and said recognized features comprising changes information;
v. generate at least one reviewed digital media file (RDMF) of said second DMF configured to present at least a portion of said detected changes and/or changed feature recognition in comparison to at least one first said DMF; and,
vi. forward said at least one RDMF to at least one recipient;
wherein said system is configured to generate at least one said RDMF for every DMF version received in comparison to at least one previous RDMF; further wherein said system is configured to generate at least one file associated with said RDMF comprising sequential hierarchy and/ or history of detected changes and/ or said recognized features between different sequential DMFs.
2. The system according to claim 1, wherein said system is configured to present at least a portion of said changed features hierarchy and/or history in a manner selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in said RDMF, in a separate file, in a file configured as task list, and any combination thereof.
3. The system according to claim 1, wherein said analysis module is configured to detect visual and/or sound changes between said DMFs at the level of: said entire DMFs, at least a portion of said DMFs, said DMF shot level, said DMF frame level, said DMF shot and/or audio track transition level, and any combination thereof.
4. The system according to claim 1, wherein said recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
5. The system according to claim 1, wherein said processor is configured to relate at least a portion of said information of at least one said detected change to all of said changed feature occurrences recognized by said recognition module in the same said DMF.
6. The system according to claim 1, wherein said processor is configured to search and optionally associate an annotation and/or at least a portion of said changed information, to a feature in said DMF, said RDMF, or both.
7. The system according to claim 1, wherein said system is configured to generate at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
8. The system according to claim 1, wherein said system is configured to generate at least one retrievable memory log comprising hierarchy and/ or history of recognized features comprising changes between a plurality of sequential DMFs.
9. The system according to claim 1, wherein said system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
10. The system according to claim 1, wherein said analysis module is configured to perform at least one of the following instructions:
a. determine shot separation within each said DMF;
b. determine the location along the timeline of each said shot;
c. determine the time length of each said shot;
d. determine the visual characteristics of each said DMF frame;
e. determine the sound characteristics of each said DMF frame;
f. determine at least one shot transition characteristic of each said DMF;
g. determine the time length of the entire DMF for each said DMF;
h. match the frames, shots, or both between at least one first said DMF and at least one second said DMF;
i. determine frame correspondence between at least one first said DMF frame and at least one second said DMF frame;
j. determine shot correspondence between at least one first said DMF shot and at least one second said DMF shot;
k. determine sound characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
l. determine visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
m. determine shot transition characteristics correspondence between at least one first said DMF and at least one second said DMF;
n. determine timing characteristics correspondence between at least one first said DMF and at least one second said DMF; and,
o. any combination thereof.
11. The system according to claim 1, wherein said analysis module further comprises at least one matching module operatively in communication with said CRM, configured to match corresponding portions of said DMFs; said CRM further comprising computer executable instructions that configure one or more processors to perform said instructions comprising:
a. receiving at least one first and at least one second DMFs by said receiving module;
b. distinguishing between said different shots to determine shot separators by said matching module in each said DMF;
c. determining shot correspondence between said shots in said first said DMF and said second said DMF, to determine matched shots by said matching module;
d. determining frame correspondence between said frames in said first said DMF and said second said DMF, within said matched shots to determine matched frames by said matching module;
e. detect differences between said DMFs in level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, by said analysis module; and,
f. logging said detected changes and/ or detected features comprising said changes.
12. The system according to claim 1, wherein said processor is configured to graphically present at least a portion of said detected changes information visually in said RDMF.
13. The system according to claim 1, wherein said processor is configured to present said changes graphically in said RDMF at occurrences of said changed feature selected from a group consisting of: all occurrences of said changed feature, user defined occurrences of said changed feature, from the location of said change and onward along said RDMF timeline, on occurrences of said changed feature in derivative versions of said RDMF, and any combination thereof.
14. The system according to claim 1, wherein said system further comprises at least one annotation module operatively in communication with said processor, said annotation module is configured to generate at least one annotation in association with at least one DMF feature.
15. The system according to claim 14, wherein said annotation module is configured to generate one or more of said annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
16. The system according to claim 14, wherein said annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in said RDMF.
17. The system according to claim 14, further comprising an annotation module configured to associate at least one annotation with at least one portion of said RDMF.
18. The system according to claim 14, wherein said annotation module is configured to generate annotations for one or more detected changes in said RDMF.
19. The system according to claim 14, wherein each said generated annotation of said detected change comprises at least a portion of detected changed information.
20. The system according to claim 14, wherein said annotation module is further configured to index at least one said detected change, index at least one said feature comprising detected changes, or both.
21. The system according to claim 14, wherein said annotations comprise at least one selected from a group consisting of: said logged changes, said recognized features comprising changes, said changes information, indexing of said annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
22. The system according to claim 14, wherein said annotation module is configured to edit at least one said annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of said annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting said annotation(s) to at least one second feature, and any combination thereof.
23. The system according to claim 14, wherein said annotation module is further configured to allow a user by means of a user interface, to edit at least one of said annotations.
24. The system according to claim 14, wherein said annotation module is configured to generate differential graphical representation of annotations relating to their version source and/ or their place in said changes hierarchy.
25. The system according to claim 14, wherein said annotation module is configured to generate a task list comprising one or more of said annotations.
26. The system according to claim 25, wherein said annotation module is configured to be updated following at least one event selected from a group consisting of: detecting a change in said DMF, editing of said annotation by at least one said user, and any combination thereof.
27. The system according to claim 25, wherein said annotation module is configured to forward said task list to at least one third party.
28. The system according to claim 25, wherein said annotation module is configured to distribute different sets of said annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
29. The system according to claim 14, wherein said annotation module is configured to index said annotations according to at least one selected from a group consisting of: user defined information, said changes information, said feature comprising change, DMF version, timing within DMF, said annotation generation date, said annotation editing date, said annotation tag associated by a user, and any combination thereof.
30. The system according to claim 14, wherein said processor is configured to search annotations according to at least one said annotation indexing.
31. The system according to claim 1, wherein said system is configured to generate a history database of said detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, said first received DMF, at least a portion of a selected DMF, and any combination thereof.
32. The system according to claim 1, wherein said processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extract at least one portion from said DMF, and any combination thereof.
33. The system according to claim 1, wherein said processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extract at least one portion from said RDMF, and any combination thereof.
34. The system according to claim 1, wherein said system further comprises at least one rendering module configured to render said RDMF according to user defined criteria.
35. The system according to claim 1, wherein said receiving module is configured to accept said DMF in a rendered form.
36. The system according to claim 1, wherein said receiving module is configured to accept said DMF in a compressed form and extract the rendered form.
37. A computer-readable storage medium having stored therein a computer program loadable into a processor of a communication system; said communication system comprising a communication network attached to one or more end users; wherein said computer program comprises code adapted to perform a method for reviewing a plurality of sequential Digital Media Files (DMFs); said method comprising:
a. receiving at least one first and at least one second DMFs from at least one said user;
b. comparing the visual and/or audio content of each of said DMFs and detecting visual and/or audio changes between said DMFs;
c. recognizing one or more features comprising said detected changes;
d. logging said detected changes information and said recognition information;
e. generating at least one reviewed digital media file (RDMF) comprising said second DMF configured to present at least a portion of said detected changes information and said feature recognition in comparison to at least one first said DMF; and,
f. forwarding said at least one RDMF to at least one recipient;
wherein said step (e) additionally comprising the steps of:
a. generating at least one said RDMF for every DMF version received in comparison to at least one previous RDMF; and,
b. adding said newly generated changed information to previous said changes information thereby generating an RDMF comprising sequential hierarchy and/or history of detected changes and/or said changed feature recognition between different sequential DMFs.
38. The method according to claim 37, additionally comprising the step of presenting at least a portion of said changed features hierarchy and/or history in a manner selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in said RDMF, in a separate file, in a file configured as task list, and any combination thereof.
39. The method according to claim 37, additionally comprising the step of detecting visual and/or sound changes between said DMFs at the level of: said entire DMFs, at least a portion of said DMFs, said DMF shot level, said DMF frame level, said DMF shot and/or audio track transition level, and any combination thereof.
40. The method according to claim 37, additionally comprising the step of generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
41. The method according to claim 37, additionally comprising the step of relating at least a portion of said information of at least one said detected change to all of said changed feature occurrences recognized in the same said DMF.
42. The method according to claim 37, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of said changed information, to a feature in said DMF, said RDMF, or both.
43. The method according to claim 37, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/ or history of detected changes between a plurality of sequential DMFs.
44. The method according to claim 37, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/ or history of recognized features comprising changes between a plurality of sequential DMFs.
45. The method according to claim 37, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
46. The method according to claim 37, further wherein said step of comparing of said visual and/or audio content of each said DMFs follows performing at least one of:
a. determining shot separation within each said DMF;
b. determining the location along the timeline of each said shot;
c. determining the time length of each said shot;
d. determining the visual characteristics of each said DMF frame;
e. determining the sound characteristics of each said DMF frame;
f. determining at least one shot transition characteristic of each said DMF;
g. determining the time length of the entire DMF for each said DMF;
h. matching the frames, shots, or both between at least one first said DMF and one second said DMF;
i. determining frame correspondence between at least one first said DMF frame and at least one second said DMF frame;
j. determining shot correspondence between at least one first said DMF shot and at least one second said DMF shots;
k. determining sound characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
l. determining visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
m. determining shot transition characteristics correspondence between at least one first said DMF and at least one second said DMF;
n. determining timing characteristics correspondence between at least one first said DMF and at least one second said DMF; and,
o. any combination thereof.
47. The method according to claim 37, wherein said step of detecting changes additionally comprises the steps of:
a. distinguishing between said different shots to determine shot separators in each said DMF;
b. determining shot correspondence between said shots in said first said DMF and said second said DMF, to determine matched shots;
c. determining frame correspondence between said frames in said first said DMF and said second said DMF, within said matched shots to determine matched frames; and,
d. detecting differences between said DMFs in level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level and any combination thereof, according to said correspondence.
48. The method according to claim 37, wherein said annotation of detected change feature is associated with all of said changed feature derivatives in said RDMF and in sequential derivative RDMFs.
49. The method according to claim 37, additionally comprising the step of graphically presenting at least a portion of said detected changes information visually in said RDMF.
50. The method according to claim 37, additionally comprising the step of presenting said changes graphically in said RDMF at occurrences of said changed feature selected from a group consisting of: all occurrences of said changed feature, user defined occurrences of said changed feature, from the location of said change and onward along said RDMF timeline, on occurrences of said changed feature in derivative versions of said RDMF, and any combination thereof.
51. The method according to claim 37, additionally comprising the step of further providing said system with at least one annotation module operatively in communication with said processor, wherein said annotation module is configured to generate at least one annotation in association with at least one DMF feature.
52. The method according to claim 51, additionally comprising the step of said annotation module generating one or more of said annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
53. The method according to claim 51, additionally comprising the step of said annotation module accepting one or more annotations from at least one external source and associating them with defined features in said RDMF.
54. The method according to claim 51, additionally comprising the step of said annotation module associating at least one annotation with at least one portion of said RDMF.
55. The method according to claim 51, additionally comprising the step of said annotation generating annotations for one or more detected changes in said RDMF.
56. The method according to claim 51, additionally comprising the step of each said generated annotation of said detected change comprises at least a portion of detected changed information.
57. The method according to claim 51, additionally comprising the step of said annotation module indexing at least one said detected changes, indexing at least one said features comprising detected changes, or both.
57. The method according to claim 51, wherein said annotations comprise at least one selected from a group consisting of: said logged changes, said recognized features comprising changes, said changes information, indexing of said annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
59. The method according to claim 51, additionally comprising the step of said annotation module editing at least one said annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of said annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting said annotation(s) to at least one second feature, and any combination thereof.
60. The method according to claim 51, additionally comprising the step of said annotation module enabling a user by means of a user interface, to edit at least one of said annotations.
61. The method according to claim 51, additionally comprising the step of said annotation module generating differential graphical representation of annotations relating to their version source and/ or their place in said changes hierarchy.
62. The method according to claim 51, additionally comprising the step of said annotation module generating a task list comprising one or more of said annotations.
63. The method according to claim 62, additionally comprising the step of said annotation module updating said task list following at least one event selected from a group consisting of: detecting a change in said DMF, editing of said annotation by at least one said user, and any combination thereof.
64. The method according to claim 62, additionally comprising the step of said annotation module forwarding said task list to at least one third party.
65. The method according to claim 51, additionally comprising the step of said annotation module distributing different sets of said annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
66. The method according to claim 51, additionally comprising the step of said annotation module indexing said annotations according to at least one selected from a group consisting of: user defined information, said changes information, said feature comprising change, DMF version, timing within DMF, said annotation generation date, said annotation editing date, said annotation tag associated by a user, and any combination thereof.
67. The method according to claim 51, additionally comprising the step of said processor searching said annotations according to at least one said annotation indexing.
68. The method according to claim 37, additionally comprising the step of said system generating a history database of said detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, said first received DMF, at least a portion of a selected DMF, and any combination thereof.
69. The method according to claim 37, additionally comprising the step of said processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extract at least one portion from said DMF, and any combination thereof.
70. The method according to claim 37, additionally comprising the step of said processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from said RDMF, and any combination thereof.
71. The method according to claim 37, additionally comprising the step of providing said system further comprising at least one rendering module configured to render said RDMF according to user defined criteria.
72. The method according to claim 37, additionally comprising the step of said receiving module accepting said DMF in a rendered form.
73. The method according to claim 37, additionally comprising the step of said receiving module accepting said DMF in a compressed form and extracting the rendered form.
74. A computer implemented method for reviewing of Digital Media Files (DMF) received from at least one user-end, via a communication system, to at least one server computer comprising the steps of:
a. providing:
i. said server comprising at least one memory storage operatively coupled with at least one processor;
ii. at least one receiving module operatively in communication with said server configured to receive first and at least one second DMFs from at least one said user;
iii. at least one analysis module operatively in communication with said server configured to detect visual and/or audio changes between said DMFs; and,
iv. at least one recognition module operatively in communication with said server configured to recognize features comprising changes between said DMFs;
b. receiving at least one first and at least one second DMFs by means of said receiving module;
c. comparing the visual and/or audio content of each of said DMFs and detecting visual and/or sound changes between said DMFs by means of said analysis module;
d. recognizing features comprising said detected changes by means of said recognition module;
e. logging said detected changes information and said recognition information;
f. generating at least one reviewed digital media file (RDMF) comprising said second DMF configured to present at least a portion of said detected changes information and said changed feature recognition in comparison to at least one first said DMF; and,
g. forwarding said at least one RDMF to at least one recipient;
wherein said step (e) additionally comprises the steps of:
a. generating at least one said RDMF for every DMF version received in comparison to at least one previous RDMF; and,
b. adding said newly generated changes information to previous said changes information, thereby generating a RDMF comprising sequential hierarchy and/or history of detected changes and/or said recognized features between different sequential DMFs.
75. The method according to claim 74, additionally comprising the step of presenting at least a portion of said changed features hierarchy and/or history in a manner selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in said RDMF, in a separate file, in a file configured as task list, and any combination thereof.
76. The method according to claim 74, additionally comprising the step of said analysis module detecting visual and/or sound changes between said DMFs at the level of: said entire DMFs, at least a portion of said DMFs, said DMF shot level, said DMF frame level, said DMF shot and/or audio track transition level, and any combination thereof.
77. The method according to claim 74, additionally comprising the step of said recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
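Claim 77's fingerprint with tolerances can be illustrated by a standard perceptual-hash scheme: each feature is reduced to a compact bit signature, and two signatures "recognize" the same feature if they differ in at most a tolerated number of bits. The 8×8 average-hash and the bit threshold below are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch: a feature "fingerprint" with a tolerance, so a
# feature is still recognized after minor visual changes. The 8x8
# average-hash and the 6-bit threshold are assumptions for illustration.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit int.
    Each bit records whether that cell is above the mean intensity."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches(fp_a, fp_b, tolerance=6):
    """Recognition within tolerance: at most `tolerance` of 64 bits differ."""
    return hamming(fp_a, fp_b) <= tolerance

# A feature and a slightly brightened copy of it:
feature = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
edited = [[min(255, v + 10) for v in row] for row in feature]
fp1, fp2 = average_hash(feature), average_hash(edited)
```

Because the hash thresholds against the frame's own mean, a uniform brightness edit leaves the bit pattern nearly unchanged, which is exactly the kind of tolerance the fingerprint needs.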
78. The method according to claim 74, additionally comprising the step of relating at least a portion of said information of at least one said detected change to all of said changed feature occurrences recognized by said recognition module in the same said DMF.
79. The method according to claim 74, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of said changed information, to a feature in said DMF, said RDMF, or both.
80. The method according to claim 74, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
81. The method according to claim 74, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
82. The method according to claim 74, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
83. The method according to claim 74, additionally comprising the step of said analysis module further performing at least one of the following instructions:
a. determining shot separation within each said DMF;
b. determining the location along the timeline of each said shot;
c. determining the time length of each said shot;
d. determining the visual characteristics of each said DMF frame;
e. determining the sound characteristics of each said DMF frame;
f. determining at least one shot transition characteristic of each said DMF;
g. determining the time length of the entire DMF for each said DMF;
h. matching the frames, shots, or both between at least one first said DMF and one second said DMF;
i. determining frame correspondence between at least one first said DMF frame and at least one second said DMF frame;
j. determining shot correspondence between at least one first said DMF shot and at least one second said DMF shot;
k. determining sound characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
l. determining visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
m. determining shot transition characteristics correspondence between at least one first said DMF and at least one second said DMF;
n. determining timing characteristics correspondence between at least one first said DMF and at least one second said DMF; and,
o. any combination thereof.
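The shot-separation determination (instruction a above) is conventionally done by thresholding the difference between consecutive frames' intensity histograms. The sketch below shows that standard approach; the 16-bin histogram, flat-list frames, and 0.5 cut threshold are illustrative assumptions rather than the claimed method.

```python
# Minimal sketch of shot-boundary detection by histogram difference.
# Frames are simplified to flat lists of grayscale values (0-255); the
# 16-bin histogram and the 0.5 threshold are illustrative assumptions.

def histogram(frame, bins=16):
    """Normalized intensity histogram of one frame."""
    h = [0] * bins
    for p in frame:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in h]

def hist_distance(h1, h2):
    """L1 distance between normalized histograms, in [0, 2]."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def shot_boundaries(frames, threshold=0.5):
    """Indices i where a new shot starts at frames[i]."""
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if hist_distance(prev, cur) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two "shots": three dark frames followed by three bright frames.
dark = [[20] * 64] * 3
bright = [[230] * 64] * 3
```

The boundary indices directly give the per-shot timeline locations and lengths called for in instructions b and c.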
84. The method of claim 74, said step of detecting changes by said analysis module additionally comprising the steps of: further providing at least one matching module, operatively in communication with said server, configured to match corresponding portions of said DMFs; further wherein said server is configured to perform said instructions comprising:
a. receive at least one first and at least one second DMFs by said receiving module;
b. distinguish between said different shots to determine shot separators by said matching module in each said DMF;
c. determine shot correspondence between said shots in said first DMF and said second DMF, to determine matched shots by said matching module;
d. determine frame correspondence between said frames in said first DMF and said second DMF, within said matched shots to determine matched frames by said matching module;
e. detect differences between said DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by said analysis module; and,
f. log said detected changes and/or detected features comprising said changes.
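The shot-correspondence step (instructions c and d above) can be sketched as an order-preserving alignment of the two versions' shot lists: each shot is reduced to a signature and matched to the next compatible shot in the other version, so inserted or replaced shots are skipped. The scalar signatures and tolerance below are illustrative assumptions; a real matching module would use richer descriptors.

```python
# Sketch of order-preserving shot matching between two DMF versions.
# Each shot is reduced to one scalar "signature" (e.g., mean intensity);
# the signatures and the tolerance of 10 are illustrative assumptions.

def match_shots(shots_a, shots_b, tolerance=10):
    """Greedy, order-preserving alignment of two shot-signature lists.
    Returns (i, j) index pairs of corresponding shots; shots in B with
    no compatible counterpart in A are treated as inserted/changed."""
    pairs, j = [], 0
    for i, sig_a in enumerate(shots_a):
        while j < len(shots_b):
            if abs(sig_a - shots_b[j]) <= tolerance:
                pairs.append((i, j))
                j += 1
                break
            j += 1   # shots_b[j] has no match yet; skip past it
    return pairs

# v1 has three shots; v2 inserts a new shot (signature 200) after the first.
v1 = [30, 90, 150]
v2 = [30, 200, 90, 150]
```

Indices of version-2 shots absent from the returned pairs are exactly the candidates the analysis module would report as shot-level differences.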
85. The method according to claim 74, additionally comprising the step of graphically presenting at least a portion of said detected changes information visually in said RDMF.
86. The method according to claim 74, additionally comprising the step of presenting said changes graphically in said RDMF at occurrences of said changed feature selected from a group consisting of: all occurrences of said changed feature, user defined occurrences of said changed feature, from the location of said change and onward along said RDMF timeline, on occurrences of said changed feature in derivative versions of said RDMF, and any combination thereof.
87. The method according to claim 74, additionally comprising the step of further providing said system with at least one annotation module operatively in communication with said processor, said annotation module being configured to generate at least one annotation in association with at least one DMF feature.
88. The method according to claim 87, additionally comprising the step of said annotation module generating one or more of said annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
89. The method according to claim 87, additionally comprising the step of said annotation module accepting one or more annotations from at least one external source and associating them with defined features in said RDMF.
90. The method according to claim 87, additionally comprising the step of said annotation module associating at least one annotation with at least one portion of said RDMF.
91. The method according to claim 87, additionally comprising the step of said annotation module generating annotations for one or more detected changes in said RDMF.
92. The method according to claim 87, wherein each said generated annotation of said detected change comprises at least a portion of detected changes information.
93. The method according to claim 87, additionally comprising the step of said annotation module indexing at least one said detected change, indexing at least one said feature comprising detected changes, or both.
94. The method according to claim 87, wherein said annotations comprise at least one selected from a group consisting of: said logged changes, said recognized features comprising changes, said changes information, indexing of said annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
95. The method according to claim 87, additionally comprising the step of: said annotation module editing at least one said annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of said annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting said annotation(s) to at least one second feature, and any combination thereof.
96. The method according to claim 87, additionally comprising the step of said annotation module enabling a user by means of a user interface, to edit at least one of said annotations.
97. The method according to claim 87, additionally comprising the step of said annotation module generating differential graphical representation of annotations relating to their version source and/or their place in said changes hierarchy.
98. The method according to claim 87, additionally comprising the step of said annotation module generating a task list comprising one or more of said annotations.
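Claims 98 and 99 have the annotation module derive a task list from annotations and update it on events such as a detected change or a user edit. The sketch below is a minimal illustration; the event names, task fields, and resolve-on-change behavior are all assumptions, not the claimed design.

```python
# Minimal sketch of a task list built from annotations (cf. claims 98-99).
# The event names ("change_detected", "annotation_edited") and the task
# fields are illustrative assumptions.

class TaskList:
    def __init__(self):
        self.tasks = []          # one task per annotation

    def add_annotation(self, ann_id, text):
        self.tasks.append({"annotation": ann_id, "text": text,
                           "status": "open"})

    def on_event(self, event, ann_id, text=None):
        """Update the list when a change is detected in the DMF or a
        user edits the corresponding annotation."""
        for task in self.tasks:
            if task["annotation"] != ann_id:
                continue
            if event == "change_detected":
                task["status"] = "resolved"   # the requested change landed
            elif event == "annotation_edited":
                task["text"] = text

    def open_tasks(self):
        return [t for t in self.tasks if t["status"] == "open"]

tl = TaskList()
tl.add_annotation("a1", "fix logo color in shot 3")
tl.add_annotation("a2", "shorten intro")
tl.on_event("change_detected", "a1")
```

The resulting `open_tasks()` view is what claim 100 would forward to a third party.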
99. The method according to claim 98, additionally comprising the step of said annotation module updating said task list following at least one event selected from a group consisting of: detecting a change in said DMF, editing of said annotation by at least one said user, and any combination thereof.
100. The method according to claim 99, additionally comprising the step of said annotation module forwarding said task list to at least one third party.
101. The method according to claim 87, additionally comprising the step of said annotation module distributing a different set of said annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
102. The method according to claim 87, additionally comprising the step of said annotation module indexing said annotations according to at least one selected from a group consisting of: user defined information, said changes information, said feature comprising change, DMF version, timing within DMF, said annotation generation date, said annotation editing date, said annotation tag associated by a user, and any combination thereof.
103. The method according to claim 74, additionally comprising the step of said processor searching said annotations according to at least one said annotation indexing.
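Claims 102 and 103 index annotations by several keys (version, timing, tags, and so on) and search by any of those indexes. A minimal sketch is a set of inverted indexes over dictionary-shaped annotations; the particular key names below are illustrative assumptions.

```python
# Sketch of multi-key annotation indexing and search (cf. claims 102-103).
# The index keys ("version", "tag", "feature") are illustrative
# assumptions standing in for the claim's longer list of criteria.

from collections import defaultdict

class AnnotationIndex:
    KEYS = ("version", "tag", "feature")

    def __init__(self):
        # one inverted index per key: value -> list of annotations
        self.index = {k: defaultdict(list) for k in self.KEYS}

    def add(self, annotation):
        for key in self.KEYS:
            if key in annotation:
                self.index[key][annotation[key]].append(annotation)

    def search(self, key, value):
        return self.index[key].get(value, [])

idx = AnnotationIndex()
idx.add({"id": 1, "version": "v2", "tag": "audio", "feature": "intro"})
idx.add({"id": 2, "version": "v2", "tag": "color", "feature": "logo"})
idx.add({"id": 3, "version": "v3", "tag": "color", "feature": "logo"})
```

Each annotation is indexed once per key it carries, so a search by DMF version, user tag, or changed feature is a single dictionary lookup.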
104. The method according to claim 74, additionally comprising the step of said system generating a history database of said detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, said first received DMF, at least a portion of a selected DMF, and any combination thereof.
105. The method according to claim 74, additionally comprising the step of said processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from said DMF, and any combination thereof.
106. The method according to claim 74, additionally comprising the step of said processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from said RDMF, and any combination thereof.
107. The method according to claim 74, additionally comprising the step of providing said system further comprising at least one rendering module configured to render said RDMF according to user defined criteria.
108. The method according to claim 74, additionally comprising the step of said receiving module accepting said DMF in a rendered form.
109. The method according to claim 74, additionally comprising the step of said receiving module accepting said DMF in a compressed form and extracting the rendered form.
110. A computer implemented method for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), said DMFs each comprising one or more shots, each said shot comprising one or more frames, said method characterized by the steps of:
a. providing:
i. at least one receiving module configured to receive first and at least one second DMFs;
ii. at least one matching module configured to match corresponding portions of said DMFs;
iii. at least one analysis module configured to detect visual and/or audio changes between said DMFs;
iv. at least one recognition module configured to recognize features comprising changes between said DMFs;
v. at least one annotation module configured to generate at least one annotation; and,
vi. a non-transitory computer readable storage medium (CRM), operatively in communication with said receiving module, said analysis module, said matching module, said annotation module, and said recognition module, said CRM having computer executable instructions that configure one or more operatively coupled processors to perform said instructions comprising:
b. receive at least one first and at least one second DMFs by said receiving module;
c. distinguish between said different shots to determine shot separators by said matching module in each said DMF;
d. determine shot correspondence between said shots in said first DMF and said second DMF, to determine matched shots by said matching module;
e. determine frame correspondence between said frames in said first DMF and said second DMF, within said matched shots to determine matched frames by said matching module;
f. detect differences between said DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by said analysis module;
g. recognize one or more features associated with a detected difference, by said recognition module;
h. log said detected changes and/or detected features comprising said changes;
i. generate at least one reviewed digital media file (RDMF) comprising said second DMF configured to present at least a portion of said detected changes information and said feature recognition in comparison to at least one first said DMF; and,
j. annotate each said detected change by said annotation module;
further wherein said annotation of detected change feature is transferred to all of said changed feature derivatives in said RDMF and in sequential derivative RDMFs.
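The transfer of an annotation to all derivatives of the changed feature, as claim 110 concludes, can be pictured as follows: each feature occurrence carries a fingerprint, and notes attached to a fingerprint follow every matching occurrence into later versions. For brevity the sketch matches fingerprints exactly, a simplifying assumption; the feature names are hypothetical.

```python
# Sketch of carrying an annotation to every derivative occurrence of a
# changed feature in sequential versions (end of claim 110). Features
# are identified by a fingerprint string; exact-match propagation is a
# simplifying assumption (a real system would match within tolerances).

def propagate(annotations, versions):
    """annotations: {fingerprint: [notes]}.
    versions: list of versions, each a list of feature fingerprints.
    Returns, per version, (fingerprint, notes) for each occurrence."""
    out = []
    for features in versions:
        out.append([(fp, annotations.get(fp, [])) for fp in features])
    return out

# One note attached to the "logo" feature in version 2; in version 3
# the logo occurs twice, so the note reaches both derivatives.
notes = {"logo": ["use new brand color"]}
v2 = ["logo", "title"]
v3 = ["title", "logo", "logo"]
result = propagate(notes, [v2, v3])
```

Because propagation keys on the feature rather than on a timeline position, the note survives reordering and duplication of the feature across RDMF versions.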
111. The method according to claim 110, wherein said correspondence of a shot and/or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
112. The method according to claim 110, additionally comprising the step of determining area correspondence within matched frames.
113. The method according to claim 110, additionally comprising the step of presenting at least a portion of said changed features hierarchy and/or history in a manner selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in said RDMF, in a separate file, in a file configured as task list, and any combination thereof.
114. The method according to claim 110, additionally comprising the step of said analysis module detecting visual and/or sound changes between said DMFs at the level of: said entire DMFs, at least a portion of said DMFs, said DMF shot level, said DMF frame level, said DMF shot and/or audio track transition level, and any combination thereof.
115. The method according to claim 110, additionally comprising the step of said recognition module generating at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
116. The method according to claim 110, additionally comprising the step of relating at least a portion of said information of at least one said detected change to all of said changed feature occurrences recognized by said recognition module in the same said DMF.
117. The method according to claim 110, additionally comprising the step of searching and optionally associating an annotation and/or at least a portion of said changed information, to a feature in said DMF, said RDMF, or both.
118. The method according to claim 110, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
119. The method according to claim 110, additionally comprising the step of generating at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
120. The method according to claim 110, additionally comprising the step of generating at least one retrievable memory log comprising details of recognized features of at least one DMF.
121. The method according to claim 110, additionally comprising the step of said analysis module performing analysis of said visual and/or audio content of each said DMFs by performing at least one of the following instructions:
a. determining shot separation within each said DMF;
b. determining the location along the timeline of each said shot;
c. determining the time length of each said shot;
d. determining the visual characteristics of each said DMF frame;
e. determining the sound characteristics of each said DMF frame;
f. determining at least one shot transition characteristic of each said DMF;
g. determining the time length of the entire DMF for each said DMF;
h. matching the frames, shots, or both between at least one first said DMF and one second said DMF;
i. determining frame correspondence between at least one first said DMF frame and at least one second said DMF frame;
j. determining shot correspondence between at least one first said DMF shot and at least one second said DMF shot;
k. determining sound characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
l. determining visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
m. determining shot transition characteristics correspondence between at least one first said DMF and at least one second said DMF;
n. determining timing characteristics correspondence between at least one first said DMF and at least one second said DMF; and,
o. any combination thereof.
122. The method according to claim 110, additionally comprising the step of graphically presenting at least a portion of said detected changes information visually in said RDMF.
123. The method according to claim 110, additionally comprising the step of presenting said changes graphically in said RDMF at occurrences of said changed feature selected from a group consisting of: all occurrences of said changed feature, user defined occurrences of said changed feature, from the location of said change and onward along said RDMF timeline, on occurrences of said changed feature in derivative versions of said RDMF, and any combination thereof.
124. The method according to claim 110, additionally comprising the step of further providing said system with at least one annotation module operatively in communication with said processor, said annotation module being configured to generate at least one annotation in association with at least one DMF feature.
125. The method according to claim 124, additionally comprising the step of said annotation module generating one or more of said annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
126. The method according to claim 124, additionally comprising the step of said annotation module accepting one or more annotations from at least one external source and associating them with defined features in said RDMF.
127. The method according to claim 124, additionally comprising the step of said annotation module associating at least one annotation with at least one portion of said RDMF.
128. The method according to claim 124, additionally comprising the step of said annotation module generating annotations for one or more detected changes in said RDMF.
129. The method according to claim 124, wherein each said generated annotation of said detected change comprises at least a portion of detected changes information.
130. The method according to claim 124, additionally comprising the step of said annotation module indexing at least one said detected change, indexing at least one said feature comprising detected changes, or both.
131. The method according to claim 124, wherein said annotations comprise at least one selected from a group consisting of: said logged changes, said recognized features comprising changes, said changes information, indexing of said annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
132. The method according to claim 124, additionally comprising the step of: said annotation module editing at least one said annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of said annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting said annotation(s) to at least one second feature, and any combination thereof.
133. The method according to claim 124, additionally comprising the step of said annotation module enabling a user by means of a user interface, to edit at least one of said annotations.
134. The method according to claim 124, additionally comprising the step of said annotation module generating differential graphical representation of annotations relating to their version source and/or their place in said changes hierarchy.
135. The method according to claim 124, additionally comprising the step of said annotation module generating a task list comprising one or more of said annotations.
136. The method according to claim 135, additionally comprising the step of said annotation module updating said task list following at least one event selected from a group consisting of: detecting a change in said DMF, editing of said annotation by at least one said user, and any combination thereof.
137. The method according to claim 135, additionally comprising the step of said annotation module forwarding said task list to at least one third party.
138. The method according to claim 124, additionally comprising the step of said annotation module distributing a different set of said annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
139. The method according to claim 124, additionally comprising the step of said annotation module indexing said annotations according to at least one selected from a group consisting of: user defined information, said changes information, said feature comprising change, DMF version, timing within DMF, said annotation generation date, said annotation editing date, said annotation tag associated by a user, and any combination thereof.
140. The method according to claim 124, additionally comprising the step of said processor searching said annotations according to at least one said annotation indexing.
141. The method according to claim 110, additionally comprising the step of said system generating a history database of said detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, said first received DMF, at least a portion of a selected DMF, and any combination thereof.
142. The method according to claim 110, additionally comprising the step of said processor generating at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from said DMF, and any combination thereof.
143. The method according to claim 110, additionally comprising the step of said processor generating at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from said RDMF, and any combination thereof.
144. The method according to claim 110, additionally comprising the step of providing said system further comprising at least one rendering module configured to render said RDMF according to user defined criteria.
145. The method according to claim 110, additionally comprising the step of said receiving module accepting said DMF in a rendered form.
146. The method according to claim 110, additionally comprising the step of said receiving module accepting said DMF in a compressed form and extracting the rendered form.
147. A detection and annotation system, useful for detecting and annotating changes between two sequential versions of Digital Media Files (DMFs), said DMFs each comprising one or more shots, each said shot comprising one or more frames, said system comprising:
a. at least one receiving module configured to receive first and at least one second DMFs;
b. at least one matching module configured to match corresponding portions of said DMFs;
c. at least one analysis module configured to detect visual and/or audio changes between said DMFs;
d. at least one recognition module configured to recognize features comprising changes between said DMFs;
e. at least one annotation module configured to generate at least one annotation; and,
f. a non-transitory computer readable storage medium (CRM) in communication with said receiving module, said matching module, said analysis module, said recognition module, and said annotation module; said CRM, operatively coupled to at least one processor, having computer executable instructions that configure one or more processors to perform said instructions comprising:
i. receiving at least one first and at least one second DMFs by said receiving module;
ii. distinguishing between said different shots to determine shot separators by said matching module in each said DMF;
iii. determining shot correspondence between said shots in said first DMF and said second DMF, to determine matched shots by said matching module;
iv. determining frame correspondence between said frames in said first DMF and said second DMF, within said matched shots to determine matched frames by said matching module;
v. detecting differences between said DMFs at a level selected from a group consisting of: entire DMF level, frame level, shot level, timing level, transition level, sound level, and any combination thereof, by said analysis module;
vi. recognizing one or more features associated with a detected difference, by said recognition module;
vii. logging said detected changes and/or detected features comprising said changes;
viii. generating at least one reviewed digital media file (RDMF) comprising said second DMF configured to present at least a portion of said detected changes information and said feature recognition in comparison to at least one first said DMF; and,
ix. annotating each said detected change by said annotation module;
further wherein said annotation of detected change feature is transferred to all of said changed feature derivative files in said DMF and sequential DMFs.
148. The system according to claim 147, wherein said correspondence of a shot and/or a frame comprises correspondence of at least one characteristic selected from a group consisting of: sound characteristics, visual characteristics, timing characteristics, transition characteristics, textual characteristics, 3D object characteristics, 2D object characteristics, special effects characteristics, and any combination thereof.
149. The system according to claim 147, additionally comprising the instruction of determining area correspondence within matched frames.
150. The system according to claim 147, wherein said system is configured to present at least a portion of said changed features hierarchy and/or history in a manner selected from a group consisting of: at least a portion of a user interface, in an external editing software, graphically presented in said RDMF, in a separate file, in a file configured as task list, and any combination thereof.
151. The system according to claim 147, wherein said analysis module is configured to detect visual and/or sound changes between said DMFs at the level of: said entire DMFs, at least a portion of said DMFs, said DMF shot level, said DMF frame level, said DMF shot and/or audio track transition level, and any combination thereof.
152. The system according to claim 147, wherein said recognition module is configured to generate at least one fingerprint of a recognized feature comprising tolerances of feature characteristics enabling recognition.
153. The system according to claim 147, wherein said processor is configured to relate at least a portion of said information of at least one said detected change to all of said changed feature occurrences recognized by said recognition module in the same said DMF.
154. The system according to claim 147, wherein said processor is configured to search and optionally associate an annotation and/or at least a portion of said changed information, to a feature in said DMF, said RDMF, or both.
155. The system according to claim 147, wherein said system is configured to generate at least one retrievable memory log comprising hierarchy and/or history of detected changes between a plurality of sequential DMFs.
156. The system according to claim 147, wherein said system is configured to generate at least one retrievable memory log comprising hierarchy and/or history of recognized features comprising changes between a plurality of sequential DMFs.
157. The system according to claim 147, wherein said system is configured to generate at least one retrievable memory log comprising details of recognized features of at least one DMF.
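The retrievable memory log of claims 155-157 can be sketched as follows. This is a minimal illustrative model only; the claims do not disclose an implementation, and the class, field names, and serialization format here are all assumptions:

```python
# Illustrative sketch (an assumption, not the claimed design) of a
# retrievable log holding the history of changes detected between
# sequential DMF versions, in the spirit of claims 155-157.
import json
from datetime import datetime, timezone

class ChangeLog:
    def __init__(self):
        self.entries = []

    def record(self, old_version, new_version, feature, change_info):
        # One entry per detected change, keyed by the versions compared
        # and the feature that changed.
        self.entries.append({
            "from": old_version,
            "to": new_version,
            "feature": feature,
            "change": change_info,
            "logged_at": datetime.now(timezone.utc).isoformat(),
        })

    def history(self, feature):
        # Ordered history of every logged change for one feature.
        return [e for e in self.entries if e["feature"] == feature]

    def dump(self):
        # Serialized, retrievable form of the whole log.
        return json.dumps(self.entries)

log = ChangeLog()
log.record("v1", "v2", "shot_03", {"type": "visual", "detail": "color grade"})
log.record("v2", "v3", "shot_03", {"type": "timing", "detail": "trimmed 12 frames"})
print(len(log.history("shot_03")))  # 2
```

Ordering the entries by insertion keeps the per-feature history chronological, which is what a change hierarchy/history view would traverse.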
158. The system according to claim 147, wherein said analysis module is configured to analyze said visual and/or audio content of each of said DMFs by performing at least one of the following instructions:
a. determine shot separation within each said DMF;
b. determine the location along the timeline of each said shot;
c. determine the time length of each said shot;
d. determine the visual characteristics of each said DMF frame;
e. determine the sound characteristics of each said DMF frame;
f. determine at least one shot transition characteristic of each said DMF;
g. determine the time length of the entire DMF for each said DMF;
h. match the frames, shots, or both between at least one first said DMF and at least one second said DMF;
i. determine frame correspondence between at least one first said DMF frame and at least one second said DMF frame;
j. determine shot correspondence between at least one first said DMF shot and at least one second said DMF shot;
k. determine sound characteristic correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
l. determine visual characteristics correspondence between at least one first said DMF and at least one second said DMF frames, shots, or both;
m. determine shot transition characteristics correspondence between at least one first said DMF and at least one second said DMF;
n. determine timing characteristics correspondence between at least one first said DMF and at least one second said DMF; and,
o. any combination thereof.
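Two of the analysis steps above (shot separation, step a, and frame correspondence, step i) can be sketched as follows. The scalar "frame signatures" are stand-ins for real visual descriptors, and both function names and the threshold are illustrative assumptions, not the claimed analysis module:

```python
# Hypothetical sketch of shot separation (step a) and frame
# correspondence (step i). Scalar per-frame signatures stand in for
# real visual descriptors; a production system would use robust
# image features instead.

def split_shots(frames, threshold):
    """Return shots as lists of frame indices; a new shot starts wherever
    the difference between consecutive frame signatures exceeds threshold."""
    shots, current = [], [0]
    for i in range(1, len(frames)):
        if abs(frames[i] - frames[i - 1]) > threshold:
            shots.append(current)
            current = [i]
        else:
            current.append(i)
    shots.append(current)
    return shots

def match_frames(frames_a, frames_b):
    """For each frame of version A, the index of the closest frame in B."""
    return [min(range(len(frames_b)), key=lambda j: abs(a - frames_b[j]))
            for a in frames_a]

v1 = [0.1, 0.12, 0.9, 0.91, 0.95]    # a dark shot followed by a bright one
print(split_shots(v1, 0.5))          # [[0, 1], [2, 3, 4]]
print(match_frames([0.13, 0.9], v1)) # [1, 2]
```

Once frames are matched this way, the per-frame and per-shot correspondences of steps i-n reduce to comparing characteristics of the matched pairs.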
159. The system according to claim 147, wherein said processor is configured to graphically present at least a portion of said detected changes information in said RDMF.
160. The system according to claim 147, wherein said processor is configured to present said changes graphically in said RDMF at occurrences of said changed feature selected from a group consisting of: all occurrences of said changed feature, user defined occurrences of said changed feature, from the location of said change and onward along said RDMF timeline, on occurrences of said changed feature in derivative versions of said RDMF, and any combination thereof.
161. The system according to claim 147, wherein said system further comprises at least one annotation module operatively in communication with said processor, said annotation module is configured to generate at least one annotation in association with at least one DMF feature.
162. The system according to claim 161, wherein said annotation module is configured to generate one or more of said annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
163. The system according to claim 161, wherein said annotation module is configured to generate one or more of said annotations in a manner selected from a group consisting of: automatically, manually, according to user defined criteria, and any combination thereof.
164. The system according to claim 161, wherein said annotation module is configured to accept one or more annotations from at least one external source and associate them with defined features in said RDMF.
165. The system according to claim 161, further comprising an annotation module configured to associate at least one annotation with at least one portion of said RDMF.
166. The system according to claim 161, wherein said annotation module is configured to generate annotations for one or more detected changes in said RDMF.
167. The system according to claim 161, wherein each said generated annotation of said detected change comprises at least a portion of detected changed information.
168. The system according to claim 161, wherein said annotation module is further configured to index at least one of said detected changes, index at least one of said features comprising detected changes, or both.
169. The system according to claim 161, wherein said annotations comprise at least one selected from a group consisting of: said logged changes, said recognized features comprising changes, said changes information, indexing of said annotation content, comment(s) inserted by at least one user, link(s), images, changes history, and any combination thereof.
170. The system according to claim 161, wherein said annotation module is configured to edit at least one said annotation by performing at least one selected from a group consisting of: adding annotation, deleting annotation, changing annotation details, manipulating graphical representation of said annotation, editing annotation content, editing annotation location on screen, editing annotation location along timeline, editing annotation image, connecting said annotation(s) to at least one second feature, and any combination thereof.
171. The system according to claim 161, wherein said annotation module is further configured to allow a user by means of a user interface, to edit at least one of said annotations.
172. The system according to claim 161, wherein said annotation module is configured to generate differential graphical representation of annotations relating to their version source and/or their place in said changes hierarchy.
173. The system according to claim 161, wherein said annotation module is configured to be updated following at least one event selected from a group consisting of: detecting a change in said DMF, editing of said annotation by at least one said user, and any combination thereof.
174. The system according to claim 161, wherein said annotation module is configured to generate a task list comprising one or more of said annotations.
175. The system according to claim 174, wherein said annotation module is configured to forward said task list to at least one third party.
176. The system according to claim 174, wherein said annotation module is configured to distribute different sets of said annotations into different final reviewed digital media files (RDMFs) according to user defined criteria.
177. The system according to claim 161, wherein said annotation module is configured to index said annotations according to at least one selected from a group consisting of: user defined information, said changes information, said feature comprising change, DMF version, timing within DMF, said annotation generation date, said annotation editing date, said annotation tag associated by a user, and any combination thereof.
178. The system according to claim 161, wherein said processor is configured to search annotations according to at least one said annotation indexing.
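The annotation indexing and search of claims 177-178 can be sketched as follows. The index field names were chosen to mirror the claim language and, like the class itself, are illustrative assumptions rather than the claimed annotation module:

```python
# Hypothetical sketch of annotation indexing and search (claims 177-178):
# each annotation carries several indexed fields and can be retrieved by
# matching on any subset of them. Field names are assumptions.

class AnnotationStore:
    def __init__(self):
        self.annotations = []

    def add(self, text, *, version, feature, timecode, tag=None):
        # Index fields mirror claim 177: DMF version, the feature
        # comprising the change, timing within the DMF, and a user tag.
        self.annotations.append({
            "text": text,
            "version": version,
            "feature": feature,
            "timecode": timecode,
            "tag": tag,
        })

    def search(self, **criteria):
        # Return every annotation whose indexed fields satisfy all criteria.
        return [a for a in self.annotations
                if all(a.get(k) == v for k, v in criteria.items())]

store = AnnotationStore()
store.add("fix color grade", version="v2", feature="shot_03",
          timecode=12.4, tag="todo")
store.add("approved", version="v3", feature="shot_03", timecode=12.4)
print(len(store.search(feature="shot_03")))  # 2
print(store.search(tag="todo")[0]["text"])   # fix color grade
```

Filtering on the `tag` field is also the natural basis for the task-list generation of claim 174: the task list is simply the search result for a "todo"-style tag.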
179. The system according to claim 147, wherein said system is configured to generate a history database of said detected changes information in at least one DMF in comparison to at least one other DMF selected from a group consisting of: all received DMF versions, at least one second DMF, a user selected DMF, a most recently received DMF, said first received DMF, at least a portion of a selected DMF, and any combination thereof.
180. The system according to claim 147, wherein said processor is further configured to generate at least one DMF by: combining at least a portion of different DMFs, combining at least one first and at least one second portion of the same DMF, extracting at least one portion from said DMF, and any combination thereof.
181. The system according to claim 147, wherein said processor is further configured to generate at least one RDMF by: combining at least a portion of different RDMFs, combining at least one first and at least one second portion of the same RDMF, extracting at least one portion from said RDMF, and any combination thereof.
182. The system according to claim 147, wherein said system further comprises at least one rendering module configured to render said RDMF according to user defined criteria.
183. The system according to claim 147, wherein said receiving module is configured to accept said DMF in a rendered form.
184. The system according to claim 147, wherein said receiving module is configured to accept said DMF in a compressed form and extract the rendered form.
PCT/IL2016/050626 2015-06-15 2016-06-15 A digital media reviewing system and methods thereof WO2016203469A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562175454P 2015-06-15 2015-06-15
US62/175,454 2015-06-15

Publications (1)

Publication Number Publication Date
WO2016203469A1 true WO2016203469A1 (en) 2016-12-22

Family

ID=57545320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2016/050626 WO2016203469A1 (en) 2015-06-15 2016-06-15 A digital media reviewing system and methods thereof

Country Status (1)

Country Link
WO (1) WO2016203469A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090070367A1 (en) * 2007-09-06 2009-03-12 Lenovo (Beijing) Limited Multi-version control method for data documents and device thereof
US20120272153A1 (en) * 2011-04-19 2012-10-25 Tovi Grossman Hierarchical display and navigation of document revision histories
US20130275312A1 (en) * 2012-04-12 2013-10-17 Avid Technology, Inc. Methods and systems for collaborative media creation
US20140281872A1 (en) * 2013-03-14 2014-09-18 Workshare, Ltd. System for Tracking Changes in a Collaborative Document Editing Environment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230169465A1 (en) * 2004-11-08 2023-06-01 Open Text Corporation Systems and methods for management of networked collaboration
US11954646B2 (en) * 2004-11-08 2024-04-09 Open Text Corporation Systems and methods for management of networked collaboration
CN116744048A (en) * 2023-08-14 2023-09-12 杭州面朝信息科技有限公司 Online video marking method, system and storage medium
CN116744048B (en) * 2023-08-14 2023-12-15 杭州面朝互动科技有限公司 Online video marking method, system and storage medium

Similar Documents

Publication Publication Date Title
US10714145B2 (en) Systems and methods to associate multimedia tags with user comments and generate user modifiable snippets around a tag time for efficient storage and sharing of tagged items
JP6861454B2 (en) Storyboard instruction video production from shared and personalized assets
CN101300567B (en) Method for media sharing and authoring on the web
EP2132624B1 (en) Automatically generating audiovisual works
US8341525B1 (en) System and methods for collaborative online multimedia production
US8631047B2 (en) Editing 3D video
US20180308524A1 (en) System and method for preparing and capturing a video file embedded with an image file
KR20090093904A (en) Apparatus and method for scene variation robust multimedia image analysis, and system for multimedia editing based on objects
JPWO2008136466A1 (en) Movie editing device
WO2016203469A1 (en) A digital media reviewing system and methods thereof
CN104025465A (en) Logging events in media files including frame matching
US20230419997A1 (en) Automatic Non-Linear Editing Style Transfer
US10915715B2 (en) System and method for identifying and tagging assets within an AV file
Ronfard et al. OpenKinoAI: A Framework for Intelligent Cinematography and Editing of Live Performances
KR20200022995A (en) Content production system
KR20190060027A (en) Method and apparatus for editing video based on character emotions
US20150032718A1 (en) Method and system for searches in digital content
Lin et al. VideoMap: Supporting Video Exploration, Brainstorming, and Prototyping in the Latent Space
Duan et al. Meetor: A Human-Centered Automatic Video Editing System for Meeting Recordings
Sawada Recast: an interactive platform for personal media curation and distribution
Mateer et al. A vision-based postproduction tool for footage logging, analysis, and annotation
Hullfish Avid Uncut: Workflows, Tips, and Techniques from Hollywood Pros
Perkins Adobe Photoshop CS3 Extended for 3D and video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16811136

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16811136

Country of ref document: EP

Kind code of ref document: A1