US20120311448A1 - System and methods for collaborative online multimedia production - Google Patents
- Publication number: US20120311448A1 (application US 13/283,575)
- Authority
- US
- United States
- Prior art keywords
- video
- script
- project
- screenplay
- scenes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/101—Collaborative creation, e.g. joint development of products or services
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
Definitions
- This disclosure is related to the field of collaborative online video production applications, and in particular, a multimedia system for video productions with embedded script and commands.
- mobile applications of this novel web application may be downloaded to mobile devices to notify users about a new or ongoing video production in their current geographical location and to invite them to upload specific videos, background screens, news shots, sounds, music, event coverage, collaborative storytelling, and so forth. Alternatively, users may initiate a production triggered by advantageous situations. For example, major news, social, or personal events in a specific location will notify all or pre-selected users of the mobile app to collaborate on scripting, shooting, editing, and producing videos on the fly.
- FIG. 1 illustrates a Distributed Multimedia Production (DMP), according to some embodiments.
- FIG. 2 illustrates a table of various camera locations with respect to an actor's positions and the corresponding angles, scenes, layers, according to some embodiments.
- FIG. 3 illustrates an example of a hierarchy between application and user interfaces, according to some embodiments.
- FIG. 4 illustrates an example of various elements within a shot, according to some embodiments.
- FIG. 5 illustrates an example of a functional block within a main application page, according to some embodiments.
- FIG. 6 illustrates an example of various functional elements within a user's idea page, according to some embodiments.
- FIG. 7 illustrates an example of a functional block within a script page, according to some embodiments.
- FIG. 8 illustrates an example of a functional block within an Editor (or Director) page, according to some embodiments.
- FIG. 9 illustrates an example of a functional block within an actor page, according to some embodiments.
- FIG. 10 illustrates an implementation example of a script within a video editor, according to some embodiments.
- FIG. 11 illustrates an example of a File Uploader with Chroma keys to eliminate green or blue background color, according to some embodiments.
- FIG. 12 illustrates an example of a File Uploader assigning uploaded videos to target shots within an embedded script in a video editor, according to some embodiments.
- FIG. 13 illustrates a method for producing a multimedia project, according to some embodiments.
- FIGS. 14A and 14B illustrate a script writer tool and intake tool, according to an example embodiment.
- FIG. 15 illustrates a video editor tool, according to an example embodiment.
- FIG. 16 illustrates a mobile device display, according to an example embodiment.
- Presented herein is a novel platform that alleviates such requirements by opening up the video creation, production, and distribution process to a collaborative process.
- Such methods and applications may be used to democratize digital video processes and thus empower a whole new generation of artists, writers, content, and markets by exponentially increasing the number of professional and amateur video creators and industry players contributing to the whole video digital content and economy.
- online video production communities using this novel web application interact with script writers.
- the script is seamlessly embedded into the video editor to simplify the production process and balance production roles among users.
- diverse global user communities may be formed that include a variety of participants, such as students, writers, actors, cameramen, artists, filmmakers, musicians, educators, journalists, travelers, activists, sports enthusiasts, and bloggers.
- Such a novel production environment enables practically anyone who wants to create original video content.
- the script may encompass placeholders, command lines, and producer/editor comments to automatically upload videos captured by socially connected users into the pre-assigned slots in the video editor timeline to enable collaborative storytelling and make video production a social experience. These users may do so by using the App version on their mobile devices.
- Such novel platform creates aggregate value by offering an environment for collective efforts and collaboration instead of today's tiny and disconnected individual efforts or expensive and inflexible production studio styles.
- This “Community-Driven” web application also brings together amateur, professionals, and celebrities, where feedback or cameo appearances by celebrities and professionals may be the ultimate reward to amateur users.
- a mobile App has both a client-side portion and software on network servers, which receive a plurality of video, audio, image, command, text, and comment data streams from a plurality of mobile stations to produce videos on the fly or in a time-delayed fashion. Users may elect to keep copies of their own files on their mobile devices. Unsophisticated users may configure their mobile App from a pre-selected menu to set up the complete or a portion of the simplified video production portal application from both the client and server sides, depending on their roles in the production process. For instance, a football event may trigger a video project where users are scattered around the football field. The production owner uses the script tool to create scenes and shots, where scenes may represent the quarters of the game, the introduction, a summary, best plays, highlights, key players, and so forth. Actors are now cameramen using their mobile devices to follow the script. The mobile App will be configured based on their roles and will allow them to simultaneously view video shots and interchange roles on the fly depending on game progress.
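The role-driven mobile App configuration described above can be sketched as follows. This is a minimal illustration; the role names, tool sets, and `MobileAppConfig` structure are assumptions for the sketch, not part of the patent's disclosed implementation.

```python
from dataclasses import dataclass, field

# Hypothetical mapping from a production role to the tool subset the
# mobile App enables for that role (illustrative assumption).
ROLE_TOOLS = {
    "producer": ["script_tool", "casting", "timeline", "render"],
    "cameraman": ["capture", "upload", "live_preview"],
    "editor": ["timeline", "trim", "effects"],
}

@dataclass
class MobileAppConfig:
    user: str
    role: str
    tools: list = field(default_factory=list)

def configure_app(user: str, role: str) -> MobileAppConfig:
    """Enable only the tool subset matching the user's production role."""
    return MobileAppConfig(user=user, role=role, tools=ROLE_TOOLS.get(role, []))

cfg = configure_app("alice", "cameraman")
```

A user reassigned mid-production (for example, a cameraman promoted to editor during the game) would simply be reconfigured by calling `configure_app` again with the new role.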
- a system 100 includes a Distributed Multimedia Production (DMP) platform 110 communicatively coupled to the Internet 120 and one or more databases, DB(1), DB(2), . . . DB(N) 102. These system elements may be distributed among a collection of servers located around the globe.
- the configuration of system 100 allows collaborative processing incorporating multiple distributed participants.
- the DMP 110 enables a new generation of socially-connected professionals and amateurs to collaborate on high-quality video productions. Participants are able to work together in the process of generating the video, as well as to make the resultant work available online and accessible to mobile devices.
- the collaborative and distributed type web applications described herein provide online tools to write scripts, add commands, shoot videos, edit, produce, market, and distribute quality videos in a systematic, flexible, seamless, and simple way so as to make each user's experience enjoyable, rewarding, and exciting.
- the DMP platform 110 is a collaborative web application having modules for compiling a composition, authorizing users, providing tools to users, and processing payments or subscriptions. Other modules may be added as a function of the capabilities and desires of the collaborators.
- the DMP platform may be implemented as a cloud service, where functions are provided as a networked solution.
- the DMP platform may be implemented as distributed modules, where software is downloaded or otherwise provided to the collaborators.
- the modules of DMP 110 include tools 116 which provide applications and utilities for each of the users. These tools 116 will typically include tools specific to the functions performed. In this way, tools 116 may include a set of tools for authors, a set of tools for videographers, a set of tools for editing, a set of tools for compilation, and other functions.
- the tools 116 may further provide access to other applications which are not included in the DMP 110 but are available through a networked connection, such as Internet 120 .
- participants are able to access external applications and tools, such as third party applications or Tools as a Service (TAS) applications, whereby, tools 116 may interface with Application Programming Interfaces (APIs) seamlessly. In this way, the participant may select the feature or application desired, and tools 116 will set up the connection automatically and allow access to the application.
- the tools may be provided as services or may be downloadable as widgets for use at the collaborator's computing or mobile device.
- the tools 116 may further provide interfaces and APIs to the user for interfacing with external devices, such as cameras, lighting equipment, sound equipment, digitizing devices, website, other resources and software programs.
- the tools module 116 may further provide drivers for control of external devices and software desired for use on the collaborative project.
- the tools module 116 maintains these various mechanisms and works in cooperation with the other modules within DMP 110 , such as the authorization module 118 , compilation module 112 , and payment module 114 .
- the compilation module 112 allows users to build the multimedia work by compiling the various components generated and contributed by each of the collaborative users.
- the compilation module 112 processes uploaded files and video to allow fast online processing. For instance, characters, scenes, shots within scenes, commands, dialogues, actions, and comments are created and included during the script writing process to build the video's initial structure. Such structure is automatically integrated into the video editor timeline. Comments may be part of the shot metadata that users, specifically actors and cameramen, can input to describe the building blocks of elements used to create the scene, such as types of furniture, clothing, jewelry, accessories, and so forth, enabling viewers to select these items while watching the video to determine vendors selling them online, in stores, or in nearby stores depending on the user's location. This embedded advertising becomes part of the revenue model for this novel web application.
- high-quality videos are converted to low-resolution files during the upload process to enable users to edit them on the fly; green or blue background screens are automatically removed, and videos are trimmed to assign each trimmed video file to its corresponding slot in the video editor timeline.
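The upload-time processing just described can be sketched as a small pipeline: generate a low-resolution proxy for fast online editing, flag chroma-key removal, trim the clip to the scripted duration, and assign it to its pre-assigned timeline slot. The function name, field names, and fixed proxy resolution are illustrative assumptions.

```python
def process_upload(clip, slot):
    """Prepare an uploaded clip for fast online editing (sketch)."""
    proxy = {
        "source": clip["file"],
        "resolution": "480p",  # low-res proxy enables on-the-fly editing
        # green/blue screens are removed automatically during upload
        "chroma_key_removed": clip.get("background") in ("green", "blue"),
        # trim the clip to the duration the script assigns to this slot
        "duration": min(clip["duration"], slot["max_duration"]),
        "timeline_slot": slot["id"],
    }
    return proxy

clip = {"file": "shot_07.mov", "duration": 14.0, "background": "green"}
slot = {"id": "scene2_shot7", "max_duration": 10.0}
proxy = process_upload(clip, slot)
```

At render time the compiler would discard the proxy and re-apply the recorded edits to the original high-quality source, consistent with the description that follows.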
- the compiler renders the video to its original high quality resolution for online, broadcast, or cable distribution.
- Information included in the script such as characters, scenes, shots within scenes, commands, dialogues, actions, and comments, may be integrated with the video during the rendering process to provide keywords and descriptions that may be used to promote the video, associate relevant commercials and advertisement during viewing, and help search engines identify clips within the video.
- This data may be stored in a new format with the video data, or may be stored in a separate file mapped to the video data.
- a web application may include HTML and style sheet documents, which provide the graphics and look of the webpage and are downloaded to users' drives and cached. It may also include text files, which are validated by the browser, such as XML, Java, Flash, or other files.
- the authorization module 118 identifies users by identity, such as by roles or contribution, and applies rules for processing and enabling operations.
- the authorization module 118 assigns and monitors rights based on a processing scheme. In some embodiments the processing scheme is predetermined prior to starting a collaborative project or work. In some embodiments the processing scheme may be dynamically modified by an administrator.
- the authorization module 118 works in coordination with the payments module 114 to bill participants and to verify payment for each collaborative process according to the processing scheme.
- the payments may be based on collaboration parameters, such as by data content or by time used. Further, the payment module may enable a profit-sharing or other arrangement.
- the payments module 114 provides the payment status information to the authorization module 118 ; in response, the authorization module 118 may enable or prohibit users with respect to the various functions of the DMP 110 .
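The interplay between the authorization module 118 and the payments module 114 described above can be sketched as a single gating check. The rights table, role names, and function signature are assumptions made for illustration only.

```python
# Hypothetical role-to-rights table (illustrative assumption); the patent
# describes rights such as editing the script, editing the video, and
# uploading clips, granted per role.
ROLE_RIGHTS = {
    "writer": {"edit_script"},
    "editor": {"edit_script", "edit_video"},
    "actor": {"upload_clip"},
}

def is_authorized(role: str, action: str, payment_ok: bool) -> bool:
    """Gate a DMP function on both the role's rights and payment status.

    Payment status reported by the payments module can enable or
    prohibit access to the various DMP functions.
    """
    return payment_ok and action in ROLE_RIGHTS.get(role, set())
```

A dynamically modified processing scheme, as mentioned above, would correspond to an administrator mutating `ROLE_RIGHTS` at run time.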
- the DMP 110 may be further offered as a cloud service, such as Software as a Service (SAS).
- the DMP 110 platform may upgrade the various modules without interruption or action by the users.
- the collaboration of users is then facilitated through the cloud service(s), enabling collaborators to work together asynchronously but with the most recent versions and information.
- the cloud service may access other information available through the Internet, and may also access proprietary databases of the collaborators.
- the service is provided as a platform or application in the cloud, the service is then available for easy access from any device having capability to access the Internet or network.
- the ability to collaborate from anywhere provides users with enhanced flexibility. Similarly, multiple users may decide to collaborate in real-time on complex scenes, layered storyline, or live feeds.
- the DMP 110 may be used for Internet productions and publications, such as video and TV applications available at sites on the web.
- the DMP 110 is configured for use and communication with Internet protocols.
- the DMP 110 may post or publish video content and monitor its use and viewing statistics. This information may be used as feedback in further development of a given project or as survey type information for future projects.
- the DMP 110 may be used to create casting calls or review screenplay snippets. This may extend to film festivals for coordination and planning of events.
- Individual films may be created on or provided to the DMP 110 , for review, scheduling and selection by a film review committee.
- the reviewers could provide critique and edits to the film, having ability to manipulate scene information. This is available as the project is configurable by the DMP 110 .
- a DMP 110 eliminates costly tools, equipment and royalties by providing or recommending video capture kits with camera, microphone, green screen, lights, and so forth, as well as providing royalty free stock footage and soundtracks.
- the DMP 110 enables asynchronous shots taped by actors to be assembled into a single shot within a scene, in accordance to script information, to provide streamlined production processes.
- the production process provides simple writing tools that expand an idea into a detailed screenplay.
- the DMP 110 provides powerful editing tools to layer video elements, incorporate and modify video and audio elements, title and subscript scenes, and add effects and transitions to create a high-quality video production.
- social networking tools allow writers, producers, actors, cameramen, and artists to collaborate and share work at any stage using a computing or mobile device.
- a collaborative platform may be used to create videos, including short videos of offbeat comedy skits, spoofs, training videos, commercials, infomercials, documentaries, and full-length movies. In some examples these collaborations may produce videos of short duration, less than ten minutes, or of long duration.
- the collaborative platform accommodates multiple contributors.
- a producer, writers, editors, actors, cameramen, artists, musicians, sound engineers, and others may all participate and contribute at different stages of the video production.
- the roles of the participants may include producers, writers, actors, cameramen, engineers, editors, and so forth.
- a producer is an authenticated owner of a particular production having ultimate control over its metadata, access rights, scene releases and credits.
- the producer may post a call for writers, actors, cameramen, or others for the project.
- the producer selects and authenticates writers, actors and other participants.
- Writers are authenticated users granted access to a page for editing the script, referred to as the Edit Script page, for a particular scene or all scenes in a production.
- the writers may have a partition that allows them to collaborate among themselves prior to posting their writings for viewing, critique, and learning by others. Once the writings are so posted, an editor or producer will review, comment and revise the writings.
- Script may include characters, scenes, shots within scenes, commands, dialogues, actions, and comments.
- An editor is an authenticated user granted access to a page for editing the video, referred to as the Edit Video page, for a particular scene or all scenes in a production.
- the actors then act out the writings, or script; the actors are authenticated users having a defined character role in a particular scene and therefore are granted access to a page to upload clips, referred to as the Upload Clip page, for that scene.
- Actors may include celebrities providing cameos which may be integrated into the video project.
- An artist is an authenticated user given the task to generate background images and videos for given scenes when directors/editors cannot identify suitable ones in the application database.
- Engineers and musicians are authenticated users given the task to generate sound effects, video effects and music for given scenes when directors/editors cannot identify suitable ones in the application database. Administrators are DMP personnel having access to certain editorial functions. Super Administrators are DMP technical personnel having access to user accounts and low-level functions, as well as having control to configure the DMP according to a processing scheme.
- the access rights include:
- the DMP 110 supports a variety of processing functions, some of these are detailed below according to an example embodiment.
- Lighting settings may be set in a similar way without being the same as actor/camera settings.
- FIG. 2 illustrates a table depicting an actor's positions and angles with respect to their own camera/green screen and to each other. Such guidelines may be integrated with the script to facilitate the video production process.
- a script writer may include additional fields to enable seamless integration with video editor and to allow actors to easily determine how to shoot and time their videos.
- FIG. 2 illustrates an example scenario of a frame 200 having multiple fields.
- a timeline track displaying information from the script alongside the actual clips being tied together may be used as a control that moves in tandem with the actual timeline content as it is zoomed and scrolled (like the Ruler control).
- an editing panel may appear when a shot clip is selected in the timeline, offering the following elements:
- the collaborative online video production application and its associated payment stream models are based on the application ecosystem ranging from the collaborative environment, video content, talented users, target audiences, to partners.
- the payments module 114 calculates fees for accessing talents promoted by the application. Access may be by internal or external users/consumers. For instance, a producer may want to hire a video editor, script writer and actors to manifest their vision for a production.
- the payments module 114 may further incorporate a payment transaction charge as a flat rate, one-time payment, royalties, or a full license to the application. Subscriptions may be implemented to provide different rates to groups and video production channels of relevance to the consumer.
- a reward program may be implemented by ranking videos and types of users.
- a reward program may consist of, or be linked to, points collected by users depending on their contributions and/or the revenue generated by their videos.
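The points-based reward program just described can be sketched in a few lines; the specific weighting of contributions versus revenue is an illustrative assumption, as the patent does not specify one.

```python
def reward_points(contributions: int, video_revenue: float,
                  per_contribution: int = 10, per_dollar: int = 2) -> int:
    """Accrue points from both contributions and revenue generated by a
    user's videos (weights are hypothetical)."""
    return contributions * per_contribution + int(video_revenue * per_dollar)

# e.g. a user with 5 contributions whose videos earned $30.00
points = reward_points(5, 30.0)
```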
- the DMP 110 may be used for engagement and interactivity with the audience, such as fans, sponsors, partner, and so forth.
- the system 100 further allows for partnerships with third party distributors, vendors, and services.
- the DMP further provides expanded access to a royalty-free stock content library and to effects, transitions, themes, and so forth.
- Some embodiments implement transaction fees for payment transfers between accounts.
- Advertising may be displayed on the DMP site and in correspondence, with the ability to block ads on the site and in correspondence. Advertising returns may be applied by the payments module 114 where site content is displayed or otherwise used on third party sites and services, and the ability to retain or regain ownership of user content is provided through the DMP 110. Further, the DMP 110 may be used to account for and process hosting fees for podcast channels.
- the following describes a video production system 200, illustrated in FIG. 3, which distributes video production so as to satisfy requirements of collaboration among script writers, producers, actors, cameramen, stage artists, and musicians who are scattered around the globe and may be unaware of each other's presence.
- an online production of distributed multimedia tool, referred to as a CRU or CRU tool, alleviates many of the video production challenges by opening up the video creation, production, and distribution process to a group of users, and may even open the process to the general public.
- the CRU tool democratizes the digital video process to empower a whole new generation of artists, writers, content, and markets by exponentially increasing the number of professional and amateur video creators and players contributing to the whole digital video content and economy.
- the CRU platform 212 includes a variety of elements and functionalities. As illustrated in FIG. 3 , the system 200 includes multiple CRUs 212 , 214 , 216 , each coupled to multiple environments. The CRUs are coupled to environments including a viewer interface 218 , customer interface 220 , and advertiser interface 222 . The CRUs are further coupled to a production environment including a variety of elements and functions.
- One production function is referred to as the Script Dicer module 208, which enables script writers to enter their scenes, lines, and storyline in a creative, collaborative way to enable actors and producers/directors to seamlessly assemble the video.
- Such script dicing includes, but is not limited to, tagging/linking each scene, actor line, location, and time.
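Script dicing as described above can be sketched as breaking a scene into tagged shot records that link each actor line to scene, location, and time metadata. The field names and `dice_script` signature are assumptions for the sketch.

```python
def dice_script(scene_id: str, location: str, lines):
    """Tag each (actor, dialogue) pair with scene/location/time metadata
    so actors and editors can assemble shots seamlessly (sketch)."""
    shots = []
    for t, (actor, dialogue) in enumerate(lines):
        shots.append({
            "scene": scene_id,
            "location": location,
            "time_index": t,   # ordering within the scene
            "actor": actor,
            "line": dialogue,
        })
    return shots

shots = dice_script("S1", "cafe", [("ANNA", "Hello."), ("BEN", "Hi there.")])
```

Each tagged record would then map directly to a pre-assigned slot in the video editor timeline, consistent with the embedded-script behavior described earlier.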
- kits may be provided under a variety of scenarios, including for a fee or as part of a complementary software development kit.
- a kit may include a green/blue/unicolor background screen, microphone, video capturing camera, and/or an illumination light source.
- actors may be given guidance on how to position the camera and illumination source.
- the actor toolkit may include a driver to seamlessly interface with the CRU cloud.
- the Producer/Director Control module 206 functionality component of the CRU platform enables a producer/director to integrate all video elements by associating actors, cameramen, background video or images, and music with each scene before final editing and production.
- the music module 210 enables a musician to upload, create, and edit a soundtrack suitable for the video scenes. It also includes a database of music tracks from which to select. Such music tracks may be labeled/tagged, but are not necessarily limited, by type, instrument, length, modularity, genre, and so forth.
- the Background/Stage module 202 enables photographers, cameramen, artists, or amateurs to upload, create, and edit static images, animations, or videos suitable for scene backgrounds. It also includes a database of such material from which to select, such as when a unique background is not desired. Such background images/videos are labeled/tagged, but not necessarily limited, by type, daytime, size, duration (such as for videos), modularity, and genre. Many factors are considered when combining actors' videos with background scenes to homogenize the video. For instance, lighting and camera angle are some factors that are typically taken into consideration during the selection and integration process.
- the system 200 allows artists and amateurs to upload their images and videos using different angles or 360 degree viewing capabilities, as is the case with three-dimensional maps.
- the system 200 brings the collaborative video making experience to multiple people without requiring them to go through years of education and experience to penetrate such an industry, and creates new industries based on the creativity and free exploration CRU users enjoy individually or collectively.
- a master and slave node hierarchy is used to balance control between online users.
- a master user has the responsibility to invite participants, assign roles, and oversee content capturing and production processes. Each user is able to see all content generated by users, in real time or archived, but only the master node is capable of activating a subset of users to interact on given scenes of the video.
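The master/slave node hierarchy described above can be sketched as a small session object in which only the master may activate a subset of users per scene. The class and method names are assumptions for illustration.

```python
class Session:
    """Sketch of the master-node hierarchy balancing control among users."""

    def __init__(self, master: str, users):
        self.master = master
        self.users = set(users) | {master}
        self.active = {}  # scene -> set of users activated for that scene

    def activate(self, requester: str, scene: str, subset):
        """Only the master node may activate users on a given scene."""
        if requester != self.master:
            raise PermissionError("only the master node activates users")
        # ignore names that are not session participants
        self.active[scene] = set(subset) & self.users
        return self.active[scene]

s = Session("prod", ["cam1", "cam2"])
active = s.activate("prod", "scene_1", ["cam1"])
```

All participants would still see all content; activation only controls who may interact on the scene, matching the description above.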
- a set of tools may include a green/blue background, video/audio capturing means such as video camcorders, and software interfaces and drivers.
- the user interface is a Graphical User Interface (GUI), together with hardware interfaces, which are linked to the CRU.
- FIG. 4 illustrates the various video elements according to one embodiment, where variables represent the parameters and features of the video.
- the video elements are specified by field structure 300 including background field 302 , music selection field 304 , and user feed fields 306 , 308 , 310 .
- the Tb/Tm represent the type of background and music; the Gb/Gm represent the genre of the background and music; Db represents the day time of the background; Im represents instruments; Lb/Lm represent the duration of the background and music; rj represents position; tj represents a time stamp; Aj represents the angle and illumination of the jth actor.
- the type of background is identified by the variable Tb. Types of backgrounds include static images, such as different angles of office or restaurant areas, or video backgrounds, such as moving car or beach scenes.
- the type of music is represented by the variable Tm such as suspense, cheerful, sad, or sound effects.
- the genre, represented by the variable G may include comedy, drama, horror, action, documentary, newsfeed, storytelling, sports, social, or kids.
- the instrument(s) used in the audio are represented by I.
- the duration of the background scene or music is represented by the variable L.
- the position of the actor within a shot is represented as rj; and the time stamp is represented as tj.
- the angle and illumination of a jth actor, with respect to a reference, is represented as Aj. This structure enables multiple users and allows these users to upload video files.
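The field structure 300 of FIG. 4 can be sketched as a pair of record types, one for the shot-level background/music fields and one per user feed. The class names and example values are assumptions for illustration; only the variable names (Tb, Gb, Db, Lb, Tm, Gm, Im, Lm, rj, tj, Aj) come from the description above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UserFeed:
    """One actor feed (fields 306-310), using FIG. 4's variables."""
    actor: str
    r: tuple       # rj: position of the actor within the shot
    t: float       # tj: time stamp
    A: float       # Aj: angle and illumination w.r.t. a reference

@dataclass
class Shot:
    """Sketch of field structure 300: background field 302 and
    music selection field 304 plus the attached user feeds."""
    Tb: str        # type of background (e.g. "office", "beach video")
    Gb: str        # genre of the background
    Db: str        # day time of the background (e.g. "day", "night")
    Lb: float      # duration of the background
    Tm: str        # type of music (e.g. "suspense", "cheerful")
    Gm: str        # genre of the music
    Im: List[str]  # instruments used in the audio
    Lm: float      # duration of the music
    feeds: List[UserFeed]

shot = Shot(Tb="office", Gb="comedy", Db="day", Lb=1.8,
            Tm="cheerful", Gm="comedy", Im=["piano"], Lm=1.8,
            feeds=[UserFeed(actor="A1", r=(0.4, 0.5), t=0.0, A=-14.0)])
print(shot.feeds[0].A)  # -> -14.0
```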
- Editing, integrating, and rendering online video may be accomplished by reducing video quality during the upload process, using distributed servers that process and run specific or general editing, integrating, and rendering requests to recover original video quality.
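One way to realize the reduced-quality upload path above is a proxy workflow: the client edits against a low-resolution proxy, and the distributed servers later conform the edit against the full-quality original. The helper below only builds an ffmpeg command line (using the real `-i` and `-vf scale` options); the file names and the 360-pixel proxy height are hypothetical.

```python
def proxy_command(src: str, dst: str, height: int = 360) -> list:
    """Sketch: build an ffmpeg command that downscales `src` to a
    low-resolution proxy for upload; servers keep the original for
    final rendering. scale=-2:<h> preserves aspect ratio with an
    even width."""
    return ["ffmpeg", "-i", src, "-vf", f"scale=-2:{height}", dst]

print(proxy_command("master.mov", "proxy.mp4"))
# A caller would run this with subprocess.run(...) where ffmpeg is installed.
```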
- a CRU video editor includes a unique feature that dynamically adapts the video capturing and illumination angles of the different videos that will eventually be combined to create the final scene.
- Any user can initiate the video creation process, such as an amateur who can simply post their simple ideas. Such posting may also initiate an alert signal or message to script writers, directors, actors, cameramen, and musicians (other participants) interested in similar ideas to further advance the collaborative video process.
- Industry players looking to create commercials for their products can use CRU to run a competition among users to create the winning commercial.
- Advertisers of products and services having a relationship to a particular video theme or genre, or desiring to make a connection with a particular audience, are able to advertise their products or services, and act as participants. Incorporating an advertising function provides a revenue stream for video producers.
- the CRU platform may be provided as a free service to all users at all levels.
- users may reach certain levels, such as the actor, script writer, musician, or director level, after they achieve a particular goal.
- the goal may reflect successful accumulation of a number of points. This may be based on the number of released videos from a given user's contributions.
- a CRU participant may advertise a video project on the social network(s), where their interest graph identifies potential participants.
- Social networks may also be used to advertise the video after completion.
- the CRU may incorporate its own video distribution channels as well as conventional hooks to social media.
- the CRU engine keeps track of CRU videos activities and revenue regardless of where they reside.
- a CRU system may include an internal system to enable CRU users to monetize their contributions and develop a reputation within the CRU community. This will attract others and create groups of users active in the video production business.
- FIG. 5 illustrates the functional building blocks of a Graphical User Interface (GUI) expressed in a home page 500 .
- the home page presents a variety of different functionalities for users.
- a user may share an idea to solicit interest from script writers, directors, actors, cameramen, musicians, and creators of sound effects, visual effects, or background scenes and videos.
- a user may further insert a screenplay script manually or dynamically by uploading script files.
- the user may select a role, as in selection box 502 .
- a user may select a role as a director, actor, cameraman, sound engineer, score composer, music content creator, or artist creating visual effects, background images, videos, and so forth.
- a director allocates roles based on the script and has the right to modify the script at his leisure while notifying other project members, who also may provide their inputs to the script for the director's review and acceptance or rejection.
- the director receives the modifications and additions, but has the right to modify the script so as to avoid simultaneous or conflicting changes.
- Each actor may have multiple insertion points in a script in a given film.
- the script may include lines that will be eventually filled by the actor during video shoots. For example, a director may decide to shoot the same scene using different angles or facial expressions and then decide which ones to use during editing.
- a database of information may include multiple partitions, and is used for storing ideas, scripts, names of directors and actors, cameramen, sounds, and visuals, which are selected in the project; this will be transparent to users, as some of them will have access to view and utilize other projects' contents, such as after paying a higher subscription fee. This fee may be shared with other users who produced these videos. These elements are accessed by selection of the database selection box 504.
- On the homepage GUI 500, users can view cool videos, and then may be encouraged to either register or login to learn about how these videos are created.
- the homepage 500 further includes tools, accessed through a tool selection box 508 .
- the tools are for development, editing, effects editing, publishing, and so forth.
- Casting agents may also be given the opportunity to register and login to view the actor's audition videos and are encouraged to give feedback. Casting agents interested in communicating directly with actors may be asked to pay a fee to access such a service. Such payment scheme may assign fees to be collected when actors purchase their video kits.
- the kits may be part of the tools, and provided as a development tool kit.
- the video creation process is presented in a linear fashion, where the users may follow a plan to build the video, or participants may add their portions asynchronously, allowing the video to develop through an iterative process.
- FIG. 6 illustrates a user GUI 600 for inputting an idea for a project.
- the selection box 602 identifies what type of project to create, whether it is a new project, or continuing an existing one.
- the user may also select from archived elements to configure a team to build a project.
- FIG. 7 illustrates a user GUI 700 for inputting script information.
- the script input box 702 may be an area where the user identifies the script specifics, or may be an area to upload a script created off-line.
- the script may be identified by standard or agreed upon format.
- FIG. 8 illustrates a user GUI 800 for inputting director's instructions, guidance and notes.
- the director creates a group of insertion points.
- the director's GUI 800 identifies a group Sn A 1 Tn 1 that refers to actor A 1 in the nth scene or shot starting at time Tn 1.
- the group is a collection of points that the director has created that will be filled by video of actors which may be filmed later. Actors and other project members can view the project at any time but may have no rights to modify contents except for their own contributions.
- the director acts as the master participant and has higher authority and control than other participants. Master control portions 802 identify those areas that are used to implement the director's decisions.
- the director will specify the particular components for each scene, as well as the participants and their roles. Director and editor roles may be identical in this novel online application.
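The insertion-point groups the director creates (the "Sn A1 Tn1" labels of GUI 800) can be sketched as a table keyed by scene, actor, and start time, whose slots remain empty until an actor uploads the corresponding video. The function names and clip file name are hypothetical.

```python
# Hypothetical sketch of the director's insertion-point groups: each key
# (scene, actor, start_time) mirrors a "Sn A1 Tn1" label, and its value
# is later filled with the actor's uploaded video.
insertion_points = {}

def create_point(scene: int, actor: str, start: float) -> None:
    """Director (master) creates an empty slot to be filmed later."""
    insertion_points[(scene, actor, start)] = None

def fill_point(scene: int, actor: str, start: float, clip: str) -> None:
    """Actor (slave) may only fill slots the director created."""
    key = (scene, actor, start)
    if key not in insertion_points:
        raise KeyError(f"no insertion point {key}")
    insertion_points[key] = clip

create_point(1, "A1", 0.0)    # S1 A1 T1_1
create_point(1, "A1", 5.2)    # a second insertion point for the same actor
fill_point(1, "A1", 0.0, "a1_scene1_take3.mp4")
print(sum(v is None for v in insertion_points.values()))  # -> 1 (slot still open)
```

This also reflects the note that each actor may have multiple insertion points in a given film: the same actor simply owns several keys.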
- FIG. 9 illustrates a user GUI 900 where the actors may respond to control instructions of the director.
- the actor acts as a slave to the directions of the director.
- the slave control portions 902 identify those areas that actors use to implement the director's instructions.
- Actors use the recommended video kit elements to record the different videos assigned by director.
- the GUI will guide them on where to position the light, camera, and other items, such as a fan, eyeglasses, or an item in hand.
- the user GUI 900 presents options to the actor to select a scene, sounds, and so forth. This illustrates the slave mode of the system, which allows actions in response to the master. For voice over functions the user may play the video and add the voice when appropriate, such as in an animated project.
- a video project is put together to illustrate multiple video portions and application of the audio portions.
- Textual information may be provided to instruct actors and other participants.
- An “auto” button may be checked to allow the video editor to automatically adapt to uploaded video durations.
- Various fields may be used to label each shot as part of a given scene. Adjustment controls, such as horizontal, vertical, and depth sliders, may be used to provide actors with desired shooting angles. In the present example, the camera is shooting horizontally from a −14 degree angle and the shot duration is set to 1.8 seconds.
- the script dialogue, shots, and actions are embedded into the video editor.
- FIG. 11 illustrates one example of a video file uploader 1100 equipped with functionality to modify a green screen or blue screen.
- the uploader 1100 includes various sliders and adjustment mechanisms.
- the uploader 1100 may be used to remove the green or blue background of an uploaded video.
- a user may drag a color square along the green shade spectrum or along the video itself, and in this way, reduce, eliminate or adjust the green/blue color of the background.
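The green-screen removal in the uploader can be sketched as a chroma-key mask: pixels whose green channel dominates red and blue are made transparent. This is a minimal NumPy sketch; the `threshold` parameter stands in for the draggable color square described above and is an assumption, not the disclosed algorithm.

```python
import numpy as np

def remove_green(frame: np.ndarray, threshold: float = 1.3) -> np.ndarray:
    """Return an RGBA frame where green-dominant pixels are transparent.

    `frame` is an (H, W, 3) uint8 RGB image; `threshold` controls how
    strongly green must dominate red and blue to count as background.
    """
    rgb = frame.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (g > threshold * r) & (g > threshold * b)   # green-dominant pixels
    alpha = np.where(mask, 0, 255).astype(np.uint8)    # 0 = fully transparent
    return np.dstack([frame, alpha])                   # append alpha channel

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (10, 200, 10)    # green-screen pixel -> becomes transparent
frame[0, 1] = (200, 40, 30)    # foreground pixel   -> stays opaque
out = remove_green(frame)
print(out[0, 0, 3], out[0, 1, 3])  # -> 0 255
```

Dragging along the "green shade spectrum" would correspond to keying on a sampled reference color rather than a fixed channel ratio; the ratio test is the simplest stand-in.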
- the video file uploader 1100 is adapted to upload a user's files, where general files may be uploaded to video production general folders. These general files are added to the web application's general database.
- a user selects the destination of files associated with a shot to include them in the corresponding script section in the video editor timeline.
- a user is able to remove green and blue backgrounds of uploaded videos.
- a user is able to trim videos to comply with the script, and adjust according to a timeline.
- a user is able to edit videos and may upload video file that includes multiple shots while indicating start and end times of each shot or scene.
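Trimming an uploaded video to comply with its script slot can be done with ffmpeg's real `-ss`/`-to` seek options and stream copy. The helper below only constructs the command; the file names and times are hypothetical, and a real deployment might re-encode instead of stream-copying depending on codec and cut accuracy requirements.

```python
import subprocess  # used by a caller that actually runs the command

def trim_command(src: str, dst: str, start: float, end: float) -> list:
    """Build an ffmpeg command that trims `src` to [start, end] seconds
    so a shot matches the duration its script slot specifies."""
    return ["ffmpeg", "-i", src,
            "-ss", str(start), "-to", str(end),
            "-c", "copy", dst]

cmd = trim_command("a1_scene1_take3.mp4", "a1_scene1_trimmed.mp4", 0.0, 1.8)
print(cmd)
# subprocess.run(cmd, check=True)  # uncomment where ffmpeg is installed
```

Indicating start and end times of each shot within a multi-shot upload, as described above, would amount to calling this once per (start, end) pair.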
- FIG. 12 illustrates the uploader GUI 1200 for storing the video.
- the uploader 1200 provides users with the scenes, shots, and character selection.
- the uploader GUI 1200 may include a variety of configurations, such as identifying the timing on a timeline for placement of the uploaded content.
- FIG. 13 illustrates a method for generating a multimedia project according to an example embodiment.
- the method 1300 starts with a new video idea or master uploading or entering a script, operation 1302 .
- the uploading may be done by a user initiating a project or may be in response to a request received from a director or other project initiator or owner.
- once the script is uploaded, the system extracts script component information and applies this information to a timeline for the project.
- the extracted or entered components may include scenes, shots within scenes, commands, comments, characters, character assignments, dialogue, and character lines.
- the participant may then send out invitations to potential or desired participants, operation 1304 .
- the invitations may be posted on a designated website, may be sent out to individuals through email, social network, professional network or other community communication network.
- These invitations may be to fill specific roles, such as characters, and may also be for technical editors, video editors, script writers, photographers, and other roles needed for collaboration on the project.
- Responses are received through the system, operation 1306.
- the participant may then select other participants from the responses received, operation 1308 .
- the participant may request further information, similar to auditions, so as to complete the selection process.
- the process then assigns roles, operation 1310 .
- the collaboration is then incorporated into the multimedia production environment.
- the script is effectively overlaid on a timeline and characters per scene are placed at the time when their action occurs. This allows collaborators to add their contributions to the correct position in the project.
- the script is tagged and the components each have a unique identifier.
- the system tags these so that they are seamlessly added to the project. In this way, video for a given scene or corresponding shots is uploaded and mapped into the project at the correct slot in the video editor timeline.
- the user merely posts the contribution to the project and the system reads the contribution tags and incorporates it according to the tag. Tagging allows the system to automatically perform steps that were done manually in previous systems and solutions. This allows the system to incorporate script components into the video production environment or other multimedia production environment.
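The tag-driven placement described above can be sketched in a few lines: every script component carries a unique identifier, so an uploaded contribution lands in its timeline slot without manual positioning. The `"S1.SH1.A1"` tag format (scene.shot.actor) and the file names are assumptions for illustration.

```python
# Hypothetical timeline keyed by script-component tags; each slot knows
# its start time and eventually holds the contributed clip.
timeline = {
    "S1.SH1.A1": {"start": 0.0, "clip": None},
    "S1.SH2.A2": {"start": 1.8, "clip": None},
}

def incorporate(upload: dict) -> None:
    """Read the contribution's tag and drop the clip into its slot,
    mirroring the automatic mapping the system performs."""
    slot = timeline[upload["tag"]]
    slot["clip"] = upload["file"]

incorporate({"tag": "S1.SH2.A2", "file": "a2_shot2.mp4"})
print(timeline["S1.SH2.A2"]["clip"])  # -> a2_shot2.mp4
```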
- a video production web application incorporates a collaborative environment providing invitations to participants, similar to a call for papers or review in an academic setting.
- the invitation may be provided to a designated group or to a general public audience.
- the master initiates a session by uploading or incorporating a script to the system, thus triggering an invitation mechanism to invite users to participate in video application.
- the script may include characters, scenes, shots within scenes, commands, dialogues, actions, and comments.
- the invitation is sent to potential participants. This may involve sending an email to a user account, or to a social media contact.
- the invitation is posted for review and acceptance by multiple potential participants, such as posting on a social media site.
- a director may assign the producer role to a video production owner, who then selects crew from respondents. The producer then assigns roles to individual participants selected from the respondents. If there are no satisfactory respondents, then the producer or master may send out a specific invitation to one or more desired participants to fill a role.
- FIG. 14A illustrates a script writer tool 1400 that includes modules for script file storage 1402 , script component extraction and mapping module 1404 , character selection and role assignment 1406 , instructions and settings 1408 , timeline incorporation of the script components 1410 , and editing the script 1412 .
- the character selection includes both the original character creation as well as assignment of that character to a participant.
- the scenes may be a collection of video shots, or master created scenes. The scene may specify the background, descriptions and flavor of the scene. Technical directions may include the shots to take for a given scene and sequence, as well as camera angles, lighting specifics, and so forth.
- the script writer tool 1400 allows the master and other participants to add commands and comments to the various scenes, characters and other instructions. Authenticated users may access the script in a file format.
- the script writer tool 1400 is used to create, edit, and modify the components of a script, such as action, command, and dialogue.
- the action describes the scene and motions;
- the command provides further instructions; and
- the dialogue provides the lines the characters speak.
- the dialogue is provided on the scene for adding in audio after filming, as in karaoke videos.
- the script writer tool 1400 enables the script writer to format according to multiple aspects, such as to adjust the typeface/font, line spacing and type area, language, as well as to specify the page per minute of screen time. This enables the script writer to adjust the script according to venue, such as for an American or European movie.
- the script writer may further edit according to prose, such as to focus on audible and visual elements.
- the prose selected by the script writer will provide explanations for the participants.
- the script writer tool 1400 may further include a storyboarding module to enable the script writer to develop a story line which can be translated into the final video scenes.
- the storyboard module may start with an editable template that enables the user to quickly build a story line, such as to have drag and drop features, people, actions, and scenes.
- the storyboard module may be useful in creating an animated portion of a movie or an entire animated movie.
- the script writer tool 1400 includes a digital rights management module 1420, which may incorporate multiple modules.
- a first module may be used to verify that the material incorporated into the script does not infringe the copyrighted material of others, such as by comparing it to a database external to the script writer tool 1400.
- a second module may be used to apply a Digital Rights Management (DRM) security mechanism, such as encryption or other means.
- the script file storage unit 1402 stores the script created and uploaded by a writer, director or other with privileges allowing inputs to the script.
- the script file may be edited by multiple authorized collaborators.
- Each script includes a variety of components, such as characters, scenes, actions, background, music and audio information and so forth.
- the script component extraction module 1404 identifies these components in the script file and uses this information to identify the roles that will be used to prepare the video film project. For example, the script component extraction module 1404 identifies a character, and then enables the director or casting director to select a collaborator to fill this role.
- the selected collaborator, or actor, is then given privileges which allow the collaborator to access the script, the character's lines, definition, and actions, as well as to upload their contributions. In this example, the actor's contribution may be a video of the actor acting out their lines.
- the script component extraction module 1404 identifies the time when the actor's lines are to occur in the video project.
- the script component extraction module 1404 creates various files for the components of the script file. These files are then used to compile the contributions of the various contributors into a final product.
- the script component extraction module 1404 works in coordination with the timeline incorporation module 1410 , which receives the contributions of the collaborators and incorporates them into the timeline. In this way the script provides the plan for the video project.
- the components include characters, instructions, settings, and definitions, wherein the collaborators use the components to create their contributions. The received contributions are then implemented into the video project.
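The extraction step can be sketched with a small parser, assuming the "standard or agreed upon format" mentioned earlier is conventional screenplay layout: `INT.`/`EXT.` headings open scenes, and an all-caps line introduces a character's dialogue. The actual rules of the script component extraction module 1404 are not disclosed; this is only an illustrative stand-in.

```python
import re

SCENE = re.compile(r"^(INT\.|EXT\.)")            # scene heading prefixes
CHARACTER = re.compile(r"^([A-Z][A-Z ]+)$")      # all-caps character cue

def extract_components(script: str) -> dict:
    """Pull scenes and characters out of screenplay-formatted text,
    the raw material the timeline and role-assignment modules need."""
    scenes, characters = [], set()
    for line in script.splitlines():
        line = line.strip()
        if SCENE.match(line):
            scenes.append(line)                  # new scene heading
        elif CHARACTER.fullmatch(line):
            characters.add(line)                 # character cue line
    return {"scenes": scenes, "characters": sorted(characters)}

sample = """INT. OFFICE - DAY
ALICE
We ship tomorrow.
EXT. BEACH - NIGHT
BOB
Not without the script.
"""
print(extract_components(sample))
```

The scene list would feed the timeline incorporation module 1410, while the character set drives character selection and role assignment 1406.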
- the script writer tool 1400 enables collaborators to edit the script, when the collaborator has editing privileges.
- the editing module 1412 enables such editing of the script file.
- the collaborator edits are identified as changes to the script.
- the director may accept or reject the edits.
- the edits may be presented to multiple collaborators for group acceptance and discussion. Once accepted, the edits become part of the script.
- FIG. 14B illustrates an embodiment of a script intake module 1450 which receives the script creations and components from the script writer tool 1400 and extracts information from the script for distribution throughout the collaborators. This enables each participant to provide their portion of the movie while understanding the context and other components of the production.
- the script intake module 1450 includes a script component extractor 1452 and a dialogue extractor 1454 , which extracts the characters and dialogue from the script. These components are stored accordingly and role assignments are applied. For example, a main character is associated with that character's scenes and lines in the script. The participant selected as the main character will be authorized to access this information. The actor will further be able to upload their video and audio portions.
- a module 1456 applies the timing overlay to the script, by coordinating the script to the timeline.
- the script intake module 1450 further distributes the script components, such as lines, timing, technical features, and so forth to the collaborators.
- FIG. 15 illustrates a video editor 1500 , having modules for script and related information 1502 , image, video and audio file handling 1504 , editing tools 1506 , timeline editor 1508 , and video viewing window 1510 .
- a user may select a scene from the video production page to edit shots and to assemble the scene. The user may add transitions between scenes. The final video is rendered to its original video quality after all scenes are successfully assembled.
- An optional film stock module 1520 may be included to access film stock available either freely or for a fee. Such film stock may be incorporated into the movie.
- FIG. 16 illustrates a mobile device display screen 1600 .
- the mobile application for the collaborative video production product provides a video display portion 1602 , timeline portion 1604 which corresponds to the video displayed, and a control portion 1606 .
- the control portion 1606 may include a variety of controls, ranging from drag and drop instructions that allow the user to edit the video by dragging control elements to the video, to social network interfaces that allow sharing of the video editing in real time.
- the video or multimedia project is displayed for multiple users in real time.
- the collaborators may discuss the video using their mobile devices, or one or more collaborators may be using their PC or other computing or mobile device.
- a user having a mobile e-reader may send script or other information to other users from the e-reader.
- Some mobile devices have capability to perform readability and other statistical calculations, which may be performed on the video project and then provided as feedback to other users.
- Still other embodiments may provide analysis and use information which may be used to refine the video project, or to identify advertisers.
- the collaborators access a third party service which identifies images in the video project and matches these to brands and products. This information may be used to procure advertising revenue from these companies.
- the mobile application may connect to social media allowing easy upload and presentation on Internet and other applications. The collaborators may solicit feedback and suggestions from viewers to refine and improve the video.
- the mobile application may store the video project and associated data in a cloud environment, where multiple collaborators can access and edit the project.
- templates are provided on which a multimedia production may be built; for a horror movie, for example, the various scene selections may be provided, as well as character information, scary voices and noises, and links to information on this genre. Users may also build templates, such as for a series of movies or productions with a common theme, such as a science series. Educators may use the collaborative system to build projects with students, where the educator enters the script information, which may be narrative or textbook scripting such as for a documentary, and students access this information and add to the project. The end result is a multimedia presentation illustrating concepts learned.
- Sports casters may use such a system to incorporate footage taken by local photographers and incorporate into nationwide or worldwide video feeds, and other projects.
- the sports caster provides a script identifying the information desired for the video or sports cast, identifying specific views from specific locations, footage of specific teams and players, and so forth, and then sends out a request for participants.
- the sports caster does not have to go through the videos to manually position them in the film; they are already marked according to their location on the timeline according to the script. The editor then merely watches the films to select the ones desired.
- Actor Kit Vendors such as companies selling Video camcorders, green/blue screens, external microphones, and Lighting, may use the collaborative system to enable sale of their goods. Advertisers may advertise on the system for consumer goods, media sites, movie & TV releases, and events, specifically targeted at the video creators and may advertise on the resultant video. Service Providers may include talent agencies, talent coaches, art schools & programs. Industry Productions may create commercial videos, host best video competitions, as well as to provide advertisements, announcements, tutorials, training materials, news feeds, and travel videos. Cable networks may license such application to produce its video ads and content.
Abstract
Description
- This application claims priority from the U.S. Provisional Patent Applications listed below:
- 1. U.S. Provisional Application Ser. No. 61/493,173, filed on 3 Jun. 2011, entitled System and Methods for Distributed Multimedia Production, Maha Achour and Samy Achour, inventors; and
- 2. U.S. Provisional Application Ser. No. 61/498,944, filed on 20 Jun. 2011, entitled Systems and Methods for Distributed Multimedia Production, Maha Achour and Samy Achour, inventors.
- 3. U.S. Provisional Application Ser. No. 61/514,446, filed on 02 Aug. 2011, entitled System and Methods for Collaborative Online Multimedia Production, Maha Achour and Doug Anarino, inventors.
- 4. U.S. Provisional Application Ser. No. 61/626,654, filed on 30 Sep. 2011, entitled System and Methods for Collaborative Online Multimedia Production, Maha Achour and Doug Anarino, inventors.
- All of the above-listed patent documents are incorporated herein by reference in their entireties, including figures, tables, claims, and all other matter filed or incorporated by reference in them.
- This disclosure is related to the field of collaborative online video production applications, and in particular, a multimedia system for video productions with embedded script and commands.
- Many of today's multimedia tasks are performed using audiovisual capturing tools to generate content that is then fed to expensive and sophisticated centralized editing and composing systems for titling, sequencing, super-positioning, effects generation and rendering before final release. Such a centralized approach discourages distributed multimedia production techniques and does not facilitate content feeds generated by professional and amateur entertainers, artists, media creators, and producers distributed across the globe. This is particularly the case with current video production systems where the script is a manuscript separate from the video creation process.
- By using conventional video editors to implement an online video production application, the production team tasks are not balanced among users, as the editor bears the most challenging and time-consuming tasks. Additionally, the production crew still needs to be present during video shoots. For instance, editors typically perform a variety of tasks in processing videos uploaded by crew members, including, but not limited to: (i) remove the green or blue screen and smooth the edges; (ii) trim the video and adjust the video length in compliance with the script and/or producer/editor requests; and (iii) identify each video and associate it with its corresponding scene or shot within the video editor timeline.
- With the emergence of online video content distributions, many amateur artists have attempted to produce their own videos using hardware and software tools available to them. Such approaches not only require having access to these systems and learning how to use them but also require that all video elements—from actors and background setup to sound and effects—be present in the same location and at the same time. Such stringent requirements are difficult to accommodate when scriptwriters, producers, actors, cameramen, stage artists, and musicians are working asynchronously wherever they happen to be at the time. Hence, there is a need for a systematic mechanism by which videos are seamlessly placed directly in the video editor timeline after removing the green and/or blue backgrounds. Similarly, multiple users may decide to collaborate in real-time on complex scenes, layered storyline, or live feeds. Furthermore, mobile applications of this novel web application (App) may be downloaded on mobile devices to notify users about a new or ongoing video production in their current geographical locations to upload specific videos, background screen, news shots, sounds, music, cover events, collaborative storytelling, and so forth. Or, users may initiate a production triggered by advantageous situations. For example, major news, social, or personal events in specific location will notify all or pre-selected users using such mobile app to collaborate on scripting, shooting, editing, and producing videos on the fly.
FIG. 1 illustrates a Distributed Multimedia Production (DMP), according to some embodiments. -
FIG. 2 illustrates a table of various camera locations with respect to an actor's positions and the corresponding angles, scenes, layers, according to some embodiments. -
FIG. 3 illustrates an example of a hierarchy between application and user interfaces, according to some embodiments. -
FIG. 4 illustrates an example of various elements within a shot, according to some embodiments. -
FIG. 5 illustrates an example of a functional block within a main application page, according to some embodiments. -
FIG. 6 illustrates an example of various functional elements within a user's idea page, according to some embodiments. -
FIG. 7 illustrates an example of a functional block within a script page, according to some embodiments. -
FIG. 8 illustrates an example of a functional block within an Editor (or Director) page, according to some embodiments. -
FIG. 9 illustrates an example of a functional block within an actor page, according to some embodiments. -
FIG. 10 illustrates an implementation example of a script within a video editor, according to some embodiments. -
FIG. 11 illustrates an example of a File Uploader with Chroma keys to eliminate green or blue background color, according to some embodiments. -
FIG. 12 illustrates an example of a File Uploader assigning uploaded videos to target shots within an embedded script in a video editor, according to some embodiments. -
FIG. 13 illustrates a method for producing a multimedia project, according to some embodiments. -
FIGS. 14A and 14B illustrate a script writer tool and intake tool, according to an example embodiment. -
FIG. 15 illustrates a video editor tool, according to an example embodiment. -
FIG. 16 illustrates a mobile device display, according to an example embodiment. - Presented herein is a novel platform that alleviates such requirements by opening up the video creation, production, and distribution process to a collaborative process. Such methods and applications may be used to democratize digital video processes and thus empower a whole new generation of artists, writers, content, and markets by exponentially increasing the number of professional and amateur video creators and industry players contributing to the whole video digital content and economy. Unlike conventional online video editors, online video production communities using this novel web application have interaction with script writers. Hence, the script is seamlessly embedded into the video editor to simplify the production process and balance production roles among users. Eventually, diverse global user communities may be formed that include a variety of participants, such as students, writers, actors, cameramen, artists, filmmakers, musicians, educators, journalists, travelers, activists, sports enthusiasts, and bloggers. Such a novel production environment enables practically anyone who wants to create original video content. Furthermore, the script may encompass placeholders, command lines, and producer/editor comments to automatically upload videos captured by socially connected users into the pre-assigned slots in the video editor timeline to enable collaborative storytelling and make video production a social experience. These users may do so by using the App version on their mobile devices. Such novel platform creates aggregate value by offering an environment for collective efforts and collaboration instead of today's tiny and disconnected individual efforts or expensive and inflexible production studio styles. 
This “Community-Driven” web application also brings together amateurs, professionals, and celebrities, where feedback or cameo appearances by celebrities and professionals may be the ultimate reward for amateur users. A mobile App has both a client-side portion and software on network servers which receives a plurality of video, audio, image, command, text, and comment data streams from a plurality of mobile stations to produce videos on the fly or in a time-delayed fashion. Users may select to keep copies of their own files on their mobile devices. Unsophisticated users may configure their mobile App from a pre-selected menu to set up the complete or a portion of the simplified video production portal application from both the client and server sides, depending on their roles in the production process. For instance, a football event may trigger a video project where users are scattered around the football field. The production owner uses the script tool to create scenes and shots, where scenes may represent the quarters of the game, the introduction, a summary, best plays, highlights, key players, and so forth. Actors become cameramen, using their mobile devices to follow the script. The mobile App is configured based on their roles and allows them to simultaneously view video shots and interchange roles on the fly depending on game progress.
- In some embodiments, a
system 100, as illustrated in FIG. 1, includes a Distributed Multimedia Production (DMP) platform 110 communicatively coupled to the Internet 120 and one or more databases, DB(1), DB(2), . . . DB(N) 102. These system elements may be distributed among a collection of servers located around the globe. The configuration of system 100 allows collaborative processing incorporating multiple distributed participants. The DMP 110 enables a new generation of socially-connected professionals and amateurs to collaborate on high-quality video productions. Participants are able to work together in the process of generating the video, as well as to make the resultant work available online and accessible to mobile devices. The collaborative and distributed web applications described herein provide online tools to write scripts, add commands, shoot videos, edit, produce, market, and distribute quality videos in a systematic, flexible, seamless, and simple way so as to make each user's experience enjoyable, rewarding, and exciting. - In one example the
DMP platform 110 is a collaborative web application having modules for compiling a composition, authorizing users, providing tools to users, and payment or subscription processing. Other modules may be added as a function of the capabilities and desires of the collaborators. The DMP platform may be implemented as a cloud service, where functions are provided as a networked solution. The DMP platform may be implemented as distributed modules, where software is downloaded or otherwise provided to the collaborators. The modules of DMP 110 include tools 116 which provide applications and utilities for each of the users. These tools 116 will typically include tools specific to the functions performed. In this way, tools 116 may include a set of tools for authors, a set of tools for videographers, a set of tools for editing, a set of tools for compilation, and other functions. The tools 116 may further provide access to other applications which are not included in the DMP 110 but are available through a networked connection, such as Internet 120. In some examples, participants are able to access external applications and tools, such as third party applications or Tools as a Service (TAS) applications, whereby tools 116 may interface with Application Programming Interfaces (APIs) seamlessly. In this way, the participant may select the feature or application desired, and tools 116 will set up the connection automatically and allow access to the application. - Users may access
tools 116 according to their role or identity, as well as according to the production arrangement. The tools may be provided as services or may be downloadable as widgets for use at the collaborator's computing or mobile device. The tools 116 may further provide interfaces and APIs to the user for interfacing with external devices, such as cameras, lighting equipment, sound equipment, digitizing devices, websites, other resources, and software programs. The tools module 116 may further provide drivers for control of external devices and software desired for use on the collaborative project. The tools module 116 maintains these various mechanisms and works in cooperation with the other modules within DMP 110, such as the authorization module 118, compilation module 112, and payment module 114. - The
compilation module 112, according to some embodiments, allows users to build the multimedia work by compiling the various components generated and contributed by each of the collaborative users. The compilation module 112 processes uploaded files and video to allow fast online processing. For instance, characters, scenes, shots within scenes, commands, dialogues, actions, and comments are created and included during the script writing process to build the video's initial structure. Such structure is automatically integrated into the video editor timeline. Comments may be part of the shot metadata that users, specifically actors and cameramen, can input to describe building blocks of elements used to create the scene, such as types of furniture, clothing, jewelry, accessories, and so forth, to enable viewers to select these items while watching the video to determine vendors selling these items online, in stores, or in nearby stores depending on the user's location. This embedded advertising becomes part of the revenue models for this novel web application. Furthermore, high-quality videos are converted to low-resolution files during the upload process to enable users to edit them on the fly, green or blue background screens are automatically removed, and videos are trimmed to assign each trimmed video file to its corresponding slot in the video editor timeline. After the video editing process is complete, the compiler renders the video to its original high-quality resolution for online, broadcast, or cable distribution. Information included in the script, such as characters, scenes, shots within scenes, commands, dialogues, actions, and comments, may be integrated with the video during the rendering process to provide keywords and descriptions that may be used to promote the video, associate relevant commercials and advertisements during viewing, and help search engines identify clips within the video. 
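The upload steps described above can be sketched as follows. This is a minimal illustration, not the platform's implementation: the function name, field names, proxy resolution, and the dictionary representation of the timeline are all assumptions.

```python
# Illustrative sketch of the compilation module's upload processing:
# make a low-resolution proxy for fast online editing, flag green/blue
# backgrounds for removal, trim to the scripted shot length, and link
# the clip to its pre-assigned slot in the video editor timeline.
def process_upload(clip, timeline):
    proxy = dict(clip)
    proxy["resolution"] = "480p"              # low-res proxy for on-the-fly editing
    proxy["remove_background"] = clip.get("background") in ("green", "blue")
    slot = timeline[clip["shot_id"]]          # slot created during script writing
    proxy["duration"] = min(clip["duration"], slot["length"])  # trim to the shot
    slot["clip"] = proxy                      # link clip into the timeline
    return proxy

# Hypothetical example: a 1080p green-screen take assigned to shot "s1_shot2".
timeline = {"s1_shot2": {"length": 1.8, "clip": None}}
clip = {"shot_id": "s1_shot2", "resolution": "1080p",
        "duration": 4.2, "background": "green"}
proxy = process_upload(clip, timeline)
```

In this sketch the original high-resolution file would be retained server-side so the compiler can render the final video back at full quality after editing, as the text describes.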
This data may be stored in a new format with the video data, or may be stored in a separate file mapped to the video data. A web application may include HTML and style sheet documents, which provide the graphics and look of the webpage, and which are downloaded to users' drives and cached. It may also include text files, which are validated by the browser, such as XML, Java, Flash, or other files. The authorization module 118 identifies users by identity, such as by roles or contribution, and applies rules for processing and enabling operations. The authorization module 118 assigns and monitors rights based on a processing scheme. In some embodiments the processing scheme is predetermined prior to starting a collaborative project or work. In some embodiments the processing scheme may be dynamically modified by an administrator. The authorization module 118 works in coordination with the payments module 114 to bill participants and to verify payment for each collaborative process according to the processing scheme. The payments may be based on collaboration parameters, such as by data content or by time used. Further, the payment module may enable a profit-sharing or other arrangement. The payments module 114 provides the payment status information to the authorization module 118; in response, the authorization module 118 may enable or prohibit users with respect to the various functions of the DMP 110. - The
DMP 110 may be further offered as a cloud service, such as Software as a Service (SaaS). In such an environment, the DMP 110 platform may upgrade the various modules without interruption or action by the users. The collaboration of users is then facilitated through the cloud service(s), enabling collaborators to work together asynchronously but with the most recent versions and information. The cloud service may access other information available through the Internet, and may also access proprietary databases of the collaborators. Where the service is provided as a platform or application in the cloud, the service is then available for easy access from any device having capability to access the Internet or network. The ability to collaborate from anywhere provides users with enhanced flexibility. Similarly, multiple users may decide to collaborate in real-time on complex scenes, layered storylines, or live feeds. - The
DMP 110 may be used for Internet productions and publications, such as video and TV applications available at sites on the web. The DMP 110 is configured for use and communication with Internet protocols. The DMP 110 may post or publish video content and monitor its use and viewing statistics. This information may be used as feedback in further development of a given project or as survey-type information for future projects. The DMP 110 may be used to create casting calls or review screenplay snippets. This may extend to film festivals for coordination and planning of events. - Individual films may be created on or provided to the
DMP 110, for review, scheduling and selection by a film review committee. In this scenario, the reviewers could provide critique and edits to the film, having ability to manipulate scene information. This is available as the project is configurable by theDMP 110. - In some examples, a
DMP 110 eliminates costly tools, equipment, and royalties by providing or recommending video capture kits with camera, microphone, green screen, lights, and so forth, as well as providing royalty-free stock footage and soundtracks. The DMP 110 enables asynchronous shots taped by actors to be assembled into a single shot within a scene, in accordance with script information, to provide streamlined production processes. The production process provides simple writing tools which expand an idea into a detailed screenplay. Further, the DMP 110 provides powerful editing tools to layer video elements, incorporate and modify video and audio elements, title and subscript scenes, and add effects and transitions into a high-quality video production. - In one example, social networking tools allow writers, producers, actors, cameramen, and artists to collaborate and share work at any stage using a computing or mobile device. Such a collaborative platform may be used to create videos including short videos of offbeat comedy skits, spoofs, training videos, commercials, infomercials, documentaries, and full-length movies. In some examples these collaborations may produce videos of short duration, less than ten minutes, or long durations. The collaborative platform accommodates multiple contributors. A producer, writers, editors, actors, cameramen, artists, musicians, sound engineers, and others may all participate and contribute at different stages of the video production. The roles of the participants may include producers, writers, actors, cameramen, engineers, editors, and so forth.
- In some embodiments, a producer is an authenticated owner of a particular production having ultimate control over its metadata, access rights, scene releases, and credits. The producer may post a call for writers, actors, cameramen, or others for the project. The producer selects and authenticates writers, actors, and other participants. Writers are authenticated users granted access to a page for editing the script, referred to as the Edit Script page, for a particular scene or all scenes in a production. There may be multiple writers for a single project. The writers may have a partition that allows them to collaborate among themselves prior to posting their writings for viewing, critique, and learning by others. Once the writings are so posted, an editor or producer will review, comment on, and revise the writings. A script may include characters, scenes, shots within scenes, commands, dialogues, actions, and comments. An editor is an authenticated user granted access to a page for editing the video, referred to as the Edit Video page, for a particular scene or all scenes in a production. The actors then act out the writings, or script; the actors are authenticated users having a defined character role in a particular scene and therefore are granted access to a page to upload clips, referred to as the Upload Clip page, for that scene. Actors may include celebrities providing cameos which may be integrated into the video project. An artist is an authenticated user given the task to generate background images and videos for given scenes when directors/editors cannot identify suitable ones in the application database. Engineers and musicians are authenticated users given the task to generate sound effects, video effects, and music for given scenes when directors/editors cannot identify suitable ones in the application database. Administrators are DMP personnel having access to certain editorial functions. 
Super Administrators are DMP technical personnel having access to user accounts and low-level functions, as well as having control to configure the DMP according to a processing scheme.
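The role structure above can be sketched as a simple rights table. The role and page names follow the description (Edit Script, Edit Video, Upload Clip), while the exact right identifiers and the payment gate are illustrative assumptions drawn from the authorization/payments coordination described earlier.

```python
# Hypothetical sketch: each authenticated role maps to the pages it may use,
# and access is additionally gated by the user's payment status, modeling
# the cooperation between the authorization and payments modules.
ROLE_RIGHTS = {
    "producer": {"edit_script", "edit_video", "upload_clip", "casting"},
    "writer":   {"edit_script"},
    "editor":   {"edit_video"},
    "actor":    {"upload_clip"},
}

def can_access(role, page, payment_ok=True):
    """Return True when the role may use the page and payment is current."""
    if not payment_ok:          # payments module reports delinquent status
        return False
    return page in ROLE_RIGHTS.get(role, set())
```

A dynamically modifiable processing scheme, as the text mentions, would amount to an administrator editing this table at run time rather than it being fixed at project start.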
- When a production is first created, its producer (or potentially the owner) has access to many functionalities, including multiple access rights, which they can also assign to other users. The access rights include:
-
- a) Script Viewing: ability to view scene scripts (can be public).
- b) Commenting: ability to comment on scenes.
- c) Script Writing: ability to create scenes and shots within scenes, edit their scripts and character roles, and add commands, dialogues, actions, and comments.
- d) Editing: ability to sequence uploaded clips and add effects, titles, and transitions within the editor.
- e) Upload: general file upload rights, which may include green or blue background removal, video trimming, and linking files to their corresponding slots within the video editor timeline.
- f) Casting: ability to assign users to character roles.
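The delegation described above may be sketched as follows, with right names mirroring items a) through f); the class and method names, and the rule that a granter can only delegate a right it already holds, are assumptions for illustration.

```python
# Hypothetical sketch of producer-controlled access rights: the producer
# starts with all six rights (a-f) and may grant subsets to other users.
RIGHTS = {"script_viewing", "commenting", "script_writing",
          "editing", "upload", "casting"}

class Production:
    def __init__(self, producer):
        # the producer holds every right when the production is created
        self.grants = {producer: set(RIGHTS)}

    def grant(self, granter, user, right):
        """Let a rights-holder delegate one of their rights to another user."""
        if right not in self.grants.get(granter, set()):
            raise PermissionError("granter does not hold this right")
        self.grants.setdefault(user, set()).add(right)

prod = Production("alice")                 # "alice" is a placeholder producer
prod.grant("alice", "bob", "script_writing")
```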
- The
DMP 110 supports a variety of processing functions, some of which are detailed below according to an example embodiment. - This function is based on the type of user and the currently selected element. Below are a few of the types of script elements supported:
-
- 1. Shot—a single camera angle
- i. Horizontal slider: angles from −90° (left) to 90° (right)
- ii. Vertical slider: angles from −90° (down) to 90° (up)
- iii. Depth of View slider: values −10 (wide angle) to 10 (closeup)
- iv. Transition to next shot (optional)
- v. Suggested length: auto checkbox allowing override of length field only if this scene has not yet had its video edited
- 2. Action—direction for movement of a single actor
- i. Character selection menu
- ii. Start position selector (clockface)
- iii. End position selector (optional)
- 3. Dialog—lines to be delivered by a single actor
- i. Character selection menu
- ii. Delivery extension field
- 4. Command and comment lines
- i. Placeholders for videos uploaded by social media users
- ii. Marketing material
- iii. User comments
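The script elements enumerated above may be modeled as simple data structures. The field names and defaults below are illustrative assumptions, not a prescribed format; the slider ranges follow the list (horizontal and vertical from −90° to 90°, depth of view from −10 to 10).

```python
# Hypothetical data model for script elements: Shot (camera-angle sliders),
# Action (movement of a single actor), and Dialog (a single actor's line).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Shot:
    horizontal: float = 0.0           # -90 (left) .. 90 (right) degrees
    vertical: float = 0.0             # -90 (down) .. 90 (up) degrees
    depth: int = 0                    # -10 (wide angle) .. 10 (closeup)
    transition: Optional[str] = None  # optional transition to next shot
    length: Optional[float] = None    # None models the "auto" checkbox

@dataclass
class Action:
    character: str
    start_pos: int                    # clockface position selector, 1..12
    end_pos: Optional[int] = None     # optional end position

@dataclass
class Dialog:
    character: str
    line: str
    delivery: str = ""                # delivery extension field

# example: the -14 degree, 1.8 second shot mentioned later in the text
shot = Shot(horizontal=-14.0, length=1.8)
```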
- Lighting settings may be set in a similar way, though they need not be the same as the actor/camera settings. 
FIG. 2 illustrates a table depicting actors' positions and angles with respect to their own camera/green screen and to each other. Such guidelines may be integrated with the script to facilitate the video production process. - In developing a production, a script writer may include additional fields to enable seamless integration with the video editor and to allow actors to easily determine how to shoot and time their videos.
FIG. 2 illustrates an example scenario of a frame 200 having multiple fields. A timeline track displaying information from the script alongside the actual clips being tied together may be used as a control, but moves in tandem with the actual timeline content as it is zoomed and scrolled (like the Ruler control). For instance, an editing panel may appear when a shot clip is selected in the timeline, offering the following elements: -
- 1. Background continue toggle allows for the background clip from the previous shot clip to just be continued
- 2. Background drop well; visual clips can be dragged here to indicate the background if the toggle is selected
- 3. Character menu lists characters appearing in the selected shot and controls content of the elements:
- a. File selection media browser displays just the takes uploaded by the character's user for this shot, so one can be selected
- b. Layer button set offers ability to send character frontwards, backwards, to the front or to the back
- c. Trim control allows trimming of selected file from beginning or end
- d. Offset control allows incremental resequencing of selected file
- e. Hue, saturation, contrast and brightness slider controls
- f. Position control allows character to be moved onscreen
- g. Resize control allows character to be sized onscreen
- The following describes the collaborative online video production application and its associated payment stream models. These new types of online payment streams are based on the application ecosystem, ranging from the collaborative environment, video content, talented users, and target audiences to partners. In some embodiments, the
payments module 114 calculates fees for accessing talents promoted by the application. Access may be by internal or external users/consumers. For instance, a producer may want to hire a video editor, script writer, and actors to manifest their vision for a production. The payments module 114 may further incorporate a payment transaction charge as a flat rate, one-time payment, royalties, or a full license to the application. Subscriptions may be implemented to provide different rates to groups and video production channels of relevance to the consumer. A reward program may be implemented by ranking videos and types of users. A reward program may consist of or be linked to points collected by users depending on their contributions and/or revenue generated by their videos. In one embodiment the DMP 110 matches users with each other or with the consumer, branding videos to further promote very successful (viral) videos. - The
DMP 110 may be used for engagement and interactivity with the audience, such as fans, sponsors, partners, and so forth. The system 100 further allows for partnerships with third party distributors, vendors, and services. The DMP further provides expanded access to a royalty-free stock content library and to effects, transitions, themes, and so forth. - Some embodiments implement transaction fees for payment transfers between accounts. Advertising may be displayed on the DMP site and in correspondence, with the ability to block ads on the site and in correspondence. Advertising returns may be applied by the
payments module 114 where site content is displayed or otherwise used on third party sites and services, and wherein the ability to retain or regain ownership of user content is provided through the DMP 110. Further, the DMP 110 may be used to account for and process hosting fees for podcast channels. - The following describes a
video production system 200, illustrated in FIG. 3, which distributes video production so as to satisfy requirements of collaboration among script writers, producers, actors, cameramen, stage artists, and musicians who are scattered all around the globe and may be unaware of each other's presence. In this embodiment, an online distributed multimedia production tool, referred to as a CRU or CRU tool, alleviates many of the video production challenges by opening up the video creation, production, and distribution process to a group of users, and may even open the process to the general public. The CRU tool democratizes the digital video process to empower a whole new generation of artists, writers, content, and markets by exponentially increasing the number of professional and amateur video creators and players contributing to the whole digital video content and economy. The CRU platform 212 includes a variety of elements and functionalities. As illustrated in FIG. 3, the system 200 includes multiple CRUs, a viewer interface 218, a customer interface 220, and an advertiser interface 222. The CRUs are further coupled to a production environment including a variety of elements and functions. - One production function is referred to as the
Script Dicer module 208, which enables script writers to enter their scenes, lines, and storyline in a creative, collaborative way to enable actors and producers/directors to seamlessly assemble the video. Such script dicing includes, but is not limited to, tagging/linking each scene, actor line, location, and time. - Another production functionality is the Actor Video/
Audio Captor module 204, where each participating actor is offered a toolkit used to homogenize the scenes. These kits may be provided under a variety of scenarios, including for a fee or as part of a complementary software development kit. Such a kit may include a green/blue/unicolor background screen, microphone, video capturing camera, and/or an illumination light source. Depending on the scene, actors may be given guidance on how to position the camera and illumination source. The actor toolkit may include a driver to seamlessly interface with the CRU cloud. - The Producer/
Director Control module 206 of the CRU platform enables a producer/director to integrate all video elements by associating actors, cameramen, background video or images, and music with each scene before final editing and production. - Another production function is the
music module 210, which enables a musician to upload, create, and edit a soundtrack suitable for the video scenes. It also includes a database of music tracks from which to select. Such music tracks may be labeled/tagged, and are not necessarily limited, by type, instrument, length, modularity, genre, and so forth. - Still another production functionality is the Background/
Stage module 202, which enables photographers, cameramen, artists, or amateurs to upload, create, and edit static images, animations, or videos suitable for scene backgrounds. It also includes a database of such material from which to select, such as when a unique background is not desired. Such background images/videos are labeled/tagged, but not necessarily limited, by type, day time, size, duration (such as for videos), modularity, and genre. Many factors are considered when combining actors' videos with background scenes to homogenize the video. For instance, lighting and camera angle are factors that are typically taken into consideration during the selection and integration process. The system 200 allows artists and amateurs to upload their images and videos using different angles or 360 degree viewing capabilities, as is the case with three-dimensional maps. - The
system 200, including CRU platforms and services, brings the collaborative video making experience to multiple people without requiring them to go through years of education and experience to penetrate the industry, and creates new industries based on the creativity and free exploration CRU users enjoy on an individual basis or collectively. - With the proliferation of social networks and video sharing and distribution sites, the
systems - A master user has the responsibility to invite participants, assign roles, and oversee content capturing and production processes. Each user is able to see all contents generated by users in real-time or archived, but only master note is capable of activating a subset of users to interact on given scenes of the video.
- In these collaborative systems, a set of tools may include a green/blue background, video/audio capturing mean such as video camcorders, software interface an drivers. The user interface is a Graphic User Interface (GUI) and hardware interfaces which are linked to the CRU.
-
FIG. 4 illustrates the various video elements according to one embodiment, where variables represent the parameters and features of the video. The video elements are specified by field structure 300 including background field 302, music selection field 304, and user feed fields 306, 308, 310. The Tb/Tm represent the type of background and music; the Gb/Gm represent the genre of the background and music; Db represents the day time of the background; Im represents instruments; Lb/Lm represent the duration of the background and music; rj represents position; tj represents a time stamp; Aj represents the angle and illumination of the jth actor. The type of background is identified by the variable Tb. Types of backgrounds include static images, such as different angles of office or restaurant areas, or video backgrounds, such as moving car or beach scenes. The type of music is represented by the variable Tm, such as suspense, cheerful, sad, or sound effects. The genre, represented by the variable G, may include comedy, drama, horror, action, documentary, newsfeed, storytelling, sports, social, or kids. The instrument(s) used in the audio are represented by I. The duration of the background scene or music is represented by the variable L. The position of the actor within a shot is represented as rj; and the time stamp is represented as tj. The angle and illumination of a jth actor, with respect to a reference, is represented as Aj. This scenario enables multiple users and allows these users to upload video files. - Editing, integrating, and rendering online video may be accomplished by reducing video quality during the upload process, using distributed servers that process and run specific or general editing, integrating, and rendering requests to recover original video quality. In one embodiment a CRU video editor includes a unique feature that dynamically adapts the video capturing and illumination angles of the different videos that will eventually be combined to create the final scene.
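The variables of FIG. 4 can be collected into a single structure, one possible encoding of the field structure 300. The field names track the variables in the text (Tb, Gb, Lb for the background; Tm, Gm, Im, Lm for the music; rj, tj, Aj per actor feed), while the example values are illustrative assumptions.

```python
# Hypothetical encoding of the FIG. 4 video-element variables.
from dataclasses import dataclass
from typing import List

@dataclass
class ActorFeed:
    r: int        # rj: position of the actor within the shot
    t: float      # tj: time stamp of the feed
    A: float      # Aj: angle/illumination of the jth actor vs. a reference

@dataclass
class VideoElements:
    Tb: str               # type of background (static image, video, ...)
    Gb: str               # genre of the background
    Lb: float             # duration of the background
    Tm: str               # type of music (suspense, cheerful, ...)
    Gm: str               # genre of the music
    Lm: float             # duration of the music
    Im: List[str]         # instruments used in the audio
    feeds: List[ActorFeed]

scene = VideoElements(Tb="static-office", Gb="comedy", Lb=12.0,
                      Tm="cheerful", Gm="comedy", Lm=12.0,
                      Im=["piano"],
                      feeds=[ActorFeed(r=2, t=0.0, A=-14.0)])
```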
- Various services may be offered using the CRU engine. Any user can initiate the video creation process, such as an amateur who can simply post their simple ideas. Such posting may also initiate an alert signal or message to script writers, directors, actors, cameramen, and musicians (other participants) interested in similar ideas to further advance the collaborative video process. Industry players looking to create commercials for their products can use the CRU to create competition among users to create a winning commercial.
- Advertisers of products and services having a relationship to a particular video theme or genre, or desiring to make a connection with a particular audience, are able to advertise their products or services, and act as participants. Incorporating an advertising function provides a revenue stream for video producers. The CRU platform may be provided as a free service to all users at all levels. In some embodiments, users may reach certain levels, such as an actor, script writer, musician, or director level, after they achieve a particular goal. In one scenario, the goal may reflect successful accumulation of a number of points. This may be based on the number of released videos from a given user's contributions.
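The point-based leveling mentioned above can be sketched as a simple accumulation rule. The weighting of contributions versus video revenue is an illustrative assumption; the text only states that points depend on both.

```python
# Hypothetical reward-point rule: points accrue from the number of a user's
# released contributions and from revenue generated by their videos.
def reward_points(contributions, video_revenue,
                  per_contribution=10, per_revenue_unit=0.5):
    """Weights are illustrative placeholders, not a defined rate card."""
    return contributions * per_contribution + video_revenue * per_revenue_unit

points = reward_points(contributions=12, video_revenue=200.0)
```

A level threshold (for example, reaching "script writer" level) would then just be a comparison of the accumulated points against a configured goal.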
- A CRU participant may advertise a video project on the social network(s), where their interest graph identifies potential participants. Social networks may also be used to advertise the video after completion. The CRU may incorporate its own video distribution channels as well as conventional hooks to social media. The CRU engine keeps track of CRU video activities and revenue regardless of where the videos reside.
- A CRU system may include an internal system to enable CRU users to monetize their contributions and develop a reputation within the CRU community. This will attract others and create groups of users active in the video production business.
FIG. 5 illustrates the functional building blocks of a Graphical User Interface (GUI) expressed in a home page 500. - In this embodiment, the home page presents a variety of different functionalities for users. A user may share an idea to solicit interest from script writers, Directors, Actors, cameramen, musicians, sound effect, visual effects, or background scenes and videos. A user may further insert a screenplay script manually or dynamically by uploading script files. The user may select a role, as in selection box 502. For example, a user may select a role as a director, actors, cameramen, sound engineer, score composer, or music content creator, or artist creating visual effects, background images, videos, and so forth.
- In one scenario, a director allocates roles based on the script and has the right to modify the script at his leisure while notifying other project members, who also may provide their inputs to the script for the director's review and acceptance or rejection. In this scenario, the director receives the modifications and additions, but has the right to modify the script so as to avoid simultaneous or conflicting changes. Each actor may have multiple insertion points in a script in a given film. Additionally, the script may include lines that will be eventually filled by the actor during video shoots. For example, a director may decide to shoot the same scene using different angles or facial expressions and then decide which ones to use during editing.
- A database of information may include multiple partitions, and is used for storing ideas, scripts, names of directors and actors, cameramen, sounds, visuals, which are selected in the : this will be transparent to users as some of them will have access to view and utilize other projects contents, such as after paying a higher subscription fee. This fee may be shared with other users who produced these videos. These elements are accessed by selection of the database selection box 504.
- On the homepage GUI 500, users can view cool videos, and then may be encouraged to either register or login to learn about how these videos are created. The homepage 500 further includes tools, accessed through a tool selection box 508. The tools are for development, editing, effects editing, publishing, and so forth.
- Casting agents may also be given the opportunity to register and login to view the actor's audition videos and are encouraged to give feedback. Casting agents interested in communicating directly with actors may be asked to pay a fee to access such a service. Such payment scheme may assign fees to be collected when actors purchase their video kits. The kits may be part of the tools, and provided as a development tool kit. The video creation process is presented in a linear fashion, where the users may follow a plan to build the video, or participants may add their portions asynchronously, allowing the video to develop through an iterative process.
-
FIG. 6 illustrates a user GUI 600 for inputting an idea for a project. The selection box 602 identifies what type of project to create, whether it is a new project or a continuation of an existing one. The user may also select from archived elements to configure a team to build a project. -
FIG. 7 illustrates a user GUI 700 for inputting script information. The script input box 702 may be an area where the user identifies the script specifics, or may be an area to upload a script created off-line. The script may be identified by a standard or agreed-upon format. -
FIG. 8 illustrates a user GUI 800 for inputting the director's instructions, guidance, and notes. The director creates a group of insertion points. As illustrated, the director's GUI 800 identifies a group Sn A1 Tn1 that refers to actor A1 in the nth scene or shot at start time Tn1. The group is a collection of points that the director has created that will be filled by video of actors which may be filmed later. Actors and other project members can view the project at any time but may have no rights to modify contents except for their own contributions. The director acts as the master participant and has higher authority and control than other participants. Master control portions 802 identify those areas that are used to implement the director's decisions. The director will specify the particular components for each scene, as well as the participants and their roles. Director and editor roles may be identical in this novel online application. -
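The director's insertion-point groups (such as Sn A1 Tn1) can be sketched as follows; the class, field, and file names are assumptions used only to illustrate how scene, actor, and start time identify a slot to be filled by later-filmed video.

```python
# Hypothetical model of a director-created insertion point: scene/shot
# index, actor identifier, and start time, filled later by the actor's clip.
class InsertionPoint:
    def __init__(self, scene, actor, start_time):
        self.scene, self.actor, self.start = scene, actor, start_time
        self.clip = None                 # empty until the actor uploads

    def fill(self, clip_file):
        self.clip = clip_file

# a group of points for scene 1: actor A1 at t=0.0, actor A2 at t=3.5
group = [InsertionPoint(1, "A1", 0.0), InsertionPoint(1, "A2", 3.5)]
group[0].fill("a1_scene1.mp4")           # hypothetical uploaded take
pending = [p for p in group if p.clip is None]
```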
FIG. 9 illustrates a user GUI 900 where the actors may respond to control instructions of the director. The actor acts as a slave to the directions of the director. The slave control portions 902 identify those areas that actors use to implement the director's instructions. - Actors use the recommended video kit elements to record the different videos assigned by the director. The GUI will guide them on where to position the light, camera, and other items such as a fan, eyeglasses, or an item in hand. As illustrated in
FIG. 9 , the user GUI 900 presents options to the actor to select a scene, sounds, and so forth. This illustrates the slave mode of the system, which allows actions in response to the master. For voice-over functions the user may play the video and add the voice when appropriate, such as in an animated project. - In one embodiment, illustrated in
FIG. 10 , a video project is put together to illustrate multiple video portions and application of the audio portions. Textual information may be provided to instruct actors or other participants. An “auto” button may be checked to allow the video editor to automatically adapt to uploaded video durations. Various fields may be used to label each shot as part of a given scene. Adjustment controls, such as horizontal, vertical, and depth sliders, may be used to provide actors with desired shooting angles. In the present example, the camera is shooting horizontally at a −14 degree angle and the shot duration is set to 1.8 seconds. The script dialogue, shots, and actions are embedded into the video editor. -
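The per-shot settings above (scene/shot labels, shooting angle, a fixed duration or the "auto" option) can be modeled as a small record. The field names are illustrative assumptions; only the angle, duration, and auto behavior come from the example in the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShotSpec:
    """Per-shot settings from the video editor described above."""
    scene: str
    shot: str
    horizontal_angle_deg: float = 0.0   # e.g. -14 in the example above
    duration_s: Optional[float] = None  # None means "auto" is checked

    def effective_duration(self, uploaded_duration_s: float) -> float:
        # with "auto" checked, the editor adapts to the uploaded clip;
        # otherwise the director's fixed duration wins
        if self.duration_s is None:
            return uploaded_duration_s
        return self.duration_s
```

For the example in the text, `ShotSpec(scene="Sn", shot="1", horizontal_angle_deg=-14.0, duration_s=1.8)` keeps the shot at 1.8 seconds regardless of the uploaded clip's length.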
FIG. 11 illustrates one example of a video file uploader 1100 equipped with functionality to modify a green screen or blue screen. As illustrated, the uploader 1100 includes various sliders and adjustment mechanisms. The uploader 1100 may be used to remove the green or blue background of an uploaded video. In one example, a user may drag a color square along the green shade spectrum or along the video itself, and in this way reduce, eliminate, or adjust the green/blue color of the background. The video file uploader 1100 is adapted to upload a user's files, where general files may be uploaded to video production general folders. These general files are added to the web application's general database. A user selects the destination of files associated with a shot to include them in the corresponding script section in the video timeline. A user is able to remove green and blue backgrounds of uploaded videos, trim videos to comply with the script, and adjust according to a timeline. A user is able to edit videos and may upload a video file that includes multiple shots while indicating the start and end times of each shot or scene. -
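The green/blue background removal can be sketched as a simple chroma-key pass over pixels. This is a minimal illustration, not the uploader's actual algorithm: the `tolerance` parameter stands in for the slider the user drags along the shade spectrum, and real implementations work on video frames, not tuple lists.

```python
def chroma_key(pixels, key="green", tolerance=1.3):
    """Return pixels with the keyed background made transparent.

    pixels: list of (r, g, b, a) tuples. tolerance plays the role of
    the GUI slider described above - a hypothetical parameter.
    """
    idx = 1 if key == "green" else 2  # channel to key out (g or b)
    out = []
    for r, g, b, a in pixels:
        channel = (r, g, b)[idx]
        # brightest of the other two channels (avoid divide-by-zero)
        others = max(1, max(c for i, c in enumerate((r, g, b)) if i != idx))
        if channel / others >= tolerance:
            out.append((r, g, b, 0))   # background: fully transparent
        else:
            out.append((r, g, b, a))   # foreground: keep as-is
    return out
```

Lowering `tolerance` keys out more of the background, which matches the "reduce, eliminate or adjust" behavior of dragging the color square.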
FIG. 12 illustrates the uploader GUI 1200 for storing the video. The uploader 1200 provides users with scene, shot, and character selection. The uploader GUI 1200 may include a variety of configurations, such as identifying the timing on a timeline for placement of the uploaded content. -
FIG. 13 illustrates a method for generating a multimedia project according to an example embodiment. The method 1300 starts with a new video idea or a master uploading or entering a script, operation 1302. The uploading may be done by a user initiating a project or may be in response to a request received from a director or other project initiator or owner. If the script is uploaded, the system works to extract script component information and apply this information to a timeline for the project. The extracted or entered components may include scenes, shots within scenes, commands, comments, characters, character assignments, dialogue, and character lines. The participant may then send out invitations to potential or desired participants, operation 1304. The invitations may be posted on a designated website, or may be sent out to individuals through email, a social network, a professional network, or other community communication network. These invitations may be to fill specific roles, such as characters, and may also be for technical editors, video editors, script writers, photographers, and other roles needed for collaboration on the project. Responses are received through the system, operation 1306. The participant may then select other participants from the responses received, operation 1308. The participant may request further information, similar to auditions, so as to complete the selection process. The process then assigns roles, operation 1310. The collaboration is then incorporated into the multimedia production environment. The script is effectively overlaid on a timeline, and characters per scene are placed at the time when their action occurs. This allows collaborators to add their contributions at the correct position in the project. In one embodiment, the script is tagged and the components each have a unique identifier. When other collaborators build and create content and contributions, the system tags these so that they are seamlessly added to the project.
In this way, video for a given scene or corresponding shots is uploaded and mapped into the project at the correct slot in the video editor timeline. In some embodiments, the user merely posts the contribution to the project and the system reads the contribution tags and incorporates it according to the tag. Tagging allows the system to automatically perform steps that were done manually in previous systems and solutions. This allows the system to incorporate script components into the video production environment or other multimedia production environment. - In these and other embodiments, a video production web application incorporates a collaborative environment providing invitations to participants, similar to a call for papers or reviews in an academic setting. The invitation may be provided to a designated group or to a general public audience. The master initiates a session by uploading or incorporating a script to the system, thus triggering an invitation mechanism to invite users to participate in the video application. The script may include characters, scenes, shots within scenes, commands, dialogues, actions, and comments. The invitation is sent to potential participants. This may involve sending an email to a user account, or to a social media contact. In one embodiment, the invitation is posted for review and acceptance by multiple potential participants, such as by posting on a social media site. For example, a director may assign the producer role to a video production owner, who then selects the crew from respondents. The producer then assigns roles to individual participants selected from the respondents. If there are no satisfactory respondents, then the producer or master may send out a specific invitation to one or more desired participants to fill a role.
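The tag-driven placement of contributions can be sketched as a lookup into timeline slots derived from the tagged script. The tag format shown (`"s1-sh1-A1"`, scene-shot-character) is a hypothetical convention; the patent specifies only that components carry unique identifiers.

```python
class Timeline:
    """Timeline whose slots are defined by the tagged script components."""

    def __init__(self, script_tags):
        # one empty slot per unique component identifier in the script
        self.slots = {tag: None for tag in script_tags}

    def incorporate(self, contribution):
        """Place an uploaded contribution by reading its tag, so no
        manual positioning is needed."""
        tag = contribution["tag"]
        if tag not in self.slots:
            raise KeyError("unknown script component: " + tag)
        self.slots[tag] = contribution["media_url"]
```

A contribution posted with a tag the script never defined is rejected, which keeps uploads aligned with the script-derived plan.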
-
FIG. 14A illustrates a script writer tool 1400 that includes modules for script file storage 1402, script component extraction and mapping 1404, character selection and role assignment 1406, instructions and settings 1408, timeline incorporation of the script components 1410, and editing the script 1412. The character selection includes both the original character creation as well as assignment of that character to a participant. The scenes may be a collection of video shots, or master-created scenes. The scene may specify the background, descriptions, and flavor of the scene. Technical directions may include the shots to take for a given scene and sequence, as well as camera angles, lighting specifics, and so forth. The script writer tool 1400 allows the master and other participants to add commands and comments to the various scenes, characters, and other instructions. Authenticated users may access the script in a file format. - The
script writer tool 1400 is used to create, edit, and modify the components of a script, such as action, command, and dialogue. The action describes the scene and motions, the command provides further instructions, while the dialogue provides the lines the characters speak. In one embodiment, the dialogue is displayed on the scene for adding audio after filming, as in karaoke videos. - The
script writer tool 1400 enables the script writer to format according to multiple aspects, such as to adjust the typeface/font, line spacing, and type area, the language, as well as to specify the pages per minute of screen time. This enables the script writer to adjust the script according to venue, such as for an American or European movie. The script writer may further edit the prose, such as to focus on audible and visual elements. The prose selected by the script writer provides explanations for the participants. - The
script writer tool 1400 may further include a storyboarding module to enable the script writer to develop a story line which can be translated into the final video scenes. The storyboard module may start with an editable template that enables the user to quickly build a story line, such as with drag-and-drop features, people, actions, and scenes. The storyboard module may be useful in creating an animated portion of a movie or an entire animated movie. - In one embodiment, the
script writer tool 1400 includes a digital rights management module 1420, which may incorporate multiple modules. A first module may be used to verify that the material incorporated into the script does not infringe the copyrighted material of others, such as by comparing against a database external to the script writer tool 1400. A second module may be used to apply a Digital Rights Management (DRM) security mechanism, such as encryption or other means. - The script
file storage unit 1402 stores the script created and uploaded by a writer, director, or other user with privileges allowing inputs to the script. The script file may be edited by multiple authorized collaborators. Each script includes a variety of components, such as characters, scenes, actions, background, music and audio information, and so forth. The script component extraction module 1404 identifies these components in the script file and uses this information to identify the roles that will be used to prepare the video film project. For example, the script component extraction module 1404 identifies a character, and then enables the director or casting director to select a collaborator to fill this role. The selected collaborator, or actor, is then given privileges which allow the collaborator to access the script, the character's lines, definition, and actions, as well as to upload their contributions. In this example, the actor's contribution may be a video of the actor acting out their lines. The script component extraction module 1404 identifies the time when the actor's lines are to occur in the video project. - The script
component extraction module 1404 creates various files for the components of the script file. These files are then used to compile the contributions of the various contributors into a final product. The script component extraction module 1404 works in coordination with the timeline incorporation module 1410, which receives the contributions of the collaborators and incorporates them into the timeline. In this way the script provides the plan for the video project. The components include characters, instructions, settings, and definitions, wherein the collaborators use the components to create their contributions. The received contributions are then implemented into the video project. - The
script writer tool 1400 enables collaborators to edit the script when the collaborator has editing privileges. The editing module 1412 enables such editing of the script file. There are a variety of ways for multiple collaborators to edit the script. In a first embodiment, the collaborator's edits are identified as changes to the script. The director may accept or reject the edits. The edits may be presented to multiple collaborators for group acceptance and discussion. Once accepted, the edits become part of the script. -
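The action/command/dialogue components that the script writer tool manages could be recovered from plain text with a small parser. The line conventions below (bracketed commands, uppercase speaker names followed by a colon) are assumptions for illustration; the patent does not fix a script file format.

```python
import re

def parse_script(text):
    """Split raw script text into the three component types described
    above: action, command, and dialogue."""
    components = {"action": [], "command": [], "dialogue": []}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            # e.g. "[CUT TO CLOSE-UP]" - a command for the crew
            components["command"].append(line[1:-1])
        elif re.fullmatch(r"[A-Z ]+:.*", line):
            # e.g. "ALICE: Where is everyone?" - a character's line
            name, _, words = line.partition(":")
            components["dialogue"].append((name.strip(), words.strip()))
        else:
            # everything else describes the scene and motions
            components["action"].append(line)
    return components
```

A real extraction module would also carry scene and shot boundaries so that each dialogue line can be assigned to a character role and a timeline position, as the modules 1404 and 1410 are described as doing.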
FIG. 14B illustrates an embodiment of a script intake module 1450 which receives the script creations and components from the script writer tool 1400 and extracts information from the script for distribution to the collaborators. This enables each participant to provide their portion of the movie while understanding the context and other components of the production. The script intake module 1450 includes a script component extractor 1452 and a dialogue extractor 1454, which extract the characters and dialogue from the script. These components are stored accordingly and role assignments are applied. For example, a main character is associated with that character's scenes and lines in the script. The participant selected as the main character will be authorized to access this information. The actor will further be able to upload their video and audio portions. A module 1456 applies the timing overlay to the script by coordinating the script to the timeline. The script intake module 1450 further distributes the script components, such as lines, timing, technical features, and so forth, to the collaborators. - The system adds the results of the
script writer tool 1400 to the video production environment, and adds scenes, shots, and characters to the video production page. FIG. 15 illustrates a video editor 1500, having modules for script and related information 1502, image, video, and audio file handling 1504, editing tools 1506, a timeline editor 1508, and a video viewing window 1510. A user may select a scene from the video production page to edit shots and to assemble the scene. The user may add transitions between scenes. The final video is rendered at its original video quality after all scenes are successfully assembled. An optional film stock module 1520 may be included to access film stock available either freely or for a fee. Such film stock may be incorporated into the movie. - As social media and mobile applications have exploded with the introduction of ever smarter smart phones, the present techniques of merging script information with video/audio project information in a collaborative environment are particularly valuable.
FIG. 16 illustrates a mobile device display screen 1600. The mobile application for the collaborative video production product provides a video display portion 1602, a timeline portion 1604 which corresponds to the video displayed, and a control portion 1606. The control portion 1606 may include a variety of controls, from drag-and-drop instructions that allow the user to edit the video by dragging control elements to the video, to social network interfaces that allow sharing of the video editing in real time. In one scenario the video or multimedia project is displayed for multiple users in real time. The collaborators may discuss the video using their mobile devices, or one or more collaborators may be using their PC or other computing or mobile device. In one embodiment a user having a mobile e-reader may send script or other information to other users from the e-reader. Some mobile devices have the capability to perform readability and other statistical calculations, which may be performed on the video project and then provided as feedback to other users. Still other embodiments may provide analysis and usage information which may be used to refine the video project, or to identify advertisers. In one embodiment, the collaborators access a third-party service which identifies images in the video project and matches these to brands and products. This information may be used to procure advertising revenue from these companies. Still further, the mobile application may connect to social media, allowing easy upload and presentation on the Internet and other applications. The collaborators may solicit feedback and suggestions from viewers to refine and improve the video. The mobile application may store the video project and associated data in a cloud environment, where multiple collaborators can access and edit the project.
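The scene assembly with transitions described for the video editor 1500 amounts to a placement computation over the timeline. The crossfade-style overlap below is an assumed transition model and the default length is arbitrary; the patent only states that transitions may be added between scenes.

```python
def assemble(scenes, transition=0.5):
    """Compute timeline placements for a final cut.

    scenes: list of (name, duration_seconds) pairs in running order.
    Adjacent scenes overlap by `transition` seconds (assumed crossfade).
    Returns (placements, total_runtime).
    """
    cut, t = [], 0.0
    for i, (name, duration) in enumerate(scenes):
        if i > 0:
            t -= transition  # start this scene inside the previous one
        cut.append((name, t))
        t += duration
    return cut, t
```

Once every slot on the timeline has its footage, rendering the final video at original quality is a separate export step, as the text notes.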
- In one embodiment, templates are provided on which a multimedia production may be built; for a horror movie, for example, the various scene selections may be provided, as well as character information, scary voices and noises, and links to information on this genre. Users may also build templates, such as for a series of movies or productions with a common theme, such as a science series. Educators may use the collaborative system to build projects with students, where the educator enters the script information, which may be narrative or textbook scripting such as for a documentary, and students access this information and add to the project. The end result is a multimedia presentation illustrating the concepts learned.
- Sportscasters may use such a system to incorporate footage taken by local photographers into nationwide or worldwide video feeds and other projects. The sportscaster provides a script identifying the information desired for the video or sportscast, identifying specific views from specific locations, footage of specific teams and players, and so forth, and sends out a request for participants. As local participants respond, they are able to send their video footage to the sportscaster, specifically identifying which information they are providing. The sportscaster does not have to go through the videos to position them manually in the film; they are already marked according to their location on the timeline according to the script. The editor then merely watches the films to select the one desired.
- When movie fans, amateurs, aspiring actors, cameramen, directors, editors, special effects artists, musicians, and so forth all join forces to create their own video production with unlimited freedom, a whole new generation of video content emerges. By including interaction with script writers during the production process, the script is seamlessly embedded into the video editor to balance tasks among the production team. The outcome is a diverse and global user community that includes students, writers, actors, cameramen, artists, filmmakers, musicians, educators, journalists, travelers, activists, sports enthusiasts, and bloggers - basically anyone who wants to create original video content. A variety of new types of partnership-based revenue are enabled by this novel collaborative online video production system. Actor kit vendors, such as companies selling video camcorders, green/blue screens, external microphones, and lighting, may use the collaborative system to enable the sale of their goods. Advertisers may advertise on the system for consumer goods, media sites, movie and TV releases, and events, specifically targeting the video creators, and may advertise on the resultant video. Service providers may include talent agencies, talent coaches, and art schools and programs. Industry productions may create commercial videos, host best-video competitions, as well as provide advertisements, announcements, tutorials, training materials, news feeds, and travel videos. Cable networks may license such an application to produce video ads and content.
- While various configurations and elements are illustrated and various apparatuses are configured in accordance with one or more features described in this disclosure, it is understood that many modifications and variations may be devised given the above description. The embodiments and examples set forth herein are presented to explain the present invention and its practical application and to thereby enable those skilled in the art to make and utilize the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purpose of illustration and example only. The description set forth is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching without departing from the spirit and scope of the following claims.
Claims (18)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/283,575 US8341525B1 (en) | 2011-06-03 | 2011-10-27 | System and methods for collaborative online multimedia production |
US13/679,893 US20130151970A1 (en) | 2011-06-03 | 2012-11-16 | System and Methods for Distributed Multimedia Production |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161493173P | 2011-06-03 | 2011-06-03 | |
US201161498944P | 2011-06-20 | 2011-06-20 | |
US201161514446P | 2011-08-02 | 2011-08-02 | |
US201161626654P | 2011-09-30 | 2011-09-30 | |
US13/283,575 US8341525B1 (en) | 2011-06-03 | 2011-10-27 | System and methods for collaborative online multimedia production |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/679,893 Continuation-In-Part US20130151970A1 (en) | 2011-06-03 | 2012-11-16 | System and Methods for Distributed Multimedia Production |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120311448A1 true US20120311448A1 (en) | 2012-12-06 |
US8341525B1 US8341525B1 (en) | 2012-12-25 |
Family
ID=47262667
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/283,575 Active 2031-11-09 US8341525B1 (en) | 2011-06-03 | 2011-10-27 | System and methods for collaborative online multimedia production |
Country Status (1)
Country | Link |
---|---|
US (1) | US8341525B1 (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120251083A1 (en) * | 2011-03-29 | 2012-10-04 | Svendsen Jostein | Systems and methods for low bandwidth consumption online content editing |
US10739941B2 (en) | 2011-03-29 | 2020-08-11 | Wevideo, Inc. | Multi-source journal content integration systems and methods and systems and methods for collaborative online content editing |
US20130002532A1 (en) * | 2011-07-01 | 2013-01-03 | Nokia Corporation | Method, apparatus, and computer program product for shared synchronous viewing of content |
US9003287B2 (en) * | 2011-11-18 | 2015-04-07 | Lucasfilm Entertainment Company Ltd. | Interaction between 3D animation and corresponding script |
US8756627B2 (en) * | 2012-04-19 | 2014-06-17 | Jumpercut, Inc. | Distributed video creation |
US11748833B2 (en) | 2013-03-05 | 2023-09-05 | Wevideo, Inc. | Systems and methods for a theme-based effects multimedia editing platform |
US11348616B2 (en) | 2013-11-26 | 2022-05-31 | Google Llc | Collaborative video editing in a cloud environment |
US9860578B2 (en) * | 2014-06-25 | 2018-01-02 | Google Inc. | Methods, systems, and media for recommending collaborators of media content based on authenticated media content input |
US9870581B1 (en) | 2014-09-30 | 2018-01-16 | Google Inc. | Content item element marketplace |
US9442906B2 (en) * | 2014-10-09 | 2016-09-13 | Wrap Media, LLC | Wrap descriptor for defining a wrap package of cards including a global component |
US20160103821A1 (en) | 2014-10-09 | 2016-04-14 | Wrap Media, LLC | Authoring tool for the authoring of wrap packages of cards |
US9600464B2 (en) * | 2014-10-09 | 2017-03-21 | Wrap Media, LLC | Authoring tool for the authoring of wrap packages of cards |
US10817583B2 (en) * | 2015-02-20 | 2020-10-27 | Disney Enterprises, Inc. | Systems and methods for non-linear content creation |
US9600803B2 (en) | 2015-03-26 | 2017-03-21 | Wrap Media, LLC | Mobile-first authoring tool for the authoring of wrap packages |
US9582917B2 (en) * | 2015-03-26 | 2017-02-28 | Wrap Media, LLC | Authoring tool for the mixing of cards of wrap packages |
US10452874B2 (en) | 2016-03-04 | 2019-10-22 | Disney Enterprises, Inc. | System and method for identifying and tagging assets within an AV file |
CN107105319A (en) * | 2017-06-06 | 2017-08-29 | 上海极链网络科技有限公司 | Resource module of a real-time interactive delivery system for live-streaming scenes |
CN107124636A (en) * | 2017-06-06 | 2017-09-01 | 上海极链网络科技有限公司 | Display module of a real-time interactive delivery system for live-streaming scenes |
US11683290B1 (en) * | 2020-07-15 | 2023-06-20 | Glossi, Inc. | System for producing e-commerce product videos |
US20230062137A1 (en) * | 2021-08-27 | 2023-03-02 | Logitech Europe S.A. | Method and apparatus for simultaneous video editing |
US20230216898A1 (en) * | 2022-01-05 | 2023-07-06 | On24, Inc. | Methods, Systems, And Apparatuses For Improved Content Creation And Synchronization |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060064644A1 (en) * | 2004-09-20 | 2006-03-23 | Joo Jin W | Short-term filmmaking event administered over an electronic communication network |
US20090094039A1 (en) * | 2007-10-04 | 2009-04-09 | Zhura Corporation | Collaborative production of rich media content |
US20090164902A1 (en) * | 2007-12-19 | 2009-06-25 | Dopetracks, Llc | Multimedia player widget and one-click media recording and sharing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7782363B2 (en) | 2000-06-27 | 2010-08-24 | Front Row Technologies, Llc | Providing multiple video perspectives of activities through a data network to a remote multimedia server for selective display by remote viewing audiences |
US7836389B2 (en) | 2004-04-16 | 2010-11-16 | Avid Technology, Inc. | Editing system for audiovisual works and corresponding text for television news |
US7590997B2 (en) | 2004-07-30 | 2009-09-15 | Broadband Itv, Inc. | System and method for managing, converting and displaying video content on a video-on-demand platform, including ads used for drill-down navigation and consumer-generated classified ads |
- 2011
- 2011-10-27: US application US13/283,575 granted as US8341525B1 (Active)
Cited By (175)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8888494B2 (en) | 2010-06-28 | 2014-11-18 | Randall Lee THREEWITS | Interactive environment for performing arts scripts |
US10642463B2 (en) | 2010-06-28 | 2020-05-05 | Randall Lee THREEWITS | Interactive management system for performing arts productions |
US9870134B2 (en) | 2010-06-28 | 2018-01-16 | Randall Lee THREEWITS | Interactive blocking and management for performing arts productions |
US9904666B2 (en) | 2010-06-28 | 2018-02-27 | Randall Lee THREEWITS | Interactive environment for performing arts scripts |
US9122656B2 (en) | 2010-06-28 | 2015-09-01 | Randall Lee THREEWITS | Interactive blocking for performing arts scripts |
US10360945B2 (en) | 2011-08-09 | 2019-07-23 | Gopro, Inc. | User interface for editing digital media objects |
US9171179B2 (en) * | 2011-12-19 | 2015-10-27 | J. Michael Miller | System and method for the provision of multimedia materials |
US20130159708A1 (en) * | 2011-12-19 | 2013-06-20 | J. Michael Miller | System and method for the provision of multimedia materials |
US9992556B1 (en) * | 2011-12-29 | 2018-06-05 | Amazon Technologies, Inc. | Automated creation of storyboards from screenplays |
US9106812B1 (en) * | 2011-12-29 | 2015-08-11 | Amazon Technologies, Inc. | Automated creation of storyboards from screenplays |
US11119635B2 (en) | 2012-03-06 | 2021-09-14 | Apple Inc. | Fanning user interface controls for a media editing application |
US10942634B2 (en) | 2012-03-06 | 2021-03-09 | Apple Inc. | User interface tools for cropping and straightening image |
US10936173B2 (en) | 2012-03-06 | 2021-03-02 | Apple Inc. | Unified slider control for modifying multiple image properties |
US10282055B2 (en) | 2012-03-06 | 2019-05-07 | Apple Inc. | Ordered processing of edits for a media editing application |
US10545631B2 (en) | 2012-03-06 | 2020-01-28 | Apple Inc. | Fanning user interface controls for a media editing application |
US20130238724A1 (en) * | 2012-03-06 | 2013-09-12 | Apple Inc. | Sharing images from image viewing and editing application |
US9591181B2 (en) * | 2012-03-06 | 2017-03-07 | Apple Inc. | Sharing images from image viewing and editing application |
US10552016B2 (en) | 2012-03-06 | 2020-02-04 | Apple Inc. | User interface tools for cropping and straightening image |
US11481097B2 (en) | 2012-03-06 | 2022-10-25 | Apple Inc. | User interface tools for cropping and straightening image |
US20130272673A1 (en) * | 2012-03-13 | 2013-10-17 | Lee Eugene Swearingen | System and method for guided video creation |
US9998722B2 (en) * | 2012-03-13 | 2018-06-12 | Tapshot, Inc. | System and method for guided video creation |
US11109117B2 (en) * | 2012-06-21 | 2021-08-31 | Amazon Technologies, Inc. | Unobtrusively enhancing video content with extrinsic data |
US20140047035A1 (en) * | 2012-08-07 | 2014-02-13 | Quanta Computer Inc. | Distributing collaborative computer editing system |
US20150380052A1 (en) * | 2012-12-12 | 2015-12-31 | Crowdflik, Inc. | Method and system for capturing, synchronizing, and editing video from a primary device and devices in proximity to the primary device |
US20170076751A9 (en) * | 2012-12-12 | 2017-03-16 | Crowdflik, Inc. | Method and system for capturing, synchronizing, and editing video from a primary device and devices in proximity to the primary device |
US10347288B2 (en) * | 2012-12-12 | 2019-07-09 | Crowdflik, Inc. | Method and system for capturing, synchronizing, and editing video from a primary device and devices in proximity to the primary device |
GB2525559A (en) * | 2013-02-27 | 2015-10-28 | Randall Lee Threewits | Interactive environment for performing arts scripts |
WO2014134282A1 (en) * | 2013-02-27 | 2014-09-04 | Randall Lee Threewits | Interactive environment for performing arts scripts |
US9251359B2 (en) | 2013-03-07 | 2016-02-02 | Nokia Technologies Oy | Method and apparatus for managing crowd sourced content creation |
EP2775703A1 (en) * | 2013-03-07 | 2014-09-10 | Nokia Corporation | Method and apparatus for managing crowd sourced content creation |
US10122982B2 (en) * | 2013-05-21 | 2018-11-06 | Sony Corporation | Post production replication of optical processing for digital cinema cameras using metadata |
US20140348489A1 (en) * | 2013-05-21 | 2014-11-27 | Sony Corporation | Post production replication of optical processing for digital cinema cameras using metadata |
US10347286B2 (en) * | 2013-07-25 | 2019-07-09 | Ssh Communications Security Oyj | Displaying session audit logs |
US10817842B2 (en) * | 2013-08-30 | 2020-10-27 | Drumwave Inc. | Systems and methods for providing a collective post |
US20150067058A1 (en) * | 2013-08-30 | 2015-03-05 | RedDrummer LLC | Systems and methods for providing a collective post |
US10008239B2 (en) * | 2013-09-08 | 2018-06-26 | Kayihan ERIS | System of automated script generation with integrated video production |
AU2021202992B2 (en) * | 2013-09-08 | 2022-09-08 | Kayihan ERIS | System of Automated Script Generation With Integrated Video Production |
US11039046B2 (en) * | 2013-09-08 | 2021-06-15 | Kayihan ERIS | System of automated script generation with integrated video production |
US20190141217A1 (en) * | 2013-09-08 | 2019-05-09 | Kayihan ERIS | System of automated script generation with integrated video production |
US20160225409A1 (en) * | 2013-09-08 | 2016-08-04 | Kayihan ERIS | System of automated script generation with integrated video production |
US20170134450A1 (en) * | 2013-11-11 | 2017-05-11 | Amazon Technologies, Inc. | Multiple stream content presentation |
US10315110B2 (en) | 2013-11-11 | 2019-06-11 | Amazon Technologies, Inc. | Service for generating graphics object data |
US10601885B2 (en) | 2013-11-11 | 2020-03-24 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
US10374928B1 (en) | 2013-11-11 | 2019-08-06 | Amazon Technologies, Inc. | Efficient bandwidth estimation |
US10778756B2 (en) | 2013-11-11 | 2020-09-15 | Amazon Technologies, Inc. | Location of actor resources |
US10347013B2 (en) | 2013-11-11 | 2019-07-09 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
US10257266B2 (en) | 2013-11-11 | 2019-04-09 | Amazon Technologies, Inc. | Location of actor resources |
US10097596B2 (en) * | 2013-11-11 | 2018-10-09 | Amazon Technologies, Inc. | Multiple stream content presentation |
US9712597B2 (en) | 2014-01-06 | 2017-07-18 | Htc Corporation | Media data processing method and non-transitory computer readable storage medium thereof |
EP2892015A1 (en) * | 2014-01-06 | 2015-07-08 | HTC Corporation | Media data processing method and non-transitory computer readable storage medium thereof |
US9754159B2 (en) | 2014-03-04 | 2017-09-05 | Gopro, Inc. | Automatic generation of video from spherical content using location-based metadata |
US9760768B2 (en) | 2014-03-04 | 2017-09-12 | Gopro, Inc. | Generation of video from spherical content using edit maps |
US10084961B2 (en) | 2014-03-04 | 2018-09-25 | Gopro, Inc. | Automatic generation of video from spherical content using audio/visual analysis |
US10042830B2 (en) * | 2014-05-07 | 2018-08-07 | Scripto Enterprises Llc. | Writing and production methods, software, and systems |
US20150324345A1 (en) * | 2014-05-07 | 2015-11-12 | Scripto Enterprises LLC | Writing and production methods, software, and systems |
WO2016005835A1 (en) * | 2014-07-10 | 2016-01-14 | Dna Paths Inc. | Aggregating and sharing an assembly of elements pertaining to a final creative work |
US9984293B2 (en) | 2014-07-23 | 2018-05-29 | Gopro, Inc. | Video scene classification by activity |
US10339975B2 (en) | 2014-07-23 | 2019-07-02 | Gopro, Inc. | Voice-based video tagging |
US11069380B2 (en) | 2014-07-23 | 2021-07-20 | Gopro, Inc. | Scene and activity identification in video summary generation |
US10074013B2 (en) | 2014-07-23 | 2018-09-11 | Gopro, Inc. | Scene and activity identification in video summary generation |
US11776579B2 (en) | 2014-07-23 | 2023-10-03 | Gopro, Inc. | Scene and activity identification in video summary generation |
US9792502B2 (en) | 2014-07-23 | 2017-10-17 | Gopro, Inc. | Generating video summaries for a video using video summary templates |
US10776629B2 (en) | 2014-07-23 | 2020-09-15 | Gopro, Inc. | Scene and activity identification in video summary generation |
US10643663B2 (en) | 2014-08-20 | 2020-05-05 | Gopro, Inc. | Scene and activity identification in video summary generation based on motion detected in a video |
US10192585B1 (en) | 2014-08-20 | 2019-01-29 | Gopro, Inc. | Scene and activity identification in video summary generation based on motion detected in a video |
US10192583B2 (en) | 2014-10-10 | 2019-01-29 | Samsung Electronics Co., Ltd. | Video editing using contextual data and content discovery using clusters |
US20160133294A1 (en) * | 2014-11-08 | 2016-05-12 | Wooshii Ltd | Video creation platform |
US9754624B2 (en) * | 2014-11-08 | 2017-09-05 | Wooshii Ltd | Video creation platform |
US10096341B2 (en) | 2015-01-05 | 2018-10-09 | Gopro, Inc. | Media identifier generation for camera-captured media |
US9734870B2 (en) | 2015-01-05 | 2017-08-15 | Gopro, Inc. | Media identifier generation for camera-captured media |
US10559324B2 (en) | 2015-01-05 | 2020-02-11 | Gopro, Inc. | Media identifier generation for camera-captured media |
US9966108B1 (en) | 2015-01-29 | 2018-05-08 | Gopro, Inc. | Variable playback speed template for video editing application |
US10186012B2 (en) | 2015-05-20 | 2019-01-22 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10817977B2 (en) | 2015-05-20 | 2020-10-27 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10395338B2 (en) | 2015-05-20 | 2019-08-27 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US11688034B2 (en) | 2015-05-20 | 2023-06-27 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10679323B2 (en) | 2015-05-20 | 2020-06-09 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10535115B2 (en) | 2015-05-20 | 2020-01-14 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10529052B2 (en) | 2015-05-20 | 2020-01-07 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US10529051B2 (en) | 2015-05-20 | 2020-01-07 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US11164282B2 (en) | 2015-05-20 | 2021-11-02 | Gopro, Inc. | Virtual lens simulation for video and photo cropping |
US9894393B2 (en) | 2015-08-31 | 2018-02-13 | Gopro, Inc. | Video encoding for reduced streaming latency |
US20170076752A1 (en) * | 2015-09-10 | 2017-03-16 | Laura Steward | System and method for automatic media compilation |
US10186298B1 (en) | 2015-10-20 | 2019-01-22 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US9721611B2 (en) | 2015-10-20 | 2017-08-01 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US10789478B2 (en) | 2015-10-20 | 2020-09-29 | Gopro, Inc. | System and method of providing recommendations of moments of interest within video clips post capture |
US10204273B2 (en) | 2015-10-20 | 2019-02-12 | Gopro, Inc. | System and method of providing recommendations of moments of interest within video clips post capture |
US11468914B2 (en) | 2015-10-20 | 2022-10-11 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US10748577B2 (en) | 2015-10-20 | 2020-08-18 | Gopro, Inc. | System and method of generating video from video clips based on moments of interest within the video clips |
US11238520B2 (en) | 2016-01-04 | 2022-02-01 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content |
US10423941B1 (en) | 2016-01-04 | 2019-09-24 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content |
US9761278B1 (en) * | 2016-01-04 | 2017-09-12 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content |
US10095696B1 (en) | 2016-01-04 | 2018-10-09 | Gopro, Inc. | Systems and methods for generating recommendations of post-capture users to edit digital media content field |
US11049522B2 (en) | 2016-01-08 | 2021-06-29 | Gopro, Inc. | Digital media editing |
US10607651B2 (en) | 2016-01-08 | 2020-03-31 | Gopro, Inc. | Digital media editing |
US10109319B2 (en) | 2016-01-08 | 2018-10-23 | Gopro, Inc. | Digital media editing |
US9812175B2 (en) | 2016-02-04 | 2017-11-07 | Gopro, Inc. | Systems and methods for annotating a video |
US11238635B2 (en) | 2016-02-04 | 2022-02-01 | Gopro, Inc. | Digital media editing |
US10083537B1 (en) | 2016-02-04 | 2018-09-25 | Gopro, Inc. | Systems and methods for adding a moving visual element to a video |
US10769834B2 (en) | 2016-02-04 | 2020-09-08 | Gopro, Inc. | Digital media editing |
US10565769B2 (en) | 2016-02-04 | 2020-02-18 | Gopro, Inc. | Systems and methods for adding visual elements to video content |
US10424102B2 (en) | 2016-02-04 | 2019-09-24 | Gopro, Inc. | Digital media editing |
KR102056998B1 (en) * | 2016-03-14 | 2019-12-17 | 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 | Matching method for partners in joint performance video, and terminal, and computer readable storage medium |
JP2019506035A (en) * | 2016-03-14 | 2019-02-28 | テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド | Matching method and terminal of partner in co-star video and computer-readable storage medium |
US10380427B2 (en) * | 2016-03-14 | 2019-08-13 | Tencent Technology (Shenzhen) Company Limited | Partner matching method in costarring video, terminal, and computer readable storage medium |
EP3432156A4 (en) * | 2016-03-14 | 2019-01-23 | Tencent Technology (Shenzhen) Company Limited | Matching method for partners in costarring video, and terminal and computer-readable storage medium |
US10628677B2 (en) | 2016-03-14 | 2020-04-21 | Tencent Technology (Shenzhen) Company Limited | Partner matching method in costarring video, terminal, and computer readable storage medium |
US9972066B1 (en) | 2016-03-16 | 2018-05-15 | Gopro, Inc. | Systems and methods for providing variable image projection for spherical visual content |
US10740869B2 (en) | 2016-03-16 | 2020-08-11 | Gopro, Inc. | Systems and methods for providing variable image projection for spherical visual content |
US11398008B2 (en) | 2016-03-31 | 2022-07-26 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US10817976B2 (en) | 2016-03-31 | 2020-10-27 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US10402938B1 (en) | 2016-03-31 | 2019-09-03 | Gopro, Inc. | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture |
US10341712B2 (en) | 2016-04-07 | 2019-07-02 | Gopro, Inc. | Systems and methods for audio track selection in video editing |
US9838731B1 (en) | 2016-04-07 | 2017-12-05 | Gopro, Inc. | Systems and methods for audio track selection in video editing with audio mixing option |
US9794632B1 (en) | 2016-04-07 | 2017-10-17 | Gopro, Inc. | Systems and methods for synchronization based on audio track changes in video editing |
US11470335B2 (en) | 2016-06-15 | 2022-10-11 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US10250894B1 (en) | 2016-06-15 | 2019-04-02 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US9922682B1 (en) | 2016-06-15 | 2018-03-20 | Gopro, Inc. | Systems and methods for organizing video files |
US10645407B2 (en) | 2016-06-15 | 2020-05-05 | Gopro, Inc. | Systems and methods for providing transcoded portions of a video |
US9998769B1 (en) | 2016-06-15 | 2018-06-12 | Gopro, Inc. | Systems and methods for transcoding media files |
US10045120B2 (en) | 2016-06-20 | 2018-08-07 | Gopro, Inc. | Associating audio with three-dimensional objects in videos |
US10185891B1 (en) | 2016-07-08 | 2019-01-22 | Gopro, Inc. | Systems and methods for compact convolutional neural networks |
US10812861B2 (en) | 2016-07-14 | 2020-10-20 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US10469909B1 (en) | 2016-07-14 | 2019-11-05 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US11057681B2 (en) | 2016-07-14 | 2021-07-06 | Gopro, Inc. | Systems and methods for providing access to still images derived from a video |
US10395119B1 (en) | 2016-08-10 | 2019-08-27 | Gopro, Inc. | Systems and methods for determining activities performed during video capture |
US9836853B1 (en) | 2016-09-06 | 2017-12-05 | Gopro, Inc. | Three-dimensional convolutional neural networks for video highlight detection |
US10268898B1 (en) | 2016-09-21 | 2019-04-23 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video via segments |
US10282632B1 (en) | 2016-09-21 | 2019-05-07 | Gopro, Inc. | Systems and methods for determining a sample frame order for analyzing a video |
US10354008B2 (en) * | 2016-10-07 | 2019-07-16 | Productionpro Technologies Inc. | System and method for providing a visual scroll representation of production data |
WO2018071557A1 (en) * | 2016-10-12 | 2018-04-19 | Lr Acquisition, Llc | Media creation based on sensor-driven events |
US10002641B1 (en) | 2016-10-17 | 2018-06-19 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10923154B2 (en) | 2016-10-17 | 2021-02-16 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10643661B2 (en) | 2016-10-17 | 2020-05-05 | Gopro, Inc. | Systems and methods for determining highlight segment sets |
US10284809B1 (en) | 2016-11-07 | 2019-05-07 | Gopro, Inc. | Systems and methods for intelligently synchronizing events in visual content with musical features in audio content |
US10560657B2 (en) | 2016-11-07 | 2020-02-11 | Gopro, Inc. | Systems and methods for intelligently synchronizing events in visual content with musical features in audio content |
US10546566B2 (en) | 2016-11-08 | 2020-01-28 | Gopro, Inc. | Systems and methods for detecting musical features in audio content |
US10262639B1 (en) | 2016-11-08 | 2019-04-16 | Gopro, Inc. | Systems and methods for detecting musical features in audio content |
US10534966B1 (en) | 2017-02-02 | 2020-01-14 | Gopro, Inc. | Systems and methods for identifying activities and/or events represented in a video |
US10339443B1 (en) | 2017-02-24 | 2019-07-02 | Gopro, Inc. | Systems and methods for processing convolutional neural network operations using textures |
US10776689B2 (en) | 2017-02-24 | 2020-09-15 | Gopro, Inc. | Systems and methods for processing convolutional neural network operations using textures |
US10127943B1 (en) | 2017-03-02 | 2018-11-13 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10679670B2 (en) | 2017-03-02 | 2020-06-09 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10991396B2 (en) | 2017-03-02 | 2021-04-27 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US11443771B2 (en) | 2017-03-02 | 2022-09-13 | Gopro, Inc. | Systems and methods for modifying videos based on music |
US10185895B1 (en) | 2017-03-23 | 2019-01-22 | Gopro, Inc. | Systems and methods for classifying activities captured within images |
US11282544B2 (en) | 2017-03-24 | 2022-03-22 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US10083718B1 (en) | 2017-03-24 | 2018-09-25 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US10789985B2 (en) | 2017-03-24 | 2020-09-29 | Gopro, Inc. | Systems and methods for editing videos based on motion |
US10187690B1 (en) | 2017-04-24 | 2019-01-22 | Gopro, Inc. | Systems and methods to detect and correlate user responses to media content |
US10395122B1 (en) | 2017-05-12 | 2019-08-27 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10614315B2 (en) | 2017-05-12 | 2020-04-07 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10817726B2 (en) | 2017-05-12 | 2020-10-27 | Gopro, Inc. | Systems and methods for identifying moments in videos |
US10614114B1 (en) | 2017-07-10 | 2020-04-07 | Gopro, Inc. | Systems and methods for creating compilations based on hierarchical clustering |
US10402698B1 (en) | 2017-07-10 | 2019-09-03 | Gopro, Inc. | Systems and methods for identifying interesting moments within videos |
US10402656B1 (en) | 2017-07-13 | 2019-09-03 | Gopro, Inc. | Systems and methods for accelerating video analysis |
CN108268432A (en) * | 2017-12-14 | 2018-07-10 | 中央电视台 | A kind of conversion method and device of program project file |
US10834478B2 (en) * | 2017-12-29 | 2020-11-10 | Dish Network L.L.C. | Methods and systems for an augmented film crew using purpose |
US10453496B2 (en) * | 2017-12-29 | 2019-10-22 | Dish Network L.L.C. | Methods and systems for an augmented film crew using sweet spots |
US11398254B2 (en) | 2017-12-29 | 2022-07-26 | Dish Network L.L.C. | Methods and systems for an augmented film crew using storyboards |
US20190208287A1 (en) * | 2017-12-29 | 2019-07-04 | Dish Network L.L.C. | Methods and systems for an augmented film crew using purpose |
US20190206439A1 (en) * | 2017-12-29 | 2019-07-04 | Dish Network L.L.C. | Methods and systems for an augmented film crew using storyboards |
US11343594B2 (en) | 2017-12-29 | 2022-05-24 | Dish Network L.L.C. | Methods and systems for an augmented film crew using purpose |
US10783925B2 (en) * | 2017-12-29 | 2020-09-22 | Dish Network L.L.C. | Methods and systems for an augmented film crew using storyboards |
US11205458B1 (en) * | 2018-10-02 | 2021-12-21 | Alexander TORRES | System and method for the collaborative creation of a final, automatically assembled movie |
US11200910B2 (en) | 2019-06-28 | 2021-12-14 | International Business Machines Corporation | Resolution of edit conflicts in audio-file development |
US11481238B2 (en) * | 2019-08-07 | 2022-10-25 | Vineet Gandhi | Methods and systems of automatic one click virtual button with AI assist for DIY animation |
CN112218102A (en) * | 2020-08-29 | 2021-01-12 | 上海量明科技发展有限公司 | Video content package making method, client and system |
US20220150294A1 (en) * | 2020-11-10 | 2022-05-12 | At&T Intellectual Property I, L.P. | System for socially shared and opportunistic content creation |
US11861535B2 (en) * | 2020-12-15 | 2024-01-02 | Dubedub Co., Ltd. | Method and system for online contents registration and transaction based on user active selection |
US20220337638A1 (en) * | 2021-04-20 | 2022-10-20 | Lakshminath Reddy Dondeti | System and method for creating collaborative videos (collabs) together remotely |
US20230004283A1 (en) * | 2021-06-30 | 2023-01-05 | At&T Intellectual Property I, L.P. | System for fan-based creation and composition of cross-franchise content |
WO2023005194A1 (en) * | 2021-07-29 | 2023-02-02 | 北京达佳互联信息技术有限公司 | Video generating method and electronic device |
US11800186B1 (en) * | 2022-06-01 | 2023-10-24 | At&T Intellectual Property I, L.P. | System for automated video creation and sharing |
ES2924782A1 (en) * | 2022-06-08 | 2022-10-10 | Digital Stock Next S L | Procedure and digital platform for the online creation of audiovisual production content (machine translation by Google Translate, not legally binding) |
Also Published As
Publication number | Publication date |
---|---|
US8341525B1 (en) | 2012-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8341525B1 (en) | System and methods for collaborative online multimedia production | |
US20130151970A1 (en) | System and Methods for Distributed Multimedia Production | |
US10936168B2 (en) | Media presentation generating system and method using recorded splitscenes | |
US11862198B2 (en) | Synthesizing a presentation from multiple media clips | |
US20190281334A1 (en) | Systems and methods of customized television programming over the internet | |
US8006189B2 (en) | System and method for web based collaboration using digital media | |
US20150302893A1 (en) | Storyboard-directed video production from shared and individualized assets | |
Vihavainen et al. | We want more: human-computer collaboration in mobile social video remixing of music concerts | |
WO2007146111A2 (en) | Systems and methods of customized television programming over the internet | |
Sutherland et al. | Producing Videos that Pop | |
Vannini | Ethnographic film | |
Oleszkowicz-Gałka et al. | Introduction to the Language and Economics of Film and Television Production | |
Cecil et al. | Video Production | |
JP2023546754A (en) | Conversion of text into dynamic video objects | |
Thomas | Producing a Scheduled TV Magazine Based on Voluntary Student Work: Metro TV | |
Van Tassel et al. | Managing the Production Process | |
Houshangi | Film Production Studio in Finland: Business plan of Sleeping Panda Films | |
Porter | Million dollar movies: art meets commerce in a unique filmmaking competition that embraces the concept of branded entertainment | |
King | The key lessons learnt from producing the ABC programme Talking Heads a talk show/documentary hybrid in a fast turnaround environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: STARSVU CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ACHOUR, MAHA;ACHOUR, SAMY;ANARINO, DOUGLAS;SIGNING DATES FROM 20111101 TO 20111103;REEL/FRAME:027217/0389 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: ACHOUR, MAHA, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STARSVU CORPORATION;REEL/FRAME:034984/0350 Effective date: 20141226 |
|
REMI | Maintenance fee reminder mailed |
FPAY | Fee payment |
Year of fee payment: 4 |
|
SULP | Surcharge for late payment |
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 8 |