FR3044815A1 - Video editing method by selecting timeline moments - Google Patents


Info

Publication number
FR3044815A1
Authority
FR
France
Prior art keywords
video content
editing
extract
application
original video
Prior art date
Legal status
Pending
Application number
FR1561751A
Other languages
French (fr)
Inventor
Thierry Teyssier
Current Assignee
Actvt
Original Assignee
Actvt
Priority date
Filing date
Publication date
Application filed by Actvt filed Critical Actvt
Priority to FR1561751A priority Critical patent/FR3044815A1/en
Priority claimed from PCT/EP2016/079553 external-priority patent/WO2017093467A1/en
Publication of FR3044815A1 publication Critical patent/FR3044815A1/en
Application status: Pending

Classifications

    • G — PHYSICS
    • G11 — INFORMATION STORAGE
    • G11B — INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 — Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 — Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 — Electronic editing of digitised analogue information signals, e.g. audio or video signals

Abstract

A method of editing video content using a device, such as a mobile phone, having access to original video content and executing a video content editing application, wherein: - the user is allowed to select in the original video content, using the application executed on the device, at least one key moment, placing a time marker on said content, and - an edited video sequence is generated automatically from an extract of the original video content whose duration corresponds to a first predefined period of time before the time marker and to a second predefined period of time after this marker.

Description

Video editing method by selection of key moments

The present invention relates to methods for editing and managing video content using a device, such as a mobile phone, having access to original video content, and executing a video content editing application. The invention also relates to an application for the implementation of such methods.

US Patent 6,970,639 discloses a video content editing system using a preselected editing model and content metadata recognition tools, including among others a marker of an interesting moment in the content.

The application US 2006/0233514 describes a system for recording a marker associated with an interesting moment of a video content, in order to easily find the image corresponding to this marker.

Application EP 1 524 667 discloses the editing of video content according to very specific theme models, such as an anniversary or a wedding, to be selected beforehand by the user in order to obtain the desired effect.

The application WO 2000/039997 describes the editing of video contents using media data coming from databases, and models associated with these data.

The application WO 2006/065223 describes the automatic production of audiovisual content from several available sequences relating to the same subject, for example for journalists having several recordings of the same scene.

It is known from application EP 1 241 673 to propose a generic video content editing system, where the user can manually control certain parameters. No preview is possible.

The application WO 2013/150176 discloses a system for generating a video sequence from rushes provided by the user, using the selected parameters and the choices of the latter during a previous use. The video content editor of the GoPro brand contains templates made of cells, the user being able to assign a cell to each sequence to generate final video content.

Video content editing applications are known, such as Magisto or Replay, where editing is done by the device completely automatically, on the basis of arbitrary criteria; this allows no creative control and can strip the edited video of any rhythm. In addition, regardless of the size of the original video content, it is downloaded in its entirety onto the device, which can be time-consuming, and editing is done locally on the device. Basic use of strict editing templates does not automatically suit every video content and desired rendering. In addition, known editing applications using strict templates often require specific user actions to try to best fit a template to the original video content.

There is a need to allow a user to benefit from an easy-to-use video content editing application producing edited video sequences of high creative quality with low bandwidth usage. The invention aims to meet this need, and it does so, in one of its aspects, through a method of editing video content using a device, such as a mobile phone, having access to original video content and running a video content editing application, in which method: - the user is allowed to select in the original video content, using the application running on the device, at least one key moment, placing a time marker on said content, and - an edited video sequence is generated automatically from an extract of the original video content whose duration corresponds to a first predefined period of time before the time marker and to a second predefined period of time after this marker.
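The two-period extraction rule described above can be sketched in a few lines. This is a minimal illustration: the function name and the 25 s defaults are assumptions for the sketch, not values fixed by the method.

```python
# Sketch of the extraction rule: one tap places a marker, and the extract
# spans a first period t1 before the marker and a second period t2 after it.
# The 25 s defaults are illustrative only.

def extract_bounds(marker_s: float, t1_s: float = 25.0, t2_s: float = 25.0):
    """Return the (start, end) times, in seconds, of the extract around a marker."""
    return marker_s - t1_s, marker_s + t2_s

# A marker placed at 60 s yields a 50 s extract running from 35 s to 85 s.
start, end = extract_bounds(60.0)
```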

The edited video sequence is advantageously generated by a remote server to which the device is connected via a telecommunications network. The extract of the original video content may be transmitted by the application to the remote server at a definition lower than that of the original video content. This allows a quick feedback loop between the user and the application.

The video content editing method according to the invention preferably comprises the following steps, the apparatus having access to at least one original video sequence: the original video sequence is analyzed in order to extract predefined characteristics; a restricted list of possible editing models, selected from an editing template database, is established on the basis of said predefined characteristics of the original video sequence; and an edited video sequence is generated automatically by applying to the original video sequence the list of editing decisions specific to an editing model selected by the user from the restricted list established in the previous step.

The original video sequence advantageously corresponds to an extract of the original video content whose duration corresponds to a first predefined period of time before the time marker and to a second predefined period of time after this marker.

The original video sequence is advantageously analyzed by said remote server to which the device is connected through said telecommunications network, the edited video sequence being generated by the server.

The edited video sequence is advantageously published on the telecommunications network to which the device is connected, in particular via the application or the remote server. According to another of its aspects, another subject of the invention is a method for managing video contents in order to edit them, using a remote server of a telecommunications network and a video content editing application executable on an apparatus, such as a mobile telephone, intended to be connected to said remote server over the telecommunications network and having access to original video content, wherein: - the application transmits at least one preselected extract of the original video content to the remote server at a definition lower than that of the original video content, - the server applies at least one editing operation to the received preselected extract and generates an intermediate edited video sequence, - the server transmits this intermediate edited video sequence to the device, - the intermediate video sequence generated by the server is previewed on the device at a preview definition lower than the definition of the original video content, then - a final edited video sequence is generated by the server on the basis of the intermediate video sequence and made available on the telecommunications network at a publication definition greater than said lower definition of the preselected extract.

After the user accepts the intermediate edited video sequence, the final edited video sequence is made available through the application on the telecommunications network, in particular after its generation by the remote server at the publication definition. By "making available", it is to be understood that the final edited video sequence is made available for any use by the user, in particular a download for saving on the device, for example for later modification of the sequence using third-party software, a backup in a cloud, or publication for private or public sharing on one or more social networks, such as Facebook® or Instagram®. The video content editing application implementing the method according to the invention enables its users to select the important moments of their video contents and then to generate, automatically, a video sequence edited in a short format, ready to be shared on social networks.

Thanks to the invention, the user selects the content he wants to share and/or keep by placing only a time marker for each important moment to highlight. It is not necessary for the user to select the beginning and the end of the scene to be highlighted; the invention greatly limits the time the user spends selecting the desired rush. The invention aims at sharing moments, so the sequence to be highlighted is preferably short, for example less than 10 s. Establishing a restricted list of possible editing models, selected from an editing template database on the basis of predefined characteristics of the original video sequence, allows flexibility in the automatic generation of edited contents. The templates of the model database have been pre-established and are well adapted to many constraints of the available sequence, such as its length, the moments to highlight, or the rhythm of the chosen soundtrack. The use of previously established editing models makes it possible to bring the skills of experienced filmmakers into the application according to the invention, thus providing a high quality of editing.

The method according to the invention, by transmitting an extract at a lower definition for editing, also makes it possible to limit bandwidth costs. A quick transfer of the extract to be edited is possible. The final edited video sequence can be downloaded later, for example over a Wi-Fi wireless connection.

Key moment

Preferably, the user is allowed to select the key moment by placing the time marker via a single action, in particular a touch action. When the device has a touch screen, the time marker can be placed on the original video content by touching a predefined area of the screen.

In a variant where the apparatus comprises a microphone, the placement of the time marker on the original video content is triggered by means of a sound emitted by the user.

In yet another variant, the placement of the time marker on the original video content is triggered by pressing a key or button, for example on a keyboard or a mouse.

The first and second predefined periods of time may be of equal duration, the edited video sequence then being generated from an extract of the original video content centered on the time marker of the selected key moment.

The first and second predefined periods of time may each be between 5 s and 300 s, for example being equal to about 25 s. In the case where the original video content is not long enough to extract the full desired duration, for example if the key moment is chosen 2 seconds before the end of the original video content, the first and/or second period of time is advantageously shortened by the application to fit the available video content.
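The shortening rule for markers near the edges of the content can be sketched as follows; the names and 25 s defaults are illustrative assumptions.

```python
def clamped_extract_bounds(marker_s: float, content_s: float,
                           t1_s: float = 25.0, t2_s: float = 25.0):
    """Shorten the first and/or second period when the marker sits too close
    to the start or end of the original content of duration content_s."""
    start = max(0.0, marker_s - t1_s)
    end = min(content_s, marker_s + t2_s)
    return start, end

# Key moment chosen 2 s before the end of a 60 s video: t2 is cut to 2 s.
print(clamped_extract_bounds(58.0, 60.0))  # (33.0, 60.0)
```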

Several key moments can be selected by the user if necessary, the marker placement operation then being repeated several times, in particular between 2 and 50 times, for example 10 times.

The user can be allowed, using the application, to modify the temporal order of the original video content extracts corresponding to the different key moments selected to generate the edited video sequence.
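Reordering the extracts according to the user's choice is a simple permutation; this sketch (names assumed) keeps each extract intact and only changes their order.

```python
def reorder_extracts(extracts: list, order: list) -> list:
    """Return the extracts in the temporal order chosen by the user.
    `order` lists indices into `extracts`."""
    return [extracts[i] for i in order]

# Three key-moment extracts, replayed with the last one first.
print(reorder_extracts(["goal", "save", "celebration"], [2, 0, 1]))
```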

Editing templates

By "editing model" is meant a list of possible and adaptable editing decisions, for example: a list of possible effects among which one or more effects can be selected to produce the edited video sequence; a list of possible transitions between the selected extracts; the definition of the theoretical times allocated to each of the video sequences as a function of the number of extracts available, each of the theoretical times being adapted as a function of the duration of the selected extracts and/or of characteristics extracted from the extracts. An editing template may contain application-specific constraints: minimum available time in the original video content, minimum or maximum amount of motion, minimum or maximum brightness.

By "editing decision" is to be understood the set of parameters describing the application of an effect, transition or change of shot: for example the start and end times of each sequence, the type of effect, the time marking the beginning of the effect, the duration of this effect, the speed, the transparency, the colors chosen for this effect, possibly a text to display through the effect, or in general any parameter specific to the effect considered. The set of editing decisions makes it possible to describe, in a unique and reproducible way, a specific edit to perform.

The predefined characteristics to be extracted from the original video sequence are preferably technical characteristics of the video, such as in particular the resolution, the number of frames per second, the level of compression or the orientation, and / or are representative of video colorimetry, including brightness, saturation, contrast, or predominant colors.

As a variant or in combination, the predefined characteristics to be extracted from the original video sequence are representative of the movements present in the sequence, in particular their direction and their speed.

As a further variant or in combination, the predefined characteristics to be extracted from the original video sequence are representative of a division into scenes of the sequence.

The predefined characteristics to be extracted from the original video sequence may be representative of the sound present in the sequence, including the resolution, the sampling frequency and/or the level of compression of the sound, the general sound environment, or the recognition of particular sounds, for example one or more voices or pieces of music. The extraction of many predefined characteristics is not only used to filter the list of available models, but also to multiply the number of possibilities. The restricted list of possible editing models selected from the editing model database can be established by filtering said database, applying the specific application constraints defined for each model.
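The filtering of the template database by per-model constraints can be sketched as follows. The Template fields mirror the constraints mentioned above (minimum available time, amount of motion, brightness); all names, fields and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Template:
    name: str
    min_duration_s: float  # minimum available time in the original content
    max_motion: float      # maximum acceptable amount of motion (0..1)
    min_brightness: float  # minimum acceptable brightness (0..1)

def shortlist(templates, duration_s, motion, brightness):
    """Keep only the templates whose constraints the analyzed extract satisfies."""
    return [t for t in templates
            if duration_s >= t.min_duration_s
            and motion <= t.max_motion
            and brightness >= t.min_brightness]

templates = [
    Template("slow-motion dive", 20.0, 0.3, 0.2),
    Template("fast-cut ski jump", 10.0, 1.0, 0.4),
]
# A bright, fast 30 s extract rules out the slow-motion template.
print([t.name for t in shortlist(templates, 30.0, 0.8, 0.6)])
```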

The editing models present in the model database are preferably created beforehand, by describing and generalizing specific edits produced manually to reinforce a specific emotion.

The model base is advantageously accessible by the remote server and / or the device via the telecommunications network.

The editing decision list specific to an editing model advantageously comprises the desired start and end times of the video sequence, and/or a video playback speed mode, in particular accelerated or slowed, and/or the types of transitions in the sequence and the start and end times of each transition, and/or the types of effects to be included in the sequence, such as a change in colorimetry or an overlay of information, including text or still or animated image(s), and the start and end times of each effect, and/or parameters describing the associated video editing constraints, such as the minimum and maximum numbers of original sequences and the minimum and/or maximum duration of each sequence, and/or the definition of particular adaptation rules of the model, such as the modification of the start and end times of each sequence as a function of the speed of movement, the analysis of scenes, the changes of speed or direction of movement of the different objects identified in each sequence, and/or the duration of the sequences.
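A list of editing decisions such as the one enumerated above can be represented as a plain data structure; the class and field names below are illustrative, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class EditingDecision:
    kind: str       # "cut", "effect" or "transition"
    start_s: float  # time marking the beginning of the decision
    end_s: float    # time marking its end
    params: dict = field(default_factory=dict)  # parameters specific to the effect

# A unique, reproducible description of one specific edit:
# a slowed cut, a colorimetry effect with overlaid text, then a cross-fade.
decisions = [
    EditingDecision("cut", 0.0, 8.0, {"speed": 0.5}),
    EditingDecision("effect", 2.0, 5.0, {"type": "colorimetry", "text": "Title"}),
    EditingDecision("transition", 7.5, 8.0, {"type": "cross-fade"}),
]
```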

As a variant or in combination, the list of editing decisions specific to an editing model includes the addition of a soundtrack to the edited video sequence, associated in particular with acceptable cut points in said track.

The generation of the edited video sequence may take into account additional parameters entered by the user on the device, in particular by means of the video content editing application executable on the device: the kind of music desired, additional effect choices, colorimetric and/or stabilization filters to be applied to the original video sequence, or data to be embedded in the sequence, such as for example a title, a place name, a set of GPS coordinates, a GPS position, or metrics from sensors, such as speed, altitude, acceleration, depth, heart rate, elevation gain, or meteorological data.

The analysis for extracting predefined characteristics may be repeated for several original video sequences corresponding to extracts of the same original video content. The temporal order of the extracts for generating the edited video sequence is advantageously chosen by the user using the apparatus. Each extract is preferably analyzed separately. The list of editing decisions specific to the selected editing model is then advantageously applied to all the compiled extracts.

The editing decision list specific to an editing template may include an indication of how to link several extracts of the same original video content, including the relative duration between two extracts.

An intermediate edited video sequence may be generated by the remote server at a preview definition lower than the definition of the original video content from which the original video sequence originates, before the final edited video sequence is made available, at the end of the editing operations, on the telecommunications network at a publication definition greater than the preview definition.

Remote Server and Definitions

By "remote server" is to be understood a computer system, for example hosted at the provider of the video content editing service according to the invention, or outsourced to a data center provider. The remote server advantageously comprises the computer code necessary for the proper operation of the video content editing application according to the invention.

When several extracts of the original video content are selected and transmitted at the lower definition to the remote server so that at least one editing operation is applied to them, the application preferably transmits to the remote server the video sequences at the publication definition for the generation of the final edited video sequence. The lower-definition extract can be generated by the application by compressing the original extract to reach said lower definition.

The preview definition can be 5 to 20 times smaller than the original definition, preferably 10 to 15 times smaller.

The publication definition of the final edited video sequence can be 5 to 20 times greater than the preview definition, preferably 10 to 15 times greater.

The preview definition can be between 240p and 480p (number of lines), the original definition being notably between 720p and 4K, the latter corresponding to a resolution of 4096 × 2160 pixels. The preview definition is preferably a function of the total size of the extract to be transmitted, which is in particular less than 8 MB, better still less than 4 MB. The definition chosen for the extract to be transmitted may depend on the bit rate of the telecommunications network used. In addition, the preview definition is preferably a function of the definition of the original video content as well as of the available bit rate between the device and the server.
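The choice of a preview definition from the extract size budget and the available bit rate can be sketched as follows. The bits-per-line figures in `kbps_per_line` are crude illustrative assumptions, not values from the text; only the general shape (pick the largest standard definition fitting the budget) reflects the paragraph above.

```python
def preview_definition(original_lines: int, extract_s: float,
                       bitrate_kbps: float, max_size_mb: float = 8.0) -> int:
    """Pick the largest standard preview definition (240p..480p) whose rough
    size estimate fits the transfer budget and the available bit rate."""
    kbps_per_line = {240: 1.5, 360: 2.5, 480: 4.0}  # assumed rates, illustrative
    for lines in (480, 360, 240):
        if lines > original_lines:
            continue  # never preview above the original definition
        rate_kbps = kbps_per_line[lines] * lines
        size_mb = rate_kbps * extract_s / 8_000.0
        if size_mb <= max_size_mb and rate_kbps <= bitrate_kbps:
            return lines
    return 240  # fall back to the lowest preview definition

# A 50 s extract of 1080p content over a 2 Mbit/s link fits at 360p.
print(preview_definition(1080, 50.0, 2_000.0))
```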

The edited video sequence can be previewed several times at the preview definition before the final edited video sequence is made available.

The duration of the extract transmitted at the lower definition corresponds to a first predefined period of time before a time marker placed on the original video content through the application and designating a key moment selected beforehand by the user, and to a second predefined period of time after this marker.

Device

The device is preferably a smartphone. In variants, the device is a personal digital assistant, a tablet, or a computer, fixed or portable. The screen of the device is advantageously a touch screen. The apparatus may include a camera, the original video content having been recorded by the camera of the apparatus. Alternatively, the original video content comes from another camera and has been downloaded onto the device. In another variant, the original video content is available in the cloud and is downloaded on demand from the cloud. In another variant, the original video content comes from another camera accessible through a wireless network, for example Wifi®, and is downloaded on demand. The apparatus is programmed to allow the implementation of the methods according to the invention through the application.

Application

Another object of the invention is, according to another of its aspects, a video content editing application comprising code instructions executable on a device, such as a mobile phone, comprising a processor and a memory, these instructions, when executed, implementing the video content editing method according to the invention, the apparatus having access to original video content, the application being configured: - to place at least one time marker on said original video content corresponding to the selection by the user of a key moment, and - to enable the automated generation, via a possible remote server, of an edited video sequence from an extract of the original video content whose duration corresponds to a first predefined period of time before the time marker and to a second predefined period of time after this marker. When the apparatus has access to at least one original video sequence, the video content editing application is preferably configured to allow, via a possible remote server: - an analysis of the original video sequence to extract predefined characteristics, - the establishment of a restricted list of possible editing models chosen from a model database according to said predefined characteristics of the original video sequence, and - the automated generation of an edited video sequence, at least by applying to the original video sequence a list of editing decisions specific to an editing model selected by the user from the restricted list of models established in the previous step.
When the apparatus is intended to be connected to and to exchange data with a remote server over a telecommunications network, and has access to original video content, the application is preferably configured to download and/or transmit to the remote server at least one preselected extract of the original video content at a definition lower than that of the original video content, in order to allow the preview, at a preview definition lower than that of the original video content, of at least one video sequence edited by at least one editing operation applied by the remote server to the extract of the original video content transmitted by the application at said lower definition, before the final edited video sequence is made available on the telecommunications network at a publication definition greater than the preview definition.

The features defined above for the video content editing and management methods apply to the application. The application can be configured to be downloaded beforehand from an application market available from the device, for example an Android®, Apple® or Windows® application market. The application can also be supplied loaded on a computer-readable medium, for example a USB key, an SD card or a CD-ROM, or be pre-installed on the device by the manufacturer.

Data exchange method

Another object of the invention is, according to another of its aspects, a method of exchanging data between a remote server of a telecommunications network and a video content editing application executable on an apparatus, such as a mobile telephone, intended to be connected to said remote server over the telecommunications network and having access to original video content, for implementing the video content management method described above, wherein: - the application generates a preselected extract of the original video content at a definition lower than that of the original video content, in particular by compressing the original extract to reach said lower definition, - the application transmits the preselected extract at the lower definition to the remote server through the telecommunications network, - the remote server generates from the received extract an intermediate edited video sequence by applying at least one editing operation, - the application downloads from the server the intermediate edited video sequence at a preview definition for user preview, - after the user accepts the intermediate edited video sequence, the remote server generates the final edited video sequence at a publication definition greater than the preview definition, and - the server makes the final video sequence available at the publication definition, in particular to download, publish or save it on the telecommunications network.
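The client/server exchange enumerated above can be sketched as a small round-trip; the `Server` class stands in for the remote server, and all names are illustrative assumptions rather than an API defined by the patent.

```python
# Minimal sketch of the data exchange: compress, edit remotely, preview,
# accept, then finalize at the publication definition.

class Server:
    def edit(self, low_def_extract: str) -> str:
        """Apply editing operations and return an intermediate sequence."""
        return f"intermediate({low_def_extract})"

    def finalize(self, intermediate: str) -> str:
        """Re-render at the publication definition after user acceptance."""
        return f"final({intermediate})"

def exchange(original, server, compress, user_accepts):
    low = compress(original)              # 1. lower-definition extract
    intermediate = server.edit(low)       # 2-3. server edits and returns it
    if not user_accepts(intermediate):    # 4. preview on the device
        return None                       # user rejected: no final sequence
    return server.finalize(intermediate)  # 5-6. final sequence made available

result = exchange("V0", Server(), lambda v: f"low({v})", lambda v: True)
print(result)
```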

When several extracts of the original video content are selected and transmitted at a definition lower than that of the original video content by the application to the remote server so that at least one editing operation is applied to them, the application advantageously transmits to the remote server the video sequences at the publication definition for the generation of the final edited video sequence.

The characteristics defined above for the video content editing and management methods and for the application apply to the data exchange method. The invention will be better understood on reading the following description of non-limiting examples of implementation thereof, and on examining the appended drawing, in which: - Figure 1 schematically illustrates various steps in the implementation of the video content editing method according to the invention, - Figure 2 schematically illustrates various elements for the implementation of the video content editing method according to the invention, - Figure 3 represents a screenshot of the apparatus illustrating an example of operation of the video content editing application according to the invention, - Figure 4 represents the creation of an extract of the original video content according to the invention, - Figure 5 schematically illustrates different steps in the implementation of the video content editing method according to the invention, - Figure 6 shows a screenshot of the apparatus illustrating another example of operation of the video content editing application according to the invention, and - Figure 7 schematically illustrates different steps in the implementation of the data exchange method according to the invention.

FIG. 1 shows different steps of an exemplary video content editing method according to the invention, and FIG. 2 shows various elements enabling the implementation of the method.

In the example described, the method according to the invention uses a device 2, here a mobile phone or "smartphone", having access to original video content V0 and executing a video content editing application 4, and a remote server 3 to which the device 2 is connected via a telecommunications network.

In a step 11, the user chooses an original video content V0 that he wishes to edit and opens it in the video content editing application 4 according to the invention. During a step 12, the user selects in the original video content V0 a key moment, placing a time marker Mt on said content V0 through the application 4 executed on the device 2, via a single action.

In a step 13, the user can decide to select at least one other key moment, step 12 of placing the time marker Mt then being repeated.

In a step 14, the application 4 automatically generates an extract Ve of the original video content V0, the duration of which corresponds to a first predefined period of time t1 before the time marker Mt and to a second predefined period of time t2 after this marker Mt.

Preferably, and as in the example described, in a step 15, the application 4 transmits the extract Ve to the remote server 3 at a definition lower than the definition of the original video content V0. The server 3 advantageously applies at least one editing operation to the received extract Ve, at least by applying to it a list of editing decisions specific to an editing model selected by the user from a restricted list of models established according to predefined characteristics extracted from the extract Ve, and generates an intermediate edited video sequence Vi.

During a step 16, the server 3 transmits this intermediate edited video sequence Vi to the apparatus 2, where it is previewed at a preview definition Dp lower than the definition D0 of the original video content V0. The user can accept the rendering of this intermediate edited video sequence Vi or, in a step 17, choose another editing operation, in particular by selecting another editing model.

In a step 18, after the user accepts the intermediate edited video sequence Vi, a final edited video sequence Vf is generated by the server 3 on the basis of the sequence Vi, at a publication definition Dpu greater than said preview definition Dp.

In a step 19, the final edited video sequence Vf is made available on the telecommunications network at the publication definition, for example to be saved on the device 2, in particular for later modification of the sequence Vf using third-party software, to be stored in the cloud, or to be published for private or public sharing on one or more social networks.

In the example described, as shown in Figure 3, the time marker Mt is placed on the original video content V0 by touching a predefined area of the touch screen 2a of the apparatus 2, designated by the arrow. In a variant, not shown, the time marker Mt is placed on the original video content V0 by means of a sound emitted by the user, in the case where the apparatus 2 comprises a microphone 2b. In another variant, not illustrated, where the apparatus 2 is a computer, the placement of the time marker Mt on the original video content V0 is triggered by pressing a key, for example on the keyboard or the mouse of the device 2.

In the example shown in FIG. 4, the first and second predefined periods of time t1 and t2 are of equal duration, so the extract Ve is centered on the time marker Mt of the selected key moment.

As described above, in the case where several key moments are selected, the user, using the application, can modify the temporal order of the extracts corresponding to the different key moments selected to generate the final edited video sequence Vf.

Figure 5 describes in detail step 15 of Figure 1.

In the example described, during a step 21, the extract Ve is analyzed by the remote server 3 to extract predefined characteristics. As previously described, these predefined characteristics are technical characteristics of the video, and/or are representative of the movements present in the extract, and/or of the division of the extract into scenes, and/or of the sound present in the extract.

In a step 22, a restricted list of possible editing models selected from an editing model database is established based on said predefined characteristics of the extract Ve. In a step 23, as shown in FIG. 6, an editing model is selected by the user from the restricted list of models established in the preceding step, according to the desired effect, for example a fast or slow rhythm of the video sequence, or the presence of slow-motion or accelerated passages. The user can view multiple results from the restricted list of models, from which he can choose the result that suits him best, or modify the input parameters provided.
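The construction of the restricted list can be sketched as a filter over the model database keyed on the extracted characteristics (a minimal illustration only; the model records, the `motion_level` characteristic, and the thresholds are all hypothetical):

```python
# Hypothetical editing-model records: each model declares the range of
# motion it is suited to, mimicking the filtering of step 22.
MODELS = [
    {"name": "dynamic", "min_motion": 0.5},        # fast rhythm, accelerated
    {"name": "contemplative", "max_motion": 0.3},  # slow rhythm, slow motion
    {"name": "neutral"},                           # always applicable
]

def restricted_list(characteristics, models=MODELS):
    """Keep only the models compatible with the analysed extract Ve."""
    motion = characteristics["motion_level"]  # 0.0 (still) .. 1.0 (fast)
    result = []
    for m in models:
        if motion < m.get("min_motion", 0.0):
            continue  # content too slow for this model
        if motion > m.get("max_motion", 1.0):
            continue  # content too fast for this model
        result.append(m["name"])
    return result

print(restricted_list({"motion_level": 0.8}))  # ['dynamic', 'neutral']
```

A real implementation would combine several characteristics (scene cuts, sound, colorimetry); the single motion score keeps the sketch short.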

In a step 24, the server 3 automatically generates an edited video sequence at least by applying to the original video sequence a list of editing decisions specific to the editing model selected by the user, as previously described. In the case of slow video content, for example a dive session, the models applying slow motion are preferably discarded, as are those providing for the appearance of faces. Conversely, in the case of fast video content, for example ski jumping, the models applying acceleration are advantageously discarded. However, the user can view the results of other models offering this type of effect, in the context of models aimed at proposing edits described as more "committed" or "absurd". The generation of the edited video sequence may take into account additional parameters entered by the user on the application 4.
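The application of an edit decision list to an extract can be sketched on a list of frames (purely illustrative; the decision format and the speed semantics are assumptions, not the patent's own representation):

```python
def apply_edit_decisions(extract, decisions):
    """Apply a model's edit decision list to an extract.

    `extract` is a list of frames; each decision selects a frame range
    and a playback speed factor: a factor >= 1 drops frames to
    accelerate, a factor < 1 repeats frames to slow down.
    """
    out = []
    for d in decisions:
        frames = extract[d["start"]:d["end"]]
        if d["speed"] >= 1.0:
            step = int(d["speed"])
            out.extend(frames[::step])       # accelerated playback
        else:
            repeat = int(round(1.0 / d["speed"]))
            for f in frames:
                out.extend([f] * repeat)     # slow motion
    return out

frames = list(range(10))
edl = [{"start": 0, "end": 4, "speed": 2.0},   # accelerated passage
       {"start": 4, "end": 6, "speed": 0.5}]   # slow-motion passage
print(apply_edit_decisions(frames, edl))  # [0, 2, 4, 4, 5, 5]
```

Transitions, effects, and the soundtrack of claim 13 would be further entries in the same decision list.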

FIG. 7 illustrates various steps of data exchange between the apparatus 2 and the remote server 3, through the application 4.

During a step 31, the user chooses a key moment in an original video content V0, as described above. In a step 32, the application 4 generates an extract Ve from this key moment, and transmits, in a step 33, this extract to the remote server 3 at a definition Dp lower than the definition D0 of the original video content V0, compressing, in the example described, the original extract to reach this lower definition.
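Choosing the lower preview definition amounts to an aspect-ratio-preserving downscale; a minimal sketch (the target height and the even-dimension constraint are assumptions, common in practice because most video codecs require even frame dimensions):

```python
def preview_definition(width, height, target_height=360):
    """Compute a preview definition Dp lower than the original D0.

    Preserves the aspect ratio and rounds the width to an even value,
    as most video codecs require even frame dimensions.
    """
    if height <= target_height:
        return width, height  # already at or below the preview size
    scale = target_height / height
    new_w = int(round(width * scale / 2) * 2)
    return new_w, target_height

print(preview_definition(1920, 1080))  # (640, 360)
```

The same routine, with a larger target, could derive the publication definition Dpu used later in the method.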

Steps 31b and 33b are identical to steps 31 and 33 for at least one other extract also selected by the user. Editing operations can be applied to this extract, which can be included in the final edited video sequence.

In a step 34, the server 3 applies at least one editing operation to the received extract Ve and generates an intermediate edited video sequence Vi. The application 4, in a step 35, downloads from the server 3 the intermediate edited video sequence Vi at a preview definition Dp lower than the definition of the original video content.

In a step 36, the intermediate video sequence Vi generated by the server 3 is previewed on the apparatus 2 at the preview definition Dp.

During an optional step 37, the user modifies the generation parameters of the edited video sequence, in particular by changing the editing model. These parameters are transmitted, in a step 38, to the server 3 by the application 4.

In a step 39, the server 3 generates a second intermediate sequence, and transmits it, during the step 40, to the apparatus 2, which displays it in the step 41 through the application 4.

After acceptance by the user of the intermediate edited video sequence Vi in a step 42, the application 4 transmits this agreement to the server 3 during a step 43.

In the case where several extracts have been selected, the server 3 generates a list of the extracts necessary for the preparation of the final edited video sequence Vf, and transmits it to the application 4, according to steps 44 and 45. For each extract from this list, the time intervals t'1 and t'2 to be extracted around the moment Mt are preferably smaller than the time periods t1 and t2 used to generate Ve. This allows the server to offer different editing results, having a large range of sequence around the selected moment Mt. Once the final sequence is chosen, the server 3 preferably indicates to the application 4 which exact part of each sequence is required, and the application only converts to the publication definition the extracts strictly necessary for the final video sequence.
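The list of exact parts to re-encode at the publication definition can be sketched as follows (an illustration only; the clip record format and the safety margin are hypothetical):

```python
def required_spans(edit_result, margin_s=0.5):
    """From the chosen edit, list the exact part of each source extract
    that must be converted to the publication definition.

    `edit_result` holds, for each clip retained in the final sequence,
    its in/out points (seconds) within the source extract. A small
    margin is kept so transitions can overlap clip boundaries.
    """
    spans = []
    for clip in edit_result:
        spans.append({
            "extract": clip["extract"],
            "start": max(0.0, clip["in"] - margin_s),
            "end": clip["out"] + margin_s,
        })
    return spans

chosen = [{"extract": "Ve1", "in": 2.0, "out": 6.0},
          {"extract": "Ve2", "in": 0.0, "out": 3.0}]
print(required_spans(chosen))
```

Only these spans, rather than the full preview-time extracts, need to be uploaded at the higher definition Dpu in step 46.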

During a step 46, the application then generates all the extracts Vepu at the publication definition Dpu while respecting the time intervals t'1 and t'2 requested by the server 3. The application 4 transmits all the extracts Vepu to the server 3, according to a step 47.

The server 3 generates, on the basis of the intermediate video sequence Vi, the final edited video sequence Vf at a publication definition Dpu greater than the preview definition Dp, during a step 48. In the case where several extracts have been selected by the user and edited, the final edited video sequence Vf includes all of these extracts Vepu.

During a step 49, the server 3 transmits to the application 4 the final edited video sequence Vf at the publication definition Dpu, for provision on the telecommunications network.

Of course, the invention is not limited to the embodiments which have just been described, the characteristics of which can be combined within non-illustrated variants.

The method of exchanging data between an application and a remote server according to the invention can be used for surveillance purposes by video cameras, the preview at a lower definition making it possible in particular to easily choose the moment when an individual passes in front of the camera, in order to apply facial recognition only to this extract of the recording. Automatic scene detection can be used to improve the method of editing video content according to the invention, for example to detect scenes featuring people as opposed to wide landscape shots, the entry and/or exit of a subject during the video, or to detect the type of shot taken, for example a still shot, a moving shot, a shot taken with an on-board camera, or the speed of movement of the camera, or involuntary camera movements.

Claims (17)

  1. A method of editing video content using a device (2), such as a mobile phone, having access to an original video content (V0) and executing an application (4) for editing video content, in which method: - the user is allowed to select in the original video content (V0), using the application (4) executed on the device (2), at least one key moment, by placing a time marker (Mt) on said content (V0), and - an edited video sequence (Vi, Vf) is automatically generated, from an extract (Ve) of the original video content (V0) whose duration corresponds to a first predefined time (t1) before the time marker (Mt) and to a second predefined time (t2) after this marker.
  2. The method of claim 1, wherein the edited video sequence (Vi, Vf) is generated by a remote server (3) to which the apparatus (2) is connected through a telecommunications network.
  3. The method of claim 1 or 2, wherein the user is allowed to select the key moment by placing the time marker (Mt) via a single action, in particular a tactile action.
  4. The method of any one of claims 1 to 3, wherein, the apparatus (2) having a touch screen (2a), the time marker (Mt) is placed on the original video content (V0) by touching a predefined area of the screen (2a).
  5. The method of any one of claims 1 to 3, wherein, the apparatus (2) having a microphone (2b), the placement of the time marker (Mt) on the original video content (V0) is triggered by means of a sound emitted by the user.
  6. The method of any one of claims 1 to 3, wherein the placement of the time marker (Mt) on the original video content (V0) is triggered by a key press, in particular of a keyboard or a mouse.
  7. The method of any one of the preceding claims, wherein the first and second predefined periods of time (t1, t2) are of equal duration, the edited video sequence (Vi, Vf) being generated from an extract (Ve) of the original video content (V0) whose duration is centered on the time marker (Mt) of the selected key moment.
  8. The method of any one of the preceding claims, wherein the first and second predefined periods of time (t1, t2) are each between 5 s and 300 s.
  9. The method as claimed in claim 1, wherein a plurality of key moments are selected, the operation of placing the time marker (Mt) being repeated several times, in particular between 2 and 50 times.
  10. The method of the preceding claim, wherein the user, using the application (4), is allowed to modify the temporal order of the extracts (Ve) of the original video content (V0) corresponding to the different selected key moments, to generate the edited video sequence (Vi, Vf).
  11. The method of any one of the preceding claims, wherein, the apparatus (2) having access to a database of video content editing models, the extract (Ve) is analyzed to extract predefined characteristics, a restricted list of possible editing models selected from the model database is established based on said predefined characteristics of the extract, and a final edited video sequence (Vf) is automatically generated at least by applying to the extract (Ve) a list of editing decisions specific to an editing model selected by the user from the restricted list of models established in the previous step.
  12. The method of the preceding claim, wherein the predefined characteristics to be extracted from the extract (Ve) are technical characteristics of the video, such as in particular the resolution, the number of frames per second, the compression level or the orientation, and/or are representative of the colorimetry of the video, in particular the brightness, the saturation, the contrast, or the predominant colors, and/or are representative of the movements present in the extract (Ve), in particular their direction and their speed, and/or are representative of a division into scenes of the extract (Ve), and/or are representative of the sound present in the extract (Ve), in particular the resolution, the sampling frequency and/or the sound compression level, or the general sound environment, or the recognition of particular sounds, for example one or more voices or music.
  13. The method of any one of the two immediately preceding claims, wherein the edit decision list specific to an editing model includes the desired start and end times of the video sequence, and/or the playback speed of the video sequence, in particular accelerated or slowed motion, and/or the types of transitions in the sequence and the start and end times of each transition, and/or the types of effects to be included in the sequence, such as a change of colorimetry, an overlay of information, in particular text or still or animated image(s), and the start and end times of each effect, and/or parameters describing the associated video editing constraints, such as the minimum and maximum number of original sequences, the minimum and/or maximum duration of each extract (Ve), and/or the definition of particular adaptation rules of the model, such as the modification of the start and end times of each extract (Ve) according to the speed of the movement, the analysis of scenes, changes of speed or direction of the movements of the different objects identified in each extract (Ve), and/or the duration of the extracts (Ve), and/or comprises a soundtrack to be added to the edited video sequence, associated in particular with acceptable cutoff moments of said piece.
  14. The method of any one of the three immediately preceding claims, wherein the generation of the final edited video sequence (Vf) using the edit decision list specific to the selected editing model takes into account additional parameters entered by the user on the application, in particular the desired genre of music, additional effect choices, color and/or stabilization filters to be applied to the video sequence, or data to be embedded in the sequence, such as, for example, a title, a place name, a set of GPS coordinates, a GPS position, or metrics from sensors, such as, for example, speed, altitude, acceleration, depth, heart rate, elevation, or meteorological data.
  15. The method of any one of claims 2 to 14, wherein the extract of the original video content is transmitted by the application to the remote server at a definition lower than the definition of the original video content.
  16. The method as claimed in any one of the preceding claims, wherein the edited video sequence (Vf) is published on a telecommunications network to which the apparatus (2) is connected, in particular via the application (4) or the remote server (3).
  17. Application (4) for editing video content, comprising code instructions executable on a device (2), such as a mobile phone, comprising a processor and a memory, these instructions, when executed, allowing the implementation of the video content editing method according to one of the preceding claims, the apparatus (2) having access to an original video content (V0), the application (4) being configured: - to place at least one time marker (Mt) on said original video content (V0) corresponding to the selection by the user of a key moment, and - to enable the automated generation, via a possible remote server (3), of an edited video sequence (Vi, Vf), from an extract (Ve) of the original video content (V0) whose duration corresponds to a first predefined time (t1) before the time marker (Mt) and to a second predefined time (t2) after this marker.
FR1561751A 2015-12-02 2015-12-02 Video editing method by selecting timeline moments Pending FR3044815A1 (en)