CN111277905A - Online collaborative video editing method and device

Info

Publication number
CN111277905A
Authority
CN
China
Prior art keywords
editing
video
terminal
unit
module
Prior art date
Legal status
Pending
Application number
CN202010158728.2A
Other languages
Chinese (zh)
Inventor
徐常亮
吴伟平
廖健
施美红
梁双春
陈凌云
Current Assignee
Xinhua Zhiyun Technology Co ltd
Original Assignee
Xinhua Zhiyun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xinhua Zhiyun Technology Co ltd filed Critical Xinhua Zhiyun Technology Co ltd
Priority to CN202010158728.2A
Publication of CN111277905A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254 Management at additional data server, e.g. shopping server, rights management server
    • H04N21/2541 Rights Management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure relates to an online collaborative video editing method and device. The online collaborative video editing method comprises the following steps: receiving a video creation request from an editing terminal; responding to the video creation request, creating a target video and returning address information of the target video to the editing terminal; dividing the target video into a plurality of editing units, wherein editing authorities of different editing units can be opened to different editing terminals; receiving editing instructions of different editing terminals to different editing units; and responding to the editing instruction, and simultaneously carrying out video editing on different editing units.

Description

Online collaborative video editing method and device
Technical Field
The present disclosure relates to the field of video technology, though it is not limited to that field, and in particular to an online collaborative video editing method and an online collaborative video editing apparatus.
Background
Video is a multimedia technology that aggregates images, audio, and/or special effects. Video editing forms the video a user desires through operations such as arranging images, clipping, and sound-effect processing.
In the prior art, a user can create and edit a video on a mobile phone or a computer. However, editing is typically local and limited to a single user, which makes video editing inefficient. Collaboration requires forwarding the material and the partially edited work to other people, or even sending them the computer that stores the material and the partially edited work, so the communication cost is high.
In short, conventional video editing technology has problems such as low editing efficiency and a single, non-collaborative editing mode.
Disclosure of Invention
The disclosure provides an online collaborative video editing method and device.
A first aspect of the embodiments of the present application provides an online collaborative video editing method, which is applied to a cloud, and the online collaborative video editing method includes:
receiving a video creation request from an editing terminal;
responding to the video creation request, creating a target video and returning address information of the target video to the editing terminal;
dividing the target video into a plurality of editing units, wherein editing authorities of different editing units can be opened to different editing terminals;
receiving editing instructions of different editing terminals to different editing units;
and responding to the editing instruction, and simultaneously carrying out video editing on different editing units.
Based on the above scheme, the video editing for different editing units in response to the editing instruction further includes:
receiving an editing instruction of a first terminal in the editing terminals to a first unit in the editing units;
checking whether the first unit is in a locked state;
and if the first unit is in a locked state, being edited by a second terminal among the editing terminals, shielding the editing instruction of the first terminal.
Based on the above scheme, the method further comprises:
after an editing instruction of the first terminal is shielded, first prompt information is sent to the first terminal, wherein the first prompt information is used for prompting that the first unit is being edited by other editing terminals.
Based on the above scheme, the video editing for different editing units in response to the editing instruction further includes:
if the first unit is in an unlocked state, i.e., is not being edited by the second terminal, editing the first unit in response to the editing instruction of the first terminal;
and locking the first unit being edited by the first terminal to control the first unit to enter a locked state.
Based on the above scheme, the method further comprises:
and when the editing of the first unit by one editing terminal is finished, unlocking the first unit to control the first unit to enter the unlocked state.
Based on the above scheme, the dividing the target video into a plurality of editing units includes:
dividing a plurality of time axes of the target video into a plurality of editing units to be edited according to the time sequence, wherein the time axes comprise: a video axis, a subtitle axis, a music axis, and/or a filter axis;
the method further comprises the following steps:
and acquiring video materials of different editing units.
A second aspect of the present embodiment provides an online collaborative video editing method, which is applied to an editing terminal, and the method includes:
sending a video creating request to a cloud;
receiving target video address information returned based on the video creation request;
receiving a collaborative editing input;
and sending an editing invitation link carrying the target video to other editing terminals based on the address information according to the collaborative editing input, wherein the editing invitation link is used for enabling the other editing terminals and the editing terminal to collaboratively edit different editing units of the target video online at the same time.
Based on the above scheme, the method further comprises:
sending an editing instruction to the cloud;
receiving first prompt information returned when a first unit, among the editing units, targeted by the editing instruction is in a locked state, being edited by another editing terminal;
and outputting the first prompt message.
Based on the above scheme, the method further comprises:
receiving an authority processing input;
and according to the authority processing input, granting editing authority to other editing terminals, withdrawing editing authority, or modifying the authority range of an already granted editing authority.
A third aspect of the embodiments of the present application provides an online collaborative video editing apparatus, which is applied to a cloud, where the online collaborative video editing apparatus includes: the system comprises a first receiving module, a creating module, a first sending module, a dividing module and an editing module;
the first receiving module is used for receiving a video creating request from an editing terminal;
the creation module is used for responding to the video creation request and creating a target video;
the first sending module is used for returning the address information of the target video to the editing terminal;
the dividing module is used for dividing the target video into a plurality of editing units, wherein the editing authorities of different editing units can be opened to different editing terminals;
the first receiving module is further configured to receive editing instructions of different editing terminals to different editing units;
and the editing module is used for responding to the editing instruction and simultaneously carrying out video editing on different editing units.
Based on the above scheme, the editing module is specifically configured to receive an editing instruction of a first terminal in the editing terminals to a first unit in the editing units;
checking whether the first unit is in a locked state;
and if the first unit is in a locked state, being edited by a second terminal among the editing terminals, shielding the editing instruction of the first terminal.
Based on the above scheme, the apparatus is further configured to:
send, after the editing instruction of the first terminal is shielded, first prompt information to the first terminal, wherein the first prompt information is used for prompting that the first unit is being edited by another editing terminal.
Based on the above scheme, the editing module is further specifically configured to respond to an editing instruction of the first terminal to edit the first unit if the first unit is in an unlocked state that is not edited by the second terminal; and locking the first unit edited by the first terminal to control the first unit to enter a locked state.
Based on the above scheme, the apparatus further comprises:
and the unlocking module is used for unlocking the first unit when the editing of the first unit by one editing terminal is finished, so as to control the first unit to enter the unlocked state.
Based on the above scheme, the dividing module is configured to divide the multiple time axes of the target video into multiple editing units to be edited according to a time sequence, where the time axes include: a video axis, a subtitle axis, a music axis, and/or a filter axis;
the device further comprises:
and the acquisition module is used for acquiring the video materials of different editing units.
A fourth aspect of the present embodiment provides an online collaborative video editing apparatus, which is applied to an editing terminal, and the apparatus includes:
the second sending module is used for sending a video creating request to the cloud end;
the second receiving module is used for receiving target video address information returned based on the video creating request;
the input module is used for receiving collaborative editing input;
the second sending module is further configured to send, according to the collaborative editing input and based on the address information, an editing invitation link carrying the target video to other editing terminals, where the editing invitation link is used for enabling the other editing terminals and the current editing terminal to collaboratively edit different editing units of the target video online at the same time.
Based on the scheme, the second sending module is further configured to send an editing instruction to the cloud;
the second receiving module is further configured to receive first prompt information returned when a first unit, among the editing units, targeted by the editing instruction is in a locked state, being edited by another editing terminal;
the apparatus further includes:
an output module, used for outputting the first prompt information.
Based on the above scheme, the input module is further used for receiving an authority processing input;
the device further comprises:
and the authority processing module is used for granting editing authority to other editing terminals, withdrawing editing authority, or modifying the authority range of an already granted editing authority according to the authority processing input.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes at least: a processor and a memory for storing executable instructions operable on the processor, wherein the processor, when running the executable instructions, performs the steps of any one of the online collaborative video editing methods described above.
In a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, where computer-executable instructions are stored in the computer-readable storage medium, and when executed by a processor, the computer-executable instructions implement the steps in any one of the above-mentioned online collaborative video editing methods.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects: when a video is edited, the cloud enables collaborative editing by multiple editing terminals, so that multiple editing terminals can edit the same video at the same time; compared with a single editing terminal (i.e., a single user) editing a video alone, this improves video editing efficiency. Meanwhile, because the video is edited in the cloud, when two, three, or more editing terminals edit the video collaboratively, the video material does not need to be transmitted to every editing terminal, which reduces the communication cost of multiple editing terminals collaboratively editing the same target video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method for online collaborative video editing according to an example embodiment;
FIG. 2 is a flow diagram illustrating a method for online collaborative video editing according to an example embodiment;
FIG. 3 is a flow diagram illustrating a method for online collaborative video editing according to an example embodiment;
FIG. 4 is a flowchart illustrating a method of online collaborative video editing according to an example embodiment;
fig. 5 is a schematic structural diagram illustrating an online collaborative video editing apparatus according to an exemplary embodiment;
fig. 6 is a schematic structural diagram illustrating an online collaborative video editing apparatus according to an exemplary embodiment;
fig. 7 is a schematic diagram illustrating a connection between a cloud and a client of an online collaborative video editing method according to an exemplary embodiment;
fig. 8 is a flowchart illustrating an online collaborative video editing method according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
As shown in fig. 1, this embodiment provides an online collaborative video editing method applied to a cloud, where the online collaborative video editing method includes:
s110: receiving a video creation request from an editing terminal;
s120: responding to the video creation request, creating a target video and returning address information of the target video to the editing terminal;
s130: dividing the target video into a plurality of editing units, wherein editing authorities of different editing units can be opened to different editing terminals;
s140: receiving editing instructions of different editing terminals to different editing units;
s150: and responding to the editing instruction, and simultaneously carrying out video editing on different editing units.
In the embodiment of the application, the cloud is formed by one or more servers located on a network side.
The editing terminal may be a mobile phone, a tablet computer, a wearable device, or a personal computer (PC). In some embodiments, the editing terminal may be an ordinary consumer terminal. In other embodiments, the editing terminal may also be a dedicated device for video editing.
After a video creation request is received from an editing terminal, at least a video file of a target video is created in the cloud.
Further, the cloud end can create a folder of the target video according to the video creation request, wherein the folder of the target video comprises: video files of target videos, files of video editing materials, configuration information of editing authorities and the like.
After the target video is created, address information is returned to the editing terminal that requested the creation of the target video.
The address information may be a link address of the aforementioned target video or a link address of the target folder.
In one embodiment, in order to implement collaborative online editing by multiple editing terminals and to prevent different editing terminals from performing conflicting editing operations on the target video, the target video is divided into multiple editing units. One editing unit may be a segment of the target video with a preset duration.
In other embodiments, the different materials of the target video to be edited may also be divided into editing units, with editing authority assigned accordingly, for example an image editing unit, an audio editing unit, a subtitle editing unit, and/or a special-effect editing unit. The image editing unit is used for editing video frames; the audio editing unit is used for editing the music or dialogue of the video; the subtitle editing unit is used for editing subtitles in the image frames of the video; and the special-effect editing unit is used for editing the special effects of the video.
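To make the division above concrete, here is a minimal Python sketch of how a cloud service might model a target video and its editing units. It is illustrative only: the class names (TargetVideoProject, EditUnit, UnitKind), the fixed-length time segmentation, and the address format are assumptions, not details taken from the disclosure.

import uuid
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict


class UnitKind(Enum):
    """Possible ways to slice a target video into editing units."""
    TIME_SEGMENT = "time_segment"   # a segment of preset duration
    IMAGE = "image"                 # video-frame editing
    AUDIO = "audio"                 # music / dialogue editing
    SUBTITLE = "subtitle"           # subtitles inside image frames
    EFFECT = "effect"               # special effects


@dataclass
class EditUnit:
    unit_id: str
    kind: UnitKind
    start_sec: float = 0.0          # only meaningful for time segments
    end_sec: float = 0.0


@dataclass
class TargetVideoProject:
    video_id: str
    address: str                    # address information returned to the creating terminal
    units: Dict[str, EditUnit] = field(default_factory=dict)


def create_target_video(segment_seconds: float, total_seconds: float) -> TargetVideoProject:
    """Create a target video and pre-divide it into fixed-length time segments."""
    video_id = uuid.uuid4().hex
    project = TargetVideoProject(video_id=video_id,
                                 address=f"https://cloud.example/videos/{video_id}")
    t = 0.0
    while t < total_seconds:
        unit = EditUnit(unit_id=uuid.uuid4().hex, kind=UnitKind.TIME_SEGMENT,
                        start_sec=t, end_sec=min(t + segment_seconds, total_seconds))
        project.units[unit.unit_id] = unit
        t += segment_seconds
    return project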
In the embodiment of the present application, because different editing units are edited by different terminals during online collaborative editing, editing conflicts on the target video, for example completely opposite editing operations on the same video frame, can be avoided. Collaborative editing is thus achieved simply and conveniently, editing efficiency is improved, and the communication cost between the different editing terminals participating in collaborative editing is reduced.
Further, the method further comprises:
and configuring editing authority for each editing terminal at the cloud.
For example, the cloud may automatically grant the maximum editing right to the editing terminal (or editing account) requesting to create the target video.
For another example, editing authority is granted, at the request of the editing terminal (or editing account) that created the video, to other editing terminals (or editing accounts), and the editing authority of the other editing terminals may be equal to or smaller than that of the editing terminal that requested to create the video. Specifically, for example, the editing authority of one or more other editing terminals (or editing accounts) is granted according to an instruction of the editing terminal (editing account) that requested to create the video. The cloud can also withdraw or modify the editing authority of one or more other editing terminals according to an indication of the editing terminal (or editing account) that requested to create the target video; for example, the editing authority of other editing terminals for different editing units may be modified. If the effect achieved on a certain editing unit has reached the expected effect, a stop instruction from the editing terminal (or editing account) that requested to create the target video may be received, and on receiving this instruction the cloud withdraws the editing authority of the other terminals that hold editing authority for that specific editing unit.
For example, the authority range of the editing authority may be the editing authority for all editing units or may be the editing authority for a part of editing units.
In some embodiments, the editing authority of editing terminals may further be divided into priority levels. For example, when an editing terminal of a first priority and an editing terminal of a second priority both edit the same editing unit, the editing of the first-priority terminal is responded to and the editing of the second-priority terminal is masked, where the first priority is higher than the second priority.
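As one possible reading of the priority rule above, the following sketch resolves two competing edits on the same unit by comparing priority levels. The numeric priority convention and the PendingEdit structure are assumptions made only for illustration.

from dataclasses import dataclass


@dataclass
class PendingEdit:
    terminal_id: str
    unit_id: str
    priority: int  # assumed convention: lower number = higher priority


def resolve_conflict(a: PendingEdit, b: PendingEdit) -> PendingEdit:
    """Return the edit to respond to when two terminals target the same unit.

    The edit from the higher-priority terminal wins; the other edit is masked.
    """
    if a.unit_id != b.unit_id:
        raise ValueError("edits do not conflict: different editing units")
    return a if a.priority <= b.priority else b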
In some embodiments, the method further comprises:
after receiving an editing instruction of an editing terminal, verifying whether the editing terminal has an editing right;
if the verification passes, determining whether to respond to the editing instruction of the editing terminal; if the verification fails, shielding the editing instruction of the editing terminal and sending second prompt information to the corresponding editing terminal, wherein the second prompt information is used for prompting that the corresponding editing terminal does not have editing authority.
The verification information used to verify editing authority includes, but is not limited to, editing terminal information or an authorization password of the terminal. The terminal information includes, but is not limited to, the editing account, communication identifier, and/or device identifier used when the editing terminal requests editing. Communication identifiers include, but are not limited to, mobile phone numbers and/or instant messaging accounts; typical communication accounts include, but are not limited to, a WeChat ID, a Weibo ID, a Facebook ID, an Alipay account, and/or a JD.com account. The authorization password may be a password issued by the cloud and/or a password configured by the editing terminal that requested to create the target video when granting the editing authority. Device identifiers include, but are not limited to, the International Mobile Equipment Identity (IMEI).
In some embodiments, the address information of the target video is secret information, and the cloud only issues the address information to the editing terminal with the editing right, so that only the editing terminal with the editing right can search the target video according to the distributed address information and edit the target video. At this time, the cloud can omit the verification of whether the editing terminal has the editing authority, and the operation of the cloud is simplified.
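A minimal sketch of the authority check described above, assuming the cloud keeps an in-memory table that maps a terminal to its authorization password and to the set of units it may edit. The class and method names (AuthorityStore, grant, verify) are hypothetical, and a real system would presumably back this with persistent storage.

from dataclasses import dataclass, field
from typing import Dict, Optional, Set


@dataclass
class AuthorityRecord:
    allowed_units: Set[str] = field(default_factory=set)  # empty set = all units
    password: Optional[str] = None


class AuthorityStore:
    """Editing-authority configuration held in the cloud (in-memory stand-in)."""

    def __init__(self) -> None:
        self._by_terminal: Dict[str, AuthorityRecord] = {}

    def grant(self, terminal_id: str, units: Set[str], password: Optional[str] = None) -> None:
        self._by_terminal[terminal_id] = AuthorityRecord(set(units), password)

    def revoke(self, terminal_id: str) -> None:
        self._by_terminal.pop(terminal_id, None)

    def verify(self, terminal_id: str, unit_id: str, password: Optional[str] = None) -> bool:
        """Return True if the terminal may edit the given unit."""
        record = self._by_terminal.get(terminal_id)
        if record is None:
            return False                          # no authority configured: send second prompt
        if record.password is not None and record.password != password:
            return False                          # authorization password mismatch
        return not record.allowed_units or unit_id in record.allowed_units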
In order to further reduce the editing conflict of different editing terminals during collaborative editing, in an embodiment of the present application, the method further includes:
and setting the state of the editing unit according to whether the editing terminal edits one editing unit.
The state of an editing unit includes: locked and unlocked states.
In the locked state, the corresponding editing unit can be edited only by the one editing terminal that is currently editing it, and editing of that unit by other editing terminals is shielded.
In the unlocked state, the first editing terminal to request editing is allowed to edit the editing unit, or the editing terminal with the highest priority is allowed to edit it.
After the target video is divided into editing units, the initial state of each editing unit is the unlocked state.
As shown in fig. 2, the S150 further includes:
s151: receiving an editing instruction of a first terminal in the editing terminals to a first unit in the editing units;
s152: checking whether the first unit is in a locked state;
s153: if the first unit is in a locked state, being edited by a second terminal among the editing terminals, the editing instruction of the first terminal is shielded.
After receiving an editing instruction from an editing terminal authorized to edit, the cloud first checks the state of the editing unit (namely the first unit) indicated by the editing instruction; if that editing unit is in the locked state, the cloud shields the editing instruction of the editing terminal, which reduces conflicting editing operations by two or more editing terminals on the same editing unit.
In the embodiment of the present application, the first unit is any one of a plurality of divided editing units. The first terminal and the second terminal are just used for distinguishing two editing terminals. The first terminal may be any terminal authorized to edit the target video, and the second terminal may be any editing terminal other than the first terminal authorized to edit the target video.
Further, the method further comprises:
after an editing instruction of the first terminal is shielded, first prompt information is sent to the first terminal, wherein the first prompt information is used for prompting that the first unit is being edited by other editing terminals.
If the first unit that the first terminal requests to edit is in the locked state, then on one hand the editing instruction of the first terminal is shielded, which reduces editing conflicts between the first terminal and the second terminal that is editing the first unit; on the other hand, the cloud sends first prompt information to the first terminal, informing the user that another editing terminal (that is, another user) is editing the first unit, i.e., telling the user why the editing instruction cannot be responded to, which improves the user experience.
In some embodiments, the first prompt information further carries the identification information of a second unit currently in the unlocked state. The first unit and the second unit are both editing units of the target video, but they are different units. If the first prompt information carries the identification information of a second unit in the unlocked state, an editing terminal that has editing authority for the second unit can conveniently choose to edit the second unit in time, which reduces editing conflicts caused by subsequent editing instructions sent by different editing terminals for the same editing unit.
In some embodiments, as shown in fig. 3, the S150 may further include:
s154: if the first unit is in an unlocked state, i.e., is not being edited by the second terminal, editing the first unit in response to the editing instruction of the first terminal;
s155: and locking the first unit being edited by the first terminal to control the first unit to enter a locked state.
If the first unit is currently in the unlocked state, that is, it is not being edited by any other editing terminal, the editing instruction of the first terminal is responded to, and the cloud edits the first unit according to that editing instruction.
When it is determined that the editing instruction of the first terminal for the first unit will be responded to, the state of the first unit is switched to the locked state.
In some embodiments, the method further comprises:
and when the editing of the first unit by one editing terminal is finished, unlocking the first unit to control the first unit to enter the unlocked state.
After an editing terminal, for example the first terminal or the second terminal, finishes editing the first unit, the state of the first unit is switched from the locked state back to the unlocked state, which facilitates subsequent editing of the first unit by any terminal.
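Pulling the locking behavior of S151 to S155 together, the sketch below shows one way the cloud's conflict arbitration could be written. The method names, the prompt payload, and the inclusion of unlocked-unit hints are assumptions for illustration; apply_edit stands in for whatever editing the cloud actually performs on the unit.

import threading
from typing import Callable, Dict, List


class ConflictArbiter:
    """Tracks which editing unit is locked by which terminal."""

    def __init__(self) -> None:
        self._lock_owner: Dict[str, str] = {}   # unit_id -> terminal_id
        self._mutex = threading.Lock()          # guards concurrent instructions

    def unlocked_units(self, all_units: List[str]) -> List[str]:
        with self._mutex:
            return [u for u in all_units if u not in self._lock_owner]

    def handle_edit_instruction(self, terminal_id: str, unit_id: str,
                                apply_edit: Callable[[str], None],
                                all_units: List[str]) -> dict:
        """Respond to an editing instruction from terminal_id for unit_id (S151-S155)."""
        with self._mutex:
            owner = self._lock_owner.get(unit_id)
            if owner is not None and owner != terminal_id:
                # First unit is locked by a second terminal: mask the instruction and
                # return the first prompt information, with currently unlocked units as a hint.
                return {"accepted": False,
                        "prompt": "this unit is being edited by another terminal",
                        "unlocked_units": [u for u in all_units
                                           if u not in self._lock_owner]}
            # Unlocked (or already owned by this terminal): lock it, then edit.
            self._lock_owner[unit_id] = terminal_id
        apply_edit(unit_id)
        return {"accepted": True}

    def finish_editing(self, terminal_id: str, unit_id: str) -> None:
        """Unlock the unit once the terminal has finished editing it."""
        with self._mutex:
            if self._lock_owner.get(unit_id) == terminal_id:
                del self._lock_owner[unit_id]

Applying the edit outside the mutex is a deliberate choice in this sketch: a long-running edit then does not block lock checks issued by other terminals.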
In some embodiments, the S130 may include:
dividing a plurality of time axes of the target video into a plurality of editing units to be edited according to the time sequence, wherein the time axes comprise: a video axis, a subtitle axis, a music axis, and/or a filter axis;
the method further comprises the following steps:
and acquiring video materials of different editing units.
The manner of acquiring the video material of different editing units here includes, but is not limited to, at least one of the following:
receiving video material from one or more editing terminals;
selecting video materials stored locally in a cloud according to the indication of one or more editing terminals;
and downloading the video material from the network according to the indication of one or more editing terminals.
Video material herein includes, but is not limited to, at least one of:
image frames, background music, subtitles, bullet-screen (danmaku) comments, and special-effect content.
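The sketch below illustrates dividing each time axis into its own chronological editing units and recording where each unit's material came from. The axis names follow the description above; the MaterialSource values and the function signature are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum
from typing import List


class Axis(Enum):
    VIDEO = "video"
    SUBTITLE = "subtitle"
    MUSIC = "music"
    FILTER = "filter"


class MaterialSource(Enum):
    UPLOADED_BY_TERMINAL = "uploaded"     # received from an editing terminal
    CLOUD_LIBRARY = "cloud_library"       # selected from material stored in the cloud
    DOWNLOADED_FROM_NETWORK = "network"   # downloaded on a terminal's indication


@dataclass
class AxisUnit:
    axis: Axis
    index: int                # chronological position on the axis
    start_sec: float
    end_sec: float
    material_uri: str = ""    # filled in once material is acquired
    source: MaterialSource = MaterialSource.CLOUD_LIBRARY


def divide_axes(total_seconds: float, segment_seconds: float) -> List[AxisUnit]:
    """Split every time axis of the target video into chronological editing units."""
    units: List[AxisUnit] = []
    for axis in Axis:
        t, index = 0.0, 0
        while t < total_seconds:
            units.append(AxisUnit(axis=axis, index=index, start_sec=t,
                                  end_sec=min(t + segment_seconds, total_seconds)))
            t += segment_seconds
            index += 1
    return units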
As shown in fig. 4, this embodiment provides an online collaborative video editing method, which is applied to an editing terminal, and the method includes:
s210: sending a video creating request to a cloud;
s220: receiving target video address information returned based on the video creation request;
s230: receiving a collaborative editing input;
s240: and sending an editing invitation link carrying the target video to other editing terminals based on the address information according to the collaborative editing input, wherein the editing invitation link is used for enabling the other editing terminals and the editing terminal to collaboratively edit different editing units of the target video online at the same time.
In the embodiment of the present application, the editing terminal may be a consumer terminal of an ordinary consumer, such as a mobile phone, a tablet, or a wearable device.
The editing terminal may be installed with an online video editor, which may be, for example, an application (APP), an applet (mini program), a quick app, or a web-page online video editor.
If the application interface of the online editor receives a user input instructing it to create a video, the online video editor sends the video creation request to the cloud. After the cloud creates the target video, the address information of the target video is received. The editing terminal that requested creation of the target video may then assign editing authority to other editing terminals based on the address information.
For example, the editing terminal may directly forward the address information to other editing terminals, so as to authorize the editing rights of other editing terminals.
For another example, the editing terminal sends a configuration instruction of the editing permission to the cloud according to the address information; the configuration command carries the address information and information of other editing terminals.
The address information is used by the cloud to locate the video to be edited for which editing authority is being granted to the other editing terminals.
The information of the other editing terminals includes, but is not limited to, the communication identifier, device identifier, or editing account of the terminal. The editing account may be an account allocated by the cloud.
In some embodiments, the configuration instruction carries the address information and an authorization password. The editing terminal issues the authorization password to the other editing terminals being granted editing authority and also reports the authorization password to the cloud, so that the cloud can conveniently verify whether an editing terminal has the authority.
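On the terminal side, the flow of S210 to S240 could look roughly like the following. The HTTP endpoints, JSON fields, and the use of the requests library are purely hypothetical, since the disclosure does not specify a wire protocol; they only make the sequence of requests concrete.

from typing import List

import requests

CLOUD_BASE = "https://cloud.example"   # hypothetical cloud endpoint, not from the disclosure


def create_target_video(session: requests.Session, title: str) -> str:
    """Send the video creation request (S210) and return the target video's address (S220)."""
    resp = session.post(f"{CLOUD_BASE}/videos", json={"title": title}, timeout=10)
    resp.raise_for_status()
    return resp.json()["address"]


def invite_collaborators(session: requests.Session, address: str,
                         collaborator_ids: List[str], authorization_password: str) -> str:
    """Configure editing authority for collaborators and build the invitation link (S230-S240)."""
    # Report the collaborator info and authorization password to the cloud so it can
    # verify authority later; the endpoint and field names are assumptions.
    session.post(f"{CLOUD_BASE}/permissions", timeout=10, json={
        "address": address,
        "collaborators": collaborator_ids,
        "password": authorization_password,
    }).raise_for_status()
    # The invitation link carries the target video's address; the format is illustrative.
    return f"{address}?invite=1"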
In some embodiments, the method further comprises:
sending an editing instruction to the cloud;
receiving first prompt information returned when a first unit, among the editing units, targeted by the editing instruction is in a locked state, being edited by another editing terminal;
and outputting the first prompt message.
When the editing terminal sends an editing instruction to the cloud, the first unit it requests to edit may be in the locked state; in that case, the editing terminal receives the first prompt information and outputs it, so the user of the editing terminal is informed that another user is editing the first unit at this time.
In some embodiments, the method further comprises:
receiving an authority processing input;
and according to the authority processing input, granting editing authority to other editing terminals, withdrawing editing authority, or modifying the authority range of an already granted editing authority.
Specifically, for example, an authority processing input from a user is received on the user interface (UI) of the online video editor, and then, according to that input, editing authority is directly granted to, withdrawn from, or modified for other editing terminals. The editing terminal may also, according to the authority processing input, grant, withdraw, and/or update the editing authority of other editing terminals through the cloud.
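One way the terminal could translate an authority processing input from the UI into a request to the cloud is sketched below, reusing the hypothetical endpoint from the previous sketch; the action names and payload fields are assumptions.

from enum import Enum
from typing import Iterable, Optional

import requests

CLOUD_BASE = "https://cloud.example"   # same hypothetical endpoint as in the previous sketch


class AuthorityAction(Enum):
    GRANT = "grant"
    WITHDRAW = "withdraw"
    MODIFY_SCOPE = "modify_scope"


def process_authority_input(session: requests.Session, address: str, action: AuthorityAction,
                            collaborator_id: str,
                            unit_ids: Optional[Iterable[str]] = None) -> None:
    """Grant, withdraw, or modify another terminal's editing authority via the cloud."""
    payload = {
        "address": address,
        "collaborator": collaborator_id,
        "action": action.value,
        "units": list(unit_ids or []),   # empty list = authority over all editing units
    }
    session.post(f"{CLOUD_BASE}/permissions/update", json=payload, timeout=10).raise_for_status()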
The embodiment of the present application further provides an online collaborative video editing method, which may further include:
receiving an editing authority granted directly or through a cloud service by an editing terminal which creates a target video at a cloud;
and sending an editing instruction to the cloud end by using the editing permission, wherein the editing instruction is used for triggering the cloud end to edit an editing unit of the target video, and only one editing terminal is allowed to edit in one editing unit at the same moment.
In some embodiments, the method further comprises:
when the editing unit requested by the editing instruction is in a locked state, receiving first prompt information returned by the cloud end;
and outputting the first prompt message.
As shown in fig. 5, the present embodiment provides an online collaborative video editing apparatus, which is applied to a cloud, and the online collaborative video editing apparatus includes: a first receiving module 51, a creating module 52, a first transmitting module 53, a dividing module 54 and an editing module 55;
the first receiving module 51 is configured to receive a video creation request from an editing terminal;
the creating module 52 is configured to respond to the video creating request to create a target video;
the first sending module 53 is configured to return address information of a target video to the editing terminal;
the dividing module 54 is configured to divide the target video into a plurality of editing units, where editing permissions of different editing units can be opened to different editing terminals;
the first receiving module is further configured to receive editing instructions of different editing terminals to different editing units;
the editing module 55 is configured to respond to the editing instruction and perform video editing on different editing units at the same time.
In some embodiments, the first receiving module 51, the creating module 52, the first sending module 53, the dividing module 54, and the editing module 55 may be program modules, and the program modules are executed by a processor in the editing terminal to implement the operations of the modules.
In other embodiments, the first receiving module 51, the creating module 52, the first sending module 53, the dividing module 54, and the editing module 55 may be combined software and hardware modules, which may include various programmable arrays. Programmable arrays include, but are not limited to, field-programmable gate arrays or complex programmable logic devices.
In still other embodiments, the first receiving module 51, creating module 52, first sending module 53, dividing module 54, and editing module 55 may be pure hardware modules, including but not limited to application-specific integrated circuits.
In some embodiments, the editing module 55 is specifically configured to: receive an editing instruction from a first terminal among the editing terminals for a first unit among the editing units;
check whether the first unit is in a locked state;
and, if the first unit is in a locked state, being edited by a second terminal among the editing terminals, shield the editing instruction of the first terminal.
In some embodiments, the apparatus is further configured to:
send, after the editing instruction of the first terminal is shielded, first prompt information to the first terminal, wherein the first prompt information is used for prompting that the first unit is being edited by another editing terminal.
In some embodiments, the editing module 55 is further specifically configured to, if the first unit is in an unlocked state that is not edited by the second terminal, edit the first unit in response to an editing instruction of the first terminal; and locking the first unit edited by the first terminal to control the first unit to enter a locked state.
In some embodiments, the apparatus further comprises:
and the unlocking module is used for unlocking the first unit when the editing of the first unit by one editing terminal is finished, so as to control the first unit to enter the unlocked state.
In some embodiments, the dividing module 54 is configured to divide the multiple time axes of the target video into multiple editing units to be edited according to a chronological order, where the time axes include: a video axis, a subtitle axis, a music axis, and/or a filter axis;
the device further comprises:
and the acquisition module is used for acquiring the video materials of different editing units.
As shown in fig. 6, this embodiment provides an online collaborative video editing apparatus, which is applied to an editing terminal, and the apparatus includes:
the second sending module 61 is configured to send a video creation request to the cloud;
a second receiving module 62, configured to receive target video address information returned based on the video creation request;
an input module 63, configured to receive collaborative editing input;
the second sending module 61 is further configured to send, according to the collaborative editing input and based on the address information, an editing invitation link carrying the target video to other editing terminals, where the editing invitation link is used for enabling the other editing terminals and the current editing terminal to collaboratively edit different editing units of the target video online at the same time.
In some embodiments, the second receiving module 62 and the input module 63 may be program modules, and the program modules are executed by a processor in the editing terminal to implement the operations of the modules.
In other embodiments, the second receiving module 62 and the input module 63 may be combined software and hardware modules, which may include various programmable arrays. Programmable arrays include, but are not limited to, field-programmable gate arrays or complex programmable logic devices.
In still other embodiments, the second receiving module 62 and the input module 63 may be pure hardware modules, including but not limited to application-specific integrated circuits.
In some embodiments, the second sending module 61 is further configured to send an editing instruction to the cloud;
the second receiving module 62 is further configured to receive first prompt information returned when a first unit, among the editing units, targeted by the editing instruction is in a locked state, being edited by another editing terminal;
the apparatus further includes:
an output module, used for outputting the first prompt information.
In some embodiments, the input module 63 is further configured to receive an authority processing input;
the device further comprises:
an authority processing module, used for granting editing authority to other editing terminals, withdrawing editing authority, or modifying the authority range of an already granted editing authority according to the authority processing input.
Two specific examples are provided below in connection with any of the embodiments described above:
example 1:
the present example provides an online collaborative video editing method, which may include:
as shown in fig. 7, an online video editor is provided that provides visualization on APP, applets, web applications, through which a video work can be created based on user input, and invites collaborators to join the editing. The collaborator is the user of the other editing terminal.
The system stores the video works in the cloud, takes the segments on different video axes as editing units, and opens the video works to all collaborators for editing. Therefore, in some cases, the editing unit is segmented according to time, and the editing unit can also be called as a segment.
And locking and unlocking the editing units through a conflict arbitration module, wherein one editing unit enters a locked state after being locked, and the locked editing unit enters an unlocked state after being unlocked.
The cloud judges whether the current editor has editing authority for the segment, thereby ensuring that different editing behaviors do not conflict.
The terminal synchronizes each editing action to the cloud, and the action is displayed in real time in the on-device editor. Each edit by each collaborator (editing terminal) is recorded, which facilitates tracing and restoring operations on the work (a sketch of one possible operation log follows below).
After the target video has been edited, an export operation can be performed on the video work; at that point the video is synthesized in the cloud and downloaded to the terminal.
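Because every edit is synchronized to the cloud and every operation is recorded, an append-only operation log is a natural way to support the tracing and recovery mentioned above. The sketch below is an assumption about how that could look; the record fields and the replay-based restore are illustrative, not taken from the disclosure.

import time
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class EditOperation:
    terminal_id: str
    unit_id: str
    description: str                   # e.g. "trim segment to 00:05-00:12"
    timestamp: float = field(default_factory=time.time)


class OperationLog:
    """Append-only record of every editing action on a video work."""

    def __init__(self) -> None:
        self._ops: List[EditOperation] = []

    def record(self, op: EditOperation) -> None:
        self._ops.append(op)

    def trace(self, unit_id: str) -> List[EditOperation]:
        """Return the history of a single editing unit, oldest first."""
        return [op for op in self._ops if op.unit_id == unit_id]

    def restore_to(self, index: int, replay: Callable[[EditOperation], None]) -> None:
        """Rebuild the work up to (and including) operation index by replaying the log."""
        for op in self._ops[:index + 1]:
            replay(op)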
Referring to fig. 7, the cloud may include:
the account management module is used for managing an editing account, for example, an editing account with editing authority and/or an editing account of an editing terminal of the target video.
The invitation and authority control module is used for authorizing, recovering, updating and other operations of the editing authority;
a production data management module for managing video material, e.g., metadata of a video; meanwhile, the product data management module is further configured to manage the edited video, where the management includes: storage management, compression management, editing management, and/or download management.
And the conflict arbitration module is used for allowing or forbidding one editing terminal to edit a certain editing unit according to the locking state and the unlocking state of the editing unit.
An online video editor visualization module to provide visualization of video online editing, for example, including but not limited to: the provision of a visual editing interface, and/or the visualization of an online preview or play of a video.
Synchronizing project file data in FIG. 7 may include: synchronization of material elements and/or synchronization of the input data of the video being edited or already edited.
The editing terminal (i.e. the client) can interact with the cloud through an applet, a webpage or an APP, including initiating a video creation request and an editing instruction.
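Read as software components, the fig. 7 modules could be wired together roughly as follows. This is a structural sketch only: each class is a thin, simplified stand-in (fuller authority and locking sketches appear earlier), and the method names are assumptions.

class AccountManagement:
    """Manages editing accounts (creator account, collaborator accounts)."""
    def __init__(self):
        self.accounts = {}            # account_id -> profile data


class InvitationAndAuthorityControl:
    """Grants and withdraws editing authority per unit (simplified stand-in)."""
    def __init__(self):
        self.allowed = {}             # terminal_id -> set of unit_ids

    def may_edit(self, terminal_id, unit_id):
        return unit_id in self.allowed.get(terminal_id, set())


class ConflictArbitration:
    """Locks units while they are being edited (simplified stand-in)."""
    def __init__(self):
        self.lock_owner = {}          # unit_id -> terminal_id

    def try_lock(self, terminal_id, unit_id):
        owner = self.lock_owner.setdefault(unit_id, terminal_id)
        return owner == terminal_id


class ProductionDataManagement:
    """Stores material and the edited work (storage / compression / export)."""
    def __init__(self):
        self.materials = {}


class CloudEditingService:
    """Composition of the cloud-side modules shown in fig. 7."""
    def __init__(self):
        self.accounts = AccountManagement()
        self.authority = InvitationAndAuthorityControl()
        self.arbiter = ConflictArbitration()
        self.data = ProductionDataManagement()

    def handle_edit(self, terminal_id, unit_id):
        """Authority check first, then conflict arbitration, as described in the text above."""
        if not self.authority.may_edit(terminal_id, unit_id):
            return "no editing authority"                       # second prompt information
        if not self.arbiter.try_lock(terminal_id, unit_id):
            return "unit is being edited by another terminal"   # first prompt information
        return "edit accepted"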
Example 2:
as shown in fig. 8, the present example provides an online video editing method, and a basic flow may include:
opening an editor: a user opens an APP/small program/webpage on a terminal such as a mobile phone/computer/tablet and the like, logs in an account and opens a visual online video editor.
Creating a video work: based on the user input, the online video editor creates a video work.
Sending a collaboration invitation to other editing terminals: Based on user input, an invitation link can be generated and external collaborators can be invited to join the editing process of the video work; management operations such as viewing and removing the added collaborators are also supported.
Editing metadata: The basic unit that a user can edit can be regarded as metadata. In the following description, each time axis in the video work (including but not limited to a video axis, a subtitle axis, a music axis, a filter axis, and so on; there may be several time axes) is treated as an independent, editable piece of metadata. Each piece of metadata has a corresponding record in the cloud database.
Conflict arbitration: When the user edits a segment on a time axis, a series of conflict checks are required:
When the user starts editing a segment, check whether the segment is already locked by another collaborator; if so, prompt the current user that the segment is being edited by someone else and do not allow editing; otherwise allow editing, in which case the segment needs to be locked.
When editing of the segment is finished, check whether the segment overlaps the start and end times of other segments on the time axis; if so, prompt that the segment times conflict and guide the user to modify the segment's time range; the current edit does not take effect until the segment times no longer conflict (see the sketch after this list).
When the user's edit of the segment takes effect, the lock on the segment is released, and editing of the segment is opened up to other collaborators again.
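The time-overlap check in the second arbitration step above could be implemented along the following lines; this is a sketch only, assuming each segment on an axis carries start and end times in seconds.

from dataclasses import dataclass
from typing import List


@dataclass
class Segment:
    segment_id: str
    start_sec: float
    end_sec: float


def overlapping_segments(edited: Segment, others: List[Segment]) -> List[Segment]:
    """Return the segments on the same time axis whose time ranges overlap the edited one.

    Two segments overlap when each starts before the other ends; touching
    endpoints (end == start) are not treated as a conflict here.
    """
    return [s for s in others
            if s.segment_id != edited.segment_id
            and edited.start_sec < s.end_sec
            and s.start_sec < edited.end_sec]


# Usage: if overlapping_segments(...) is non-empty, prompt a segment-time conflict
# and ask the user to adjust the start/end times before the edit takes effect.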
Data storage: The editing actions of the current user are synchronized to and stored in the cloud, and are automatically synchronized to all terminals, so that the other editors see the latest version of the video work.
Operation history saving: Every operation record on the current file is stored in the cloud, so that the file can be traced and restored conveniently.
Exporting the video: After the user finishes editing and an operation on the export-video control is detected, video synthesis is performed in the cloud, and the synthesized video is saved to the terminal the user is currently using.
An embodiment of the present application further provides an electronic device, where the electronic device at least includes: a processor and a memory for storing executable instructions operable on the processor, wherein:
when the processor is configured to execute the executable instructions, the executable instructions perform the steps of the online collaborative video editing method applied in the cloud or any editing terminal, for example, perform the method shown in any one of fig. 1 to 4 and 7.
An embodiment of the present invention further provides a non-transitory computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when executed by a processor, the computer-executable instructions implement the steps in the online collaborative video editing method applied in a cloud or any editing terminal, for example, the method shown in any one of fig. 1 to 4 and 7 is executed.
Referring to fig. 7, apparatus 700 may include one or more of the following components: processing components 701, memory 702, power components 703, multimedia components 704, audio components 705, input/output (I/O) interfaces 706, sensor components 707, and communication components 708.
The processing component 701 generally controls the overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing components 701 may include one or more processors 710 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 701 may also include one or more modules that facilitate interaction between processing component 701 and other components. For example, the processing component 701 may include a multimedia module to facilitate interaction between the multimedia component 704 and the processing component 701.
The memory 710 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 702 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 703 provides power to the various components of the device 700. The power supply components 703 may include: a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 700.
The multimedia component 704 includes a screen that provides an output interface between the device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 704 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and/or rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 705 is configured to output and/or input audio signals. For example, audio component 705 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 710 or transmitted via the communication component 708. In some embodiments, audio component 705 also includes a speaker for outputting audio signals.
The I/O interface 706 provides an interface between the processing component 701 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 707 includes one or more sensors for providing various aspects of state assessment for the apparatus 700. For example, the sensor assembly 707 may detect an open/closed state of the apparatus 700, the relative positioning of components, such as a display and keypad of the apparatus 700, the sensor assembly 707 may also detect a change in position of the apparatus 700 or one of the components of the apparatus 700, the presence or absence of user contact with the apparatus 700, orientation or acceleration/deceleration of the apparatus 700, and a change in temperature of the apparatus 700. The sensor component 707 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 707 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 707 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 708 is configured to facilitate communication between the apparatus 700 and other devices in a wired or wireless manner. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 708 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 708 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 702 comprising instructions, executable by the processor 710 of the apparatus 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein, which when executed by a processor of a mobile terminal, enable the mobile terminal to perform any of the methods provided in the above embodiments.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (20)

1. An online collaborative video editing method, applied to a cloud, the method comprising:
receiving a video creation request from an editing terminal;
in response to the video creation request, creating a target video and returning address information of the target video to the editing terminal;
dividing the target video into a plurality of editing units, wherein editing authorities over different editing units can be granted to different editing terminals;
receiving editing instructions from different editing terminals for different editing units;
and in response to the editing instructions, performing video editing on the different editing units simultaneously.
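For illustration only, the following Python sketch shows one way the cloud-side flow of claim 1 could be realized; the class and function names (CloudEditor, handle_create_request, and so on), the URL format and the in-memory storage are assumptions and not part of the claimed method.

import uuid

class CloudEditor:
    # Hypothetical cloud-side service sketching the flow of claim 1.

    def __init__(self):
        self.videos = {}  # video_id -> {"units": {...}, "owner": ...}

    def handle_create_request(self, editing_terminal):
        # Create a target video in response to the creation request and
        # return its address information to the requesting editing terminal.
        video_id = str(uuid.uuid4())
        self.videos[video_id] = {"units": {}, "owner": editing_terminal}
        return {"video_id": video_id,
                "address": "https://cloud.example.com/videos/" + video_id}

    def divide_into_units(self, video_id, unit_ids):
        # Divide the target video into editing units (authority handling is
        # omitted in this sketch).
        for unit_id in unit_ids:
            self.videos[video_id]["units"][unit_id] = {"edits": []}

    def handle_edit(self, video_id, unit_id, terminal, edit):
        # Receive editing instructions from different terminals and record
        # them against their respective units, so units can be edited in parallel.
        self.videos[video_id]["units"][unit_id]["edits"].append((terminal, edit))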
2. The method according to claim 1, wherein performing video editing on the different editing units simultaneously in response to the editing instructions further comprises:
receiving an editing instruction of a first terminal among the editing terminals for a first unit among the editing units;
checking whether the first unit is in a locked state;
and if the first unit is in a locked state of being edited by a second terminal among the editing terminals, blocking the editing instruction of the first terminal.
3. The method of claim 2, further comprising:
after the editing instruction of the first terminal is blocked, sending first prompt information to the first terminal, wherein the first prompt information is used for prompting that the first unit is being edited by another editing terminal.
4. The method according to claim 2, wherein performing video editing on the different editing units simultaneously in response to the editing instructions further comprises:
if the first unit is in an unlocked state of not being edited by the second terminal, editing the first unit in response to the editing instruction of the first terminal;
and locking the first unit being edited by the first terminal to control the first unit to enter the locked state.
5. The method of claim 4, further comprising:
when editing of the first unit by an editing terminal is finished, unlocking the first unit to control the first unit to enter the unlocked state.
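As a rough sketch of the lock handling described in claims 2 to 5 (not the inventors' implementation), the helpers below block an edit when the target unit is already locked by another terminal, lock the unit for the terminal that edits it first, and unlock it when editing finishes; the names and the in-memory lock table are assumptions.

class UnitLockedError(Exception):
    # Raised when a unit is already being edited by another terminal (claim 2).
    pass

locks = {}  # unit_id -> terminal currently editing the unit; absent means unlocked

def try_edit(unit_id, terminal, edit, apply_edit):
    holder = locks.get(unit_id)
    if holder is not None and holder != terminal:
        # Locked state edited by a second terminal: block the instruction so the
        # caller can prompt the first terminal (claims 2 and 3).
        raise UnitLockedError(f"unit {unit_id} is being edited by {holder}")
    # Unlocked: lock the unit for this terminal and apply the edit (claim 4).
    locks[unit_id] = terminal
    apply_edit(unit_id, edit)

def finish_edit(unit_id, terminal):
    # When editing of the unit is finished, unlock it (claim 5).
    if locks.get(unit_id) == terminal:
        del locks[unit_id]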
6. The method of claim 1, wherein dividing the target video into a plurality of editing units comprises:
dividing a plurality of timelines of the target video into the plurality of editing units to be edited in chronological order, wherein the timelines comprise: a video axis, a subtitle axis, a music axis, and/or a filter axis;
the method further comprising:
acquiring video materials of the different editing units.
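The division of claim 6 could, for instance, cut each timeline of the target video into time-ordered units of equal length; the sketch below is hypothetical and assumes a fixed unit duration, which the claim does not require.

def divide_timelines(duration_s, unit_length_s=30.0,
                     timelines=("video", "subtitle", "music", "filter")):
    # Split each timeline into chronologically ordered editing units.
    units = []
    for timeline in timelines:
        start = 0.0
        while start < duration_s:
            end = min(start + unit_length_s, duration_s)
            units.append({"timeline": timeline, "start": start, "end": end})
            start = end
    return units

# Example: divide_timelines(90.0) yields three 30-second units on each of the
# four timelines, i.e. twelve editing units in total.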
7. An online collaborative video editing method, applied to an editing terminal, the method comprising:
sending a video creation request to a cloud;
receiving address information of a target video returned based on the video creation request;
receiving a collaborative editing input;
and according to the collaborative editing input, sending an editing invitation link carrying the target video to other editing terminals based on the address information, wherein the editing invitation link is used for enabling the other editing terminals and the editing terminal to collaboratively edit different editing units of the target video online at the same time.
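Purely as an illustration of claim 7, the terminal-side sketch below builds an editing invitation link from the returned address information and sends it to the other editing terminals; the link format, field names and the send callback are assumptions.

from urllib.parse import urlencode

def build_invitation_link(address_info, inviter_id):
    # Build a hypothetical editing invitation link carrying the target video.
    query = urlencode({"video_id": address_info["video_id"], "inviter": inviter_id})
    return address_info["address"] + "/invite?" + query

def send_invitations(address_info, inviter_id, other_terminals, send):
    # Send the invitation link to the other editing terminals so that they and
    # this terminal can edit different units of the target video at the same time.
    link = build_invitation_link(address_info, inviter_id)
    for terminal in other_terminals:
        send(terminal, link)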
8. The method of claim 7, further comprising:
sending an editing instruction to the cloud;
receiving first prompt information returned when a first unit, among the editing units, targeted by the editing instruction is in a locked state of being edited by another editing terminal;
and outputting the first prompt information.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
receiving an authority processing input;
and according to the authority processing input, granting editing authority to other editing terminals, withdrawing editing authority, or modifying the authority scope of editing authorities with different authorizations.
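A minimal sketch of the authority handling described in claim 9, assuming a simple per-terminal permission table; granting, withdrawing and scope modification are modelled as dictionary updates and are not the claimed implementation.

permissions = {}  # terminal_id -> set of editing units the terminal may edit

def grant_authority(terminal_id, unit_ids):
    # Grant editing authority over the given units to another editing terminal.
    permissions.setdefault(terminal_id, set()).update(unit_ids)

def withdraw_authority(terminal_id):
    # Withdraw the editing authority previously granted to the terminal.
    permissions.pop(terminal_id, None)

def modify_authority_scope(terminal_id, unit_ids):
    # Modify the authority scope of an already granted editing authority.
    permissions[terminal_id] = set(unit_ids)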
10. An online collaborative video editing apparatus, applied to a cloud, the online collaborative video editing apparatus comprising: a first receiving module, a creation module, a first sending module, a dividing module and an editing module;
the first receiving module is used for receiving a video creation request from an editing terminal;
the creation module is used for responding to the video creation request and creating a target video;
the first sending module is used for returning the address information of the target video to the editing terminal;
the dividing module is used for dividing the target video into a plurality of editing units, wherein editing authorities over different editing units can be granted to different editing terminals;
the first receiving module is further configured to receive editing instructions from different editing terminals for different editing units;
and the editing module is used for responding to the editing instructions and simultaneously performing video editing on the different editing units.
11. The apparatus according to claim 10, wherein the editing module is specifically configured to: receive an editing instruction of a first terminal among the editing terminals for a first unit among the editing units;
check whether the first unit is in a locked state;
and if the first unit is in a locked state of being edited by a second terminal among the editing terminals, block the editing instruction of the first terminal.
12. The apparatus of claim 11, wherein the apparatus is further configured to send first prompt information to the first terminal after the editing instruction of the first terminal is blocked, wherein the first prompt information is used for prompting that the first unit is being edited by another editing terminal.
13. The apparatus according to claim 12, wherein the editing module is further configured to edit the first unit in response to the editing instruction of the first terminal if the first unit is in an unlocked state of not being edited by the second terminal, and to lock the first unit being edited by the first terminal to control the first unit to enter the locked state.
14. The apparatus of claim 13, further comprising:
an unlocking module, used for unlocking the first unit when editing of the first unit by an editing terminal is finished, so as to control the first unit to enter the unlocked state.
15. The apparatus according to claim 10, wherein the dividing module is configured to divide a plurality of timelines of the target video into the plurality of editing units to be edited in chronological order, wherein the timelines include: a video axis, a subtitle axis, a music axis, and/or a filter axis;
the apparatus further comprises:
an acquisition module, used for acquiring video materials of the different editing units.
16. An online collaborative video editing apparatus, applied to an editing terminal, the apparatus comprising:
a second sending module, used for sending a video creation request to a cloud;
a second receiving module, used for receiving address information of a target video returned based on the video creation request;
an input module, used for receiving a collaborative editing input;
the second sending module is further configured to send, according to the collaborative editing input and based on the address information, an editing invitation link carrying the target video to other editing terminals, wherein the editing invitation link is used for enabling the other editing terminals and the current editing terminal to collaboratively edit different editing units of the target video online at the same time.
17. The apparatus according to claim 16, wherein the second sending module is further configured to send an editing instruction to the cloud;
the second receiving module is further configured to receive first prompt information returned when a first unit, among the editing units, targeted by the editing instruction is in a locked state of being edited by another editing terminal;
the apparatus further comprises:
an output module, used for outputting the first prompt information.
18. The apparatus of claim 16 or 17, wherein the input module is further configured to receive an authority processing input;
the apparatus further comprises:
an authority processing module, used for granting editing authority to other editing terminals, withdrawing editing authority, or modifying the authority scope of editing authorities with different authorizations according to the authority processing input.
19. An electronic device, characterized in that the electronic device comprises at least: a processor and a memory for storing executable instructions operable on the processor, wherein:
the processor is configured to execute the executable instructions, and the executable instructions, when executed, perform the steps of the online collaborative video editing method provided by any one of claims 1 to 6 or claims 7 to 9.
20. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement the steps in the online collaborative video editing method provided in any one of claims 1 to 6 or 7 to 9.
CN202010158728.2A 2020-03-09 2020-03-09 Online collaborative video editing method and device Pending CN111277905A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158728.2A CN111277905A (en) 2020-03-09 2020-03-09 Online collaborative video editing method and device

Publications (1)

Publication Number Publication Date
CN111277905A (en) 2020-06-12

Family

ID=71002362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158728.2A Pending CN111277905A (en) 2020-03-09 2020-03-09 Online collaborative video editing method and device

Country Status (1)

Country Link
CN (1) CN111277905A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787443A (en) * 2005-12-02 2006-06-14 无锡永中科技有限公司 Method for realizing file coordination processing
CN101454774A (en) * 2006-03-31 2009-06-10 谷歌公司 Collaborative online spreadsheet application
US20130132859A1 (en) * 2011-11-18 2013-05-23 Institute For Information Industry Method and electronic device for collaborative editing by plurality of mobile devices
US20140047035A1 (en) * 2012-08-07 2014-02-13 Quanta Computer Inc. Distributing collaborative computer editing system
CN103680559A (en) * 2012-09-19 2014-03-26 新奥特(北京)视频技术有限公司 Method based on time line segment collaborative package
CN104717239A (en) * 2013-12-12 2015-06-17 鸿合科技有限公司 Method of cooperatively editing shared file, server and user side
CN104615586A (en) * 2015-01-21 2015-05-13 上海理工大学 Real-time cooperative editing system
CN104991886A (en) * 2015-07-22 2015-10-21 网易(杭州)网络有限公司 Data table editing method, apparatus and system
CN105743973A (en) * 2016-01-22 2016-07-06 上海科牛信息科技有限公司 Multi-user multi-device real-time synchronous cloud cooperation method and system
US20170272444A1 (en) * 2016-03-21 2017-09-21 Alfresco Software, Inc. Management of collaborative content item modification
CN110166652A (en) * 2019-05-28 2019-08-23 成都依能科技股份有限公司 Multi-track audio-visual synchronization edit methods
CN110795252A (en) * 2019-09-20 2020-02-14 北京浪潮数据技术有限公司 Method, device, equipment and storage medium for multi-user serial editing of file

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111901535A (en) * 2020-07-23 2020-11-06 北京达佳互联信息技术有限公司 Video editing method, device, electronic equipment, system and storage medium
CN112218102A (en) * 2020-08-29 2021-01-12 上海量明科技发展有限公司 Video content package making method, client and system
CN112218102B (en) * 2020-08-29 2024-01-26 上海量明科技发展有限公司 Video content package making method, client and system
CN112261416A (en) * 2020-10-20 2021-01-22 广州博冠信息科技有限公司 Cloud-based video processing method and device, storage medium and electronic equipment
WO2022100092A1 (en) * 2020-11-13 2022-05-19 深圳市前海手绘科技文化有限公司 Animation video collaborative editing method and apparatus
CN112308953A (en) * 2020-11-13 2021-02-02 深圳市前海手绘科技文化有限公司 Animation video collaborative editing method and device
CN112651720A (en) * 2021-01-04 2021-04-13 中国铁道科学研究院集团有限公司电子计算技术研究所 Multi-user collaborative editing method and device of railway BIM system based on web real-time modeling
CN112839245A (en) * 2021-01-29 2021-05-25 杭州小影创新科技股份有限公司 Video engineering sharing method based on two-dimensional code technology
CN113709575B (en) * 2021-04-07 2024-04-16 腾讯科技(深圳)有限公司 Video editing processing method and device, electronic equipment and storage medium
CN113709575A (en) * 2021-04-07 2021-11-26 腾讯科技(深圳)有限公司 Video editing processing method and device, electronic equipment and storage medium
CN113099130A (en) * 2021-04-15 2021-07-09 北京字节跳动网络技术有限公司 Collaborative video processing method and device, electronic equipment and storage medium
CN113312911B (en) * 2021-05-26 2022-07-12 上海晏鼠计算机技术股份有限公司 Automatic authorization and intelligent text segment creation method based on outline
CN113312911A (en) * 2021-05-26 2021-08-27 上海晏鼠计算机技术股份有限公司 Automatic authorization and intelligent text segment creation method based on outline
CN114025215A (en) * 2021-11-04 2022-02-08 深圳传音控股股份有限公司 File processing method, mobile terminal and storage medium
CN118075409A (en) * 2024-04-19 2024-05-24 贵州联广科技股份有限公司 Data fusion method and device for multi-level user terminal

Similar Documents

Publication Publication Date Title
CN111277905A (en) Online collaborative video editing method and device
WO2018210137A1 (en) Method for processing message in group session, storage medium, and computer device
CN111031332B (en) Data interaction method, device, server and storage medium
CN106209800B (en) Equipment Authority sharing method and apparatus
CN112073289B (en) Instant messaging control method and device
CN104601441A (en) Authority control method for group chat and instant messaging client
CN110737844B (en) Data recommendation method and device, terminal equipment and storage medium
CN105354489A (en) Right granting method and apparatus
CN109388620A (en) A kind of method and the first electronic equipment of striding equipment access data
CN106203167A (en) Application rights management method and device
CN113360226B (en) Data content processing method, device, terminal and storage medium
US20230017859A1 (en) Meeting control method and apparatus, device, and medium
CN113988021A (en) Content interaction method and device, electronic equipment and storage medium
CN105577523A (en) Message sending methods and apparatuses
CN114237454A (en) Project display method and device, electronic equipment, storage medium and product
CN105681261A (en) Security authentication method and apparatus
WO2024093815A1 (en) Data sharing method and apparatus, electronic device, and medium
CN106919679B (en) Log replay method, device and terminal applied to distributed file system
CN117193944A (en) Application running environment generation method and device, server and storage device
CN115904296B (en) Double-record screen-throwing signing service system
CN116595957A (en) Report construction page providing method, collaborative editing method and electronic equipment
CN109542644B (en) Application programming interface calling method and device
CN114221788B (en) Login method, login device, electronic equipment and storage medium
CN111723353A (en) Identity authentication method, device, terminal and storage medium based on face recognition
CN106528197B (en) Shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200612)