CN112533061A - Method and equipment for collaboratively shooting and editing video - Google Patents


Info

Publication number
CN112533061A
CN112533061A (application number CN202011371359.1A)
Authority
CN
China
Prior art keywords
user
video
information
shooting
video clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011371359.1A
Other languages
Chinese (zh)
Other versions
CN112533061B (en)
Inventor
钟名铖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yijiao Wenshu Technology Co ltd
Original Assignee
Beijing Yijiao Wenshu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yijiao Wenshu Technology Co ltd filed Critical Beijing Yijiao Wenshu Technology Co ltd
Priority to CN202011371359.1A priority Critical patent/CN112533061B/en
Publication of CN112533061A publication Critical patent/CN112533061A/en
Application granted granted Critical
Publication of CN112533061B publication Critical patent/CN112533061B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present application provides a method for collaboratively shooting and editing a video, comprising the following steps: establishing a video shooting space in response to a collaborative video shooting initiation operation performed by a first user; in response to a shooting operation and/or editing operation performed by the first user in the video shooting space, uploading at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space on a network device; and in response to a collaborative editing operation performed by the first user and at least one second user in the video shooting space on at least one piece of video clip information in the video clip library, generating target video information according to video clip sequence information determined from the at least one piece of video clip information. With the method and device of the present application, multiple users can jointly complete a video through collaborative shooting and editing, which maximizes video creation efficiency and makes video shooting more engaging.

Description

Method and equipment for collaboratively shooting and editing video
Technical Field
The present application relates to the field of communications, and in particular, to a technique for collaboratively shooting and editing a video.
Background
With the development of the times, video has become one of people's main forms of entertainment and leisure: using a mobile terminal, people can shoot video quickly and conveniently, anytime and anywhere, to record the scenes and information in front of them. In the prior art, some mobile applications, such as Douyin (TikTok) and Jianying (CapCut), allow people to edit the videos they shoot.
Disclosure of Invention
An object of the present application is to provide a method and a device for collaboratively shooting and editing a video.
According to one aspect of the present application, a method for collaboratively shooting and editing a video is provided, the method comprising:
establishing a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, wherein the video shooting space includes the first user and at least one second user;
uploading, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space on a network device; and
generating target video information according to video clip sequence information determined from at least one piece of video clip information in the video clip library, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on the at least one piece of video clip information, wherein the at least one piece of video clip information includes the at least one piece of first video clip information.
According to one aspect of the present application, a first user equipment for collaboratively shooting and editing a video is provided, the first user equipment being configured for:
establishing a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, wherein the video shooting space includes the first user and at least one second user;
uploading, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space on a network device; and
generating target video information according to video clip sequence information determined from at least one piece of video clip information in the video clip library, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on the at least one piece of video clip information, wherein the at least one piece of video clip information includes the at least one piece of first video clip information.
According to one aspect of the present application, an apparatus for collaboratively shooting and editing a video is provided, wherein the apparatus includes:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
establish a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, wherein the video shooting space includes the first user and at least one second user;
upload, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space on a network device; and
generate target video information according to video clip sequence information determined from at least one piece of video clip information in the video clip library, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on the at least one piece of video clip information, wherein the at least one piece of video clip information includes the at least one piece of first video clip information.
According to one aspect of the present application, a computer-readable medium is provided that stores instructions which, when executed, cause a system to:
establish a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, wherein the video shooting space includes the first user and at least one second user;
upload, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space on a network device; and
generate target video information according to video clip sequence information determined from at least one piece of video clip information in the video clip library, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on the at least one piece of video clip information, wherein the at least one piece of video clip information includes the at least one piece of first video clip information.
Compared with the prior art, the present application establishes a video shooting space in response to a collaborative video shooting initiation operation performed by a first user; each user in the video shooting space can upload video clip information obtained by shooting and editing to the corresponding video clip library, and the users can then jointly perform collaborative editing operations on at least one piece of video clip information in the video clip library to generate target video information. In this way, multiple users can complete a video together through collaborative shooting and editing, which maximizes video creation efficiency and makes video shooting more engaging.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flowchart of a method for collaboratively shooting and editing a video according to an embodiment of the present application;
FIG. 2 illustrates a structural diagram of a first user equipment for collaboratively shooting and editing a video according to an embodiment of the present application;
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory, in a computer-readable medium. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The devices referred to in this application include, but are not limited to, a user equipment, a network device, or a device formed by integrating a user equipment and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone or a tablet computer, and the mobile electronic product may run any operating system, such as Android or iOS. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user equipment, the network device, or a device formed by integrating a user equipment with a network device or a touch terminal, or a network device with a touch terminal, through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 shows a flowchart of a method for collaboratively shooting and editing a video according to an embodiment of the present application, the method including step S11, step S12, and step S13. In step S11, the first user equipment establishes a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, where the video shooting space includes the first user and at least one second user; in step S12, the first user equipment uploads at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space on the network device, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space; in step S13, the first user equipment generates target video information according to video clip sequence information determined from at least one piece of video clip information in the video clip library, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on the at least one piece of video clip information, wherein the at least one piece of video clip information includes the at least one piece of first video clip information.
In step S11, the first user device establishes a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, where the video shooting space includes the first user and at least one second user. In some embodiments, the video shooting space is a space where a first user and at least one second user perform video collaborative shooting editing and finally generate a target video, where the first user is an initiator of the video collaborative shooting editing and is also a creator of the video shooting space, and may invite the at least one second user to enter the video shooting space to participate in the video collaborative shooting editing. In some embodiments, in response to a collaborative video capturing initiation operation performed by a first user, a video capturing space is established according to identification information of at least one second user invited by the first user in the collaborative video capturing initiation operation, where the video capturing space includes the first user and the at least one second user. In some embodiments, in response to a collaborative video shooting initiating operation performed by a first user, generating a collaborative video shooting instruction, and sending the collaborative video shooting instruction to a network device, wherein the collaborative video shooting instruction includes identification information of one or more second users invited by the first user in the collaborative video shooting initiating operation; then receiving video shooting space establishment information returned by the network equipment, wherein the video shooting space establishment information comprises identification information of at least one second user in one or more second users; and then establishing a video shooting space according to the video shooting space establishment information, wherein the video shooting space comprises a first user and at least one second user. 
In some embodiments, in response to a collaborative video shooting instruction initiated by a first user, a video shooting space is established, the video shooting space includes the first user, and then in response to a user invitation operation performed by the first user for the video shooting space, at least one second user invited by the first user in the user invitation operation is joined to the video shooting space. In some embodiments, a video shooting space is established in response to a collaborative video shooting instruction initiated by a first user, the video shooting space includes the first user, then user invitation request information is generated in response to a user invitation operation performed by the first user for the video shooting space, and the user invitation request information is sent to a network device, wherein the user invitation request information includes identification information of one or more second users invited by the first user in the user invitation operation; then receiving user invitation feedback information returned by the network equipment, wherein the user invitation feedback information comprises identification information of at least one second user in the one or more second users; at least one second user is then added to the video capture space.
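The space-establishment flow described above (an initiation instruction naming the invitees, invitation feedback collected via the network device, then a space containing the initiator plus the accepting invitees) can be sketched as follows. This is an illustrative sketch only: the names `VideoShootingSpace` and `establish_space`, and the in-memory modelling of the network device's feedback as an `accepted` set, are assumptions and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VideoShootingSpace:
    space_id: str
    first_user: str                                   # the initiator / creator
    second_users: list = field(default_factory=list)  # invited collaborators

def establish_space(space_id: str, first_user: str,
                    invited: list, accepted: set) -> VideoShootingSpace:
    """Build the space from the initiator plus every invitee who accepted.

    `accepted` stands in for the invitation-feedback information returned
    by the network device; invitees who declined (or never replied) are
    simply excluded from the space.
    """
    members = [u for u in invited if u in accepted]
    return VideoShootingSpace(space_id, first_user, members)

space = establish_space("room-1", "alice", ["bob", "carol", "dave"], {"bob", "carol"})
print(space.second_users)  # ['bob', 'carol']
```

Note the invitation order is preserved, so the space lists accepting invitees in the order the first user invited them.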
In step S12, the first user equipment, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, uploads at least one piece of first video clip information obtained by the first user to the video clip library corresponding to the video shooting space on the network device. In some embodiments, the shooting operation may obtain the first video clip through the camera of the first user equipment, or by selecting it locally on the first user equipment (e.g., from an album); the editing operation may produce the first video clip by editing a video clip shot on the first user equipment, or by editing a video clip selected from the first user equipment. In some embodiments, the first video clip obtained by the first user is uploaded to the video clip library corresponding to the video shooting space on the network device, and the video clip libraries corresponding to different video shooting spaces are independent of each other. In some embodiments, the video editing operations include, but are not limited to, filters, speed change, beautification, and the like. In some embodiments, besides the first user, every second user in the video shooting space may upload video clips obtained by shooting and editing to the video clip library, or only those second users in the video shooting space who hold video shooting-and-editing permission may do so. In some embodiments, the video shooting-and-editing permission of each second user in the video shooting space may be specified by the first user or set by default by the network device.
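Step S12's clip upload can be sketched as below. The per-space libraries being "independent of each other" is modelled with a dict keyed by space id; all names (`clip_libraries`, `upload_clip`) and the clip record's fields are hypothetical choices for illustration, not part of the patent.

```python
from collections import defaultdict

# Stands in for the network device's server-side storage: one independent
# clip library per video shooting space, keyed by space id.
clip_libraries: dict = defaultdict(list)

def upload_clip(space_id: str, user: str, clip_id: str,
                source: str, edits: list) -> dict:
    """Record a clip the user obtained either by shooting ("camera") or by
    selecting locally ("album"), optionally after edit operations such as
    filters, speed change, or beautification."""
    clip = {"clip_id": clip_id, "uploader": user, "source": source, "edits": edits}
    clip_libraries[space_id].append(clip)
    return clip

upload_clip("room-1", "alice", "clip-001", "camera", ["filter"])
upload_clip("room-2", "bob", "clip-001", "album", [])
# The two spaces' libraries stay independent, even with identical clip ids:
print(len(clip_libraries["room-1"]))  # 1
```

The same clip id can exist in two different spaces without conflict, which is the practical consequence of keeping each space's library separate.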
In step S13, the first user equipment, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on at least one piece of video clip information in the video clip library, generates target video information according to video clip sequence information determined from the at least one piece of video clip information, wherein the at least one piece of video clip information includes the at least one piece of first video clip information. In some embodiments, identification information of at least one video clip in the video clip library (e.g., the name of the video clip, the ID of the video clip, the cover picture of the video clip, a screenshot of its first frame, etc.) is presented in the video shooting space, where the at least one video clip includes at least one first video clip shot and edited by the first user and uploaded to the video clip library, and may additionally include at least one second video clip shot and edited by a second user and uploaded to the video clip library.
In some embodiments, the first user and the at least one second user may each perform collaborative editing operations on at least one piece of video clip information in the video clip library. The operations processed by the first user equipment may include both the collaborative editing operations performed locally by the first user on the first user equipment and editing-operation-related information sent by the at least one second user, generated from the collaborative editing operations performed by the at least one second user on the corresponding second user equipment, where the editing-operation-related information includes, but is not limited to, the operation object information, operation content information, and operation state information of those operations. Alternatively, the processed operations may include only the collaborative editing operations performed locally by the first user on the first user equipment, or only the editing-operation-related information sent by the at least one second user. In some embodiments, the collaborative editing operation refers to multiple users in the video shooting space jointly selecting a number of video clips from all the video clips in the video clip library of the video shooting space and splicing them together in order, forming one or more ordered video clip sequences.
In some embodiments, every second user in the video shooting space may perform collaborative editing operations on at least one piece of video clip information in the video clip library, or only those second users in the video shooting space who hold collaborative editing permission may do so, where all pieces of video clip information may share one collaborative editing permission, or different pieces of video clip information may each correspond to a different collaborative editing permission. In some embodiments, the collaborative editing permission of each second user in the video shooting space may be specified by the first user or set by default by the network device. In some embodiments, in the collaborative editing operation, the first user or the at least one second user determines one or more target video clips from the at least one video clip in the video clip library and sets the position of each target video clip in the video clip sequence, for example placing a target video clip before or after a certain video clip already in the sequence; a final target video is then generated from the one or more ordered video clips in the sequence. In some embodiments, the video clip sequence may be a video stream presented on a timeline, and the first user or the at least one second user may drag a video clip from the video clip library onto the timeline currently presented in the video shooting space to build up the video clip sequence. In some embodiments, the first video clip and the second video clip are short video clips whose durations are less than or equal to a predetermined first duration threshold, and the target video is a short video whose duration is less than or equal to a predetermined second duration threshold.
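The sequence-building step described above (collaborators place chosen clips at timeline positions, the ordered sequence is spliced into the target video, and the target video must stay under a duration threshold) can be sketched as follows. The function name, the clip-to-duration mapping, and the concrete durations are all invented for illustration; real splicing would of course operate on video data rather than ids.

```python
def build_sequence(library: dict, placements: list, max_duration: float) -> list:
    """Order the chosen clips by their timeline position and enforce the
    duration threshold the patent mentions for the target short video.

    `library` maps clip id -> duration in seconds; `placements` is a list
    of (clip_id, timeline_position) pairs produced by the users' drag
    operations.
    """
    ordered = [clip for clip, _ in sorted(placements, key=lambda p: p[1])]
    total = sum(library[c] for c in ordered)
    if total > max_duration:
        raise ValueError(f"sequence runs {total}s, over the {max_duration}s limit")
    return ordered

library = {"clip-a": 10.0, "clip-b": 6.5, "clip-c": 8.0}   # clip id -> duration (s)
# e.g. one user drags clip-c to slot 0, another drags clip-a to slot 1:
sequence = build_sequence(library, [("clip-a", 1), ("clip-c", 0)], max_duration=60.0)
print(sequence)  # ['clip-c', 'clip-a']
```

Sorting by timeline position rather than by insertion time is what lets several users place clips concurrently and still converge on one agreed order.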
In some embodiments, the target video information may include collaborative editing process data for the first user to collaboratively edit with at least one second user to generate the video clip sequence, or may further include editing process data for each video clip in the video clip sequence, so as to facilitate tracing and secondary modification at any time.
According to the method and device of the present application, a video shooting space can be established in response to a collaborative video shooting initiation operation performed by a first user; each user in the video shooting space can upload video clip information obtained by shooting and editing to the corresponding video clip library; and the users can then jointly perform collaborative editing operations on at least one piece of video clip information in the video clip library to generate target video information. In this way, multiple users can complete a video together through collaborative shooting and editing, which maximizes video creation efficiency and makes video shooting more engaging.
In some embodiments, the at least one piece of video clip information further includes at least one piece of second video clip information obtained by the at least one second user through shooting editing. In some embodiments, the at least one video clip may further include at least one second video clip uploaded to the video clip library after being edited by at least one second user, on the basis of at least one first video clip uploaded to the video clip library after being shot and edited by the first user, or may also include only at least one second video clip uploaded to the video clip library after being shot and edited by at least one second user.
In some embodiments, step S11 includes step S111 (not shown). In step S111, in response to a collaborative video shooting initiation operation performed by the first user, the first user equipment establishes a video shooting space according to the identification information of at least one second user invited by the first user in the collaborative video shooting initiation operation, where the video shooting space includes the first user and the at least one second user. In some embodiments, the first user invites at least one second user while initiating collaborative video shooting, and the first user equipment establishes a video shooting space according to the identification information (e.g., user name, user ID) of the at least one second user, so that the video shooting space directly includes the at least one second user invited by the first user. In some embodiments, other second users may also apply to join the video shooting space on their own; they may enter the video shooting space automatically after applying, or may enter only after the first user approves their application.
In some embodiments, the step S111 includes: the first user device, in response to a collaborative video shooting initiating operation performed by a first user, generates a collaborative video shooting instruction and sends the collaborative video shooting instruction to a network device, wherein the collaborative video shooting instruction includes identification information of one or more second users invited by the first user in the collaborative video shooting initiating operation; receives video shooting space establishment information returned by the network device, wherein the video shooting space establishment information includes identification information of at least one second user among the one or more second users; and establishes a video shooting space according to the video shooting space establishment information, wherein the video shooting space includes the first user and the at least one second user. In some embodiments, the first user invites at least one second user while initiating collaborative video shooting. The first user device generates a collaborative video shooting instruction according to the identification information (e.g., a user name or a user ID) of the at least one second user and sends the instruction to the network device. The network device generates corresponding user invitation information according to the instruction and sends it to the one or more second users invited by the first user. After receiving invitation feedback information from the one or more second users (e.g., confirmation of acceptance or rejection of the invitation; no feedback before a timeout is treated as rejection), the network device generates video shooting space establishment information according to the identification information of the at least one second user who confirmed acceptance and sends it to the first user. The first user device then confirms, from the received video shooting space establishment information, the identification information of the at least one second user who accepted the invitation, and establishes a video shooting space that directly includes those second users.
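The invitation flow above (invite, collect feedback, treat a timeout as rejection, then build the space from the accepters) can be sketched as follows. This is a minimal illustration: the class name, method names, and status strings are assumptions, not part of the disclosed embodiment.

```python
from dataclasses import dataclass, field

ACCEPTED, REJECTED, PENDING = "accepted", "rejected", "pending"

@dataclass
class InvitationTracker:
    """Tracks invitation feedback on the network-device side (assumed name)."""
    initiator_id: str
    feedback: dict = field(default_factory=dict)  # second-user id -> status

    def invite(self, user_ids):
        for uid in user_ids:
            self.feedback[uid] = PENDING

    def record(self, user_id, accepted):
        self.feedback[user_id] = ACCEPTED if accepted else REJECTED

    def on_timeout(self):
        # No feedback before the deadline is treated as rejecting the invitation.
        for uid, status in self.feedback.items():
            if status == PENDING:
                self.feedback[uid] = REJECTED

    def space_establishment_info(self):
        # Only second users who confirmed acceptance are included in the space.
        accepters = [u for u, s in self.feedback.items() if s == ACCEPTED]
        return {"members": [self.initiator_id] + accepters}
```

For example, if user A invites b, c, and d, b accepts, c rejects, and d never responds before the timeout, the resulting space contains only A and b.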
In some embodiments, the step S11 includes a step S112 (not shown) and a step S113 (not shown). In step S112, the first user device establishes a video shooting space in response to a collaborative video shooting instruction initiated by a first user, where the video shooting space includes the first user; in step S113, in response to a user invitation operation performed by the first user with respect to the video shooting space, the first user device joins at least one second user invited by the first user in the user invitation operation to the video shooting space. In some embodiments, after the first user initiates collaborative video shooting, a video shooting space is established directly and includes the first user by default; the first user then performs a user invitation operation with respect to the video shooting space to invite at least one second user, and the first user device joins the invited at least one second user directly to the video shooting space. In some embodiments, other second users may also apply to join the video shooting space on their own, and may either enter the video shooting space automatically after applying or enter it only after the first user approves the application.
In some embodiments, the step a2 includes: the first user device, in response to a user invitation operation performed by the first user with respect to the video shooting space, generates user invitation request information and sends the user invitation request information to a network device, wherein the user invitation request information includes identification information of one or more second users invited by the first user in the user invitation operation; receives user invitation feedback information returned by the network device, wherein the user invitation feedback information includes identification information of at least one second user among the one or more second users; and joins the at least one second user to the video shooting space. In some embodiments, after the first user initiates collaborative video shooting, a video shooting space is established directly and includes the first user by default. The first user then performs a user invitation operation with respect to the video shooting space to invite one or more second users. The first user device generates user invitation request information according to the identification information of the one or more second users and sends it to the network device. The network device generates invitation information for each of the one or more second users according to the user invitation request information and sends the invitation information to each of them. After receiving invitation feedback information from the one or more second users (e.g., confirmation of acceptance or rejection of the invitation; no feedback before a timeout is treated as rejection), the network device generates user invitation feedback information according to the identification information of the at least one second user who confirmed acceptance and returns it to the first user device. The first user device confirms, from the received user invitation feedback information, the identification information of the at least one second user who accepted the invitation, and joins those second users to the video shooting space.
In some embodiments, the method further comprises: the first user device establishes a real-time voice channel for the first user and the at least one second user in the video shooting space. In some embodiments, a channel for real-time voice communication between the first user and the at least one second user is established in the video shooting space; after the channel is established, the first user and the at least one second user may be added to the channel automatically, or may need to join the channel manually according to their own voice-communication needs.
In some embodiments, if the state of the first user in the video shooting space is a video collaborative editing state, the method further comprises: the first user device presents at least one piece of video clip information in the video clip library on a collaborative editing interface in the video shooting space. In some embodiments, the state of the first user or of each second user in the video shooting space includes, but is not limited to, a video collaborative editing state, a video shooting state, a video clip editing state for certain target video clip information, a conference state (e.g., currently performing voice communication with other users over a pre-established real-time voice channel), and the like. In some embodiments, if the state of the first user in the video shooting space is the video collaborative editing state, what is currently presented in the video shooting space is the collaborative editing interface, and identification information of at least one piece of video clip information in the video clip library (e.g., the name of the video clip, the ID of the video clip, the cover image of the video clip, a first-frame screenshot of the video clip, and the like) is presented on the collaborative editing interface.
In some embodiments, the method further comprises: the first user device, in response to a first predetermined trigger operation performed by the first user on the collaborative editing interface, switches the state of the first user in the video shooting space to a video shooting state and presents a shooting interface in the video shooting space. In some embodiments, the first predetermined trigger operation may be the first user clicking a predetermined button (for example, a "record" button) on the collaborative editing interface; at this time, the state of the first user in the video shooting space is switched from the video collaborative editing state to the video shooting state, and the display jumps from the collaborative editing interface to the shooting interface. In some embodiments, when the first user enters the video shooting space, if no video clip exists in the video clip library, the collaborative editing interface is presented by default; otherwise, if at least one video clip exists in the video clip library, the shooting interface is presented by default.
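The per-user states and the first predetermined trigger operation described above can be sketched as a small state machine. The enum members, class name, and the default-interface rule encoded below are illustrative assumptions drawn from this description, not the disclosed implementation.

```python
from enum import Enum

class UserState(Enum):
    """Possible states of a user in the video shooting space (assumed names)."""
    COLLAB_EDITING = "collaborative_editing"
    SHOOTING = "shooting"
    CLIP_EDITING = "clip_editing"   # tied to one target video clip
    CONFERENCE = "conference"       # voice communication over the channel

class SpaceSession:
    def __init__(self, clip_library):
        self.clip_library = clip_library
        # Per the text: empty clip library -> collaborative editing interface
        # by default; otherwise the shooting interface is presented by default.
        self.state = (UserState.COLLAB_EDITING if not clip_library
                      else UserState.SHOOTING)

    def on_record_button(self):
        # First predetermined trigger: switch from the collaborative editing
        # interface to the shooting interface.
        if self.state is UserState.COLLAB_EDITING:
            self.state = UserState.SHOOTING
```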
In some embodiments, the step S12 includes: the first user equipment responds to video shooting operation executed by the first user on the shooting interface, and uploads at least one piece of first video clip information obtained by shooting of the first user to a video clip library corresponding to the video shooting space in network equipment. In some embodiments, the first user performs video shooting on the shooting interface, and uploads the original video clip obtained by shooting to the video clip library directly, or may also perform video editing (for example, operations such as filtering, doubling speed, beautifying, and the like) on the original video clip obtained by shooting after the video shooting is completed, and upload the edited video clip to the video clip library. In some embodiments, a relay shooting mode may be used, in which a first user takes turns with each of a part of at least one second user to shoot one video segment, or a dominant mode may be used, in which all video segments are shot by the first user or one of the at least one second user, or a synchronous shooting mode may be used, in which a plurality of video segments are shot by the first user and a part of at least one second user at the same time, current shooting interfaces of the first user and one or more second users performing shooting operations at the same time may be presented in the video shooting space in real time, a default presentation of the current shooting interface of one of the users may be designated, and the first user may switch to present current shooting interfaces of other users through a predetermined switching operation (e.g., gesture sliding, etc.), or, current capture interfaces of multiple users may also be presented simultaneously in the video capture space.
In some embodiments, if one or more target second users of the at least one second user have entered the shooting interface, the method further comprises: the first user device sends the current shooting picture of the shooting interface to the one or more target second users in real time, so that the current shooting picture is presented in real time in the video shooting space of the second user devices of the one or more target second users. In some embodiments, one or more target second users of the at least one second user may choose to enter the shooting interface of the first user, and at this time the current shooting picture of the first user's shooting interface is synchronized to the one or more target second users in real time. In some embodiments, the first user may communicate with the one or more target second users via a previously established real-time voice channel, which may also be temporarily muted so that the first user can focus on recording the video clip. In some embodiments, the first user may also communicate with the one or more target second users via a newly established additional real-time voice channel, distinct from the previously established real-time voice channel.
In some embodiments, the method further comprises: the first user device presents the identification information of the one or more target second users on the shooting interface. In some embodiments, the first user can see on the shooting interface which target second users are currently watching the recording; the identification information of a target second user includes, but is not limited to, a user name, a user ID, a user avatar, and the like.
In some embodiments, the method further comprises: the first user device, in response to a second predetermined trigger operation performed by the first user on the collaborative editing interface with respect to target video clip information among the at least one piece of video clip information, switches the state of the first user in the video shooting space to a video clip editing state for the target video clip information and presents the editing interface of the target video clip information in the video shooting space. In some embodiments, the second predetermined trigger operation may be the first user clicking the identification information of a certain target video clip among the identification information of the at least one piece of video clip information currently presented on the collaborative editing interface; at this time, the state of the first user in the video shooting space is switched from the video collaborative editing state to the video clip editing state for that target video clip, and the display jumps from the collaborative editing interface to the editing interface of the target video clip.
In some embodiments, the step S12 includes: the first user device, in response to a video editing operation performed by the first user on the editing interface with respect to the target video clip information, uploads the edited target video clip information to the video clip library. In some embodiments, the first user may, on the editing interface, perform video editing (e.g., filters, speed adjustment, beautification, and the like) on a first video clip shot by the first user, and may also perform video editing on a second video clip shot by the at least one second user, and then upload the edited video clip to the video clip library to replace the pre-edit version. In some embodiments, the first user may only perform video editing on the first video clips that the first user shot.
In some embodiments, if one or more target second users of the at least one second user have entered the editing interface, the video editing operation includes one or more pieces of video editing action information; wherein the method further comprises: the first user device sends each of the one or more pieces of video editing action information to the one or more target second users in real time, so that the target video clip information and the one or more pieces of video editing action information are presented in real time in the video shooting space of the second user devices of the one or more target second users. In some embodiments, one or more target second users of the at least one second user may choose to enter the editing interface of the first user's target video clip. At this time, each piece of video editing action information corresponding to the video editing operation performed by the first user on the target video clip in the editing interface is sent to the one or more target second users in real time, so that the target video clip and the video editing actions performed on it are presented in real time in the video shooting space of the second user devices, thereby synchronizing in real time the video editing operation the first user is currently performing on the target video clip. In some embodiments, the video editing action information includes, but is not limited to, operation object information, operation content information, and operation state information of the video editing action.
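The real-time synchronization of editing actions can be sketched as serializing each action's three fields (operation object, operation content, and operation state) and fanning the same payload out to the watching target second users. The message schema and function names below are assumptions for illustration, not the disclosed protocol.

```python
import json

def make_edit_action(editor_id, clip_id, content, state):
    """Build one video-editing-action message with the three fields the text names."""
    return {
        "object": {"clip_id": clip_id},  # operation object information
        "content": content,              # e.g. {"op": "filter", "name": "warm"}
        "state": state,                  # e.g. {"progress": "applied"}
        "editor": editor_id,
    }

def fan_out(action, target_user_ids):
    """Serialize once, then deliver the same payload to every watching user."""
    payload = json.dumps(action)
    return {uid: payload for uid in target_user_ids}
```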
In some embodiments, the first user may communicate with the one or more target second users via a previously established real-time voice channel, which may also be temporarily muted so that the first user can focus on editing the video clip. In some embodiments, the first user may also communicate with the one or more target second users via a newly established additional real-time voice channel, distinct from the previously established real-time voice channel.
In some embodiments, the method further comprises: and the first user equipment presents the identification information of the one or more target second users on the editing interface. In some embodiments, the first user may see in the editing interface which target second users are currently viewing the edits, and the identification information of the target second users includes, but is not limited to, user name, user ID, user avatar, and the like.
In some embodiments, the method further comprises: the first user device presents, on the collaborative editing interface, the identification information of each of the at least one second user and the state information of each second user in the video shooting space. In some embodiments, the identification information of each second user includes, but is not limited to, a user name, a user ID, a user avatar, and the like. In some embodiments, the state of each second user in the video shooting space includes, but is not limited to, a video collaborative editing state, a video shooting state, a video clip editing state for certain target video clip information, a conference state (e.g., currently performing voice communication with other users over a pre-established real-time voice channel), and the like. In some embodiments, clicking the identification information of a second user allows viewing the identification information of one or more video clips that the second user has finished shooting in the video shooting space, or of one or more video clips that the second user has finished editing (e.g., the name of a video clip, the ID of a video clip, a cover image of a video clip, a first-frame screenshot of a video clip, and the like); the first user may click the identification information of a video clip to enter the play page of that video clip and start playing it.
In some embodiments, in response to a third predetermined trigger operation performed by the first user with respect to the identification information of a target second user of the at least one second user: if the state of the target second user in the video shooting space is a video shooting state, the current shooting picture of the target second user is received and presented in real time; and if the state of the target second user in the video shooting space is a video clip editing state for target video clip information, the target video clip information is presented, and one or more pieces of video editing action information performed by the target second user on the target video clip information are received and presented in real time. In some embodiments, the third predetermined trigger operation may be the first user clicking the identification information of the target second user among the identification information of the at least one second user currently presented on the collaborative editing interface. At this time, the current state of the target second user in the video shooting space is detected. If the current state of the target second user is a video shooting state, the video shooting interface of the target second user is entered, and the current shooting picture sent by the target second user is received and presented in that interface in real time. If the current state of the target second user is a video clip editing state for a certain target video clip, the video editing interface of that target video clip is entered, and one or more pieces of video editing action information sent by the target second user and performed on the target video clip information are received in that interface in real time, so as to synchronize in real time the video editing operation the target second user is currently performing on the target video clip.
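The state-dependent behavior of the third predetermined trigger operation can be sketched as a small dispatch on the target second user's current state. The state strings and return values are illustrative assumptions.

```python
def view_for_target(target_state, target_clip_id=None):
    """Pick which live view to open after the first user clicks a target second user."""
    if target_state == "shooting":
        # Subscribe to the target user's live capture frames.
        return {"view": "live_capture"}
    if target_state == "clip_editing":
        # Show the target clip plus the stream of editing actions applied to it.
        return {"view": "live_editing", "clip_id": target_clip_id}
    # Other states (collaborative editing, conference, ...) open no live view here.
    return {"view": "none"}
```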
In some embodiments, the step S13 includes: the first user device determines video clip sequence information composed of one or more pieces of video clip information among the at least one piece of video clip information, in response to a first collaborative editing operation performed by the first user on the at least one piece of video clip information in the video clip library on the collaborative editing interface and/or received second collaborative editing operation related information corresponding to one or more second users of the at least one second user, and generates target video information according to the video clip sequence information. In some embodiments, the first user and the at least one second user may perform collaborative editing operations on the at least one piece of video clip information in the video clip library. The operations taken into account may include both the collaborative editing operation performed locally by the first user on the first user device and the editing operation related information sent by the at least one second user and generated according to the collaborative editing operation performed by the at least one second user on the corresponding second user device; or only the former; or only the latter.
In some embodiments, the editing operation related information includes, but is not limited to, operation object information, operation content information, and operation state information of a collaborative editing operation performed by the at least one second user on the corresponding second user device. In some embodiments, the editing operation related information corresponding to the collaborative editing operation performed by the first user on the at least one piece of video clip information in the video clip library is likewise synchronized to each of the at least one second user. In some embodiments, in the collaborative editing operation, the first user or the at least one second user determines one or more target video clips from the at least one video clip in the video clip library and sets the position of each target video clip in the video clip sequence, for example, placing a certain target video clip before or after a certain video clip in the sequence; a final target video is then generated according to the one or more ordered video clips in the video clip sequence. In some embodiments, the video clip sequence may be a video stream presented on a timeline, and the first user or the at least one second user may drag a video clip from the video clip library into the timeline currently presented in the video shooting space to form the video clip sequence.
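The timeline drag operation described above (placing a clip before or after an existing clip in the sequence) can be sketched as a list insertion. Function and parameter names are assumptions for illustration.

```python
def insert_clip(sequence, clip_id, anchor_id=None, before=False):
    """Insert clip_id into the sequence relative to anchor_id (append if no anchor)."""
    # Dragging a clip that is already in the sequence moves it rather than copying it.
    seq = [c for c in sequence if c != clip_id]
    if anchor_id is None or anchor_id not in seq:
        seq.append(clip_id)
    else:
        i = seq.index(anchor_id)
        # before=True places the clip in front of the anchor; otherwise after it.
        seq.insert(i if before else i + 1, clip_id)
    return seq
```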
In some embodiments, the second collaborative editing operation related information includes at least one of operation object information, operation content information, and operation state information of a second collaborative editing operation performed by each of the one or more second users with respect to the at least one piece of video clip information. In some embodiments, the operation object information includes, but is not limited to, a certain video clip, the current screen, and the like; the operation content information includes, but is not limited to, the drag process of dragging a certain video clip to a certain position in the video clip sequence (e.g., before or after a certain video clip in the sequence), the scroll process of scrolling the current screen by a number of pixels, the page-turn process of turning the current screen by a number of pages, and the like; and the operation state information includes, but is not limited to, which video clip is currently being dragged and its position in the video clip sequence, the position to which the current screen has been scrolled, the page to which the current screen has been turned, and the like.
In some embodiments, the determining video clip sequence information composed of one or more pieces of video clip information among the at least one piece of video clip information, and generating the target video information according to the video clip sequence information, includes: determining a plurality of pieces of video clip sequence information, each composed of one or more pieces of video clip information among the at least one piece of video clip information, and generating, for each piece of video clip sequence information, target video information corresponding to that video clip sequence information. In some embodiments, in order to accommodate different ideas of the first user and the at least one second user, a plurality of different video clip sequences may be obtained according to the collaborative editing operations performed by the first user or by each second user. The different video clip sequences may correspond to different sets of video clips, or may correspond to the same set of video clips with the clips in a different order. For each video clip sequence, a target video corresponding to that sequence is generated, so that finally a plurality of target videos of different versions may be generated.
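Generating one target video per clip sequence, so that sequences with the same clips in different orders yield different versions, can be sketched as follows. The names are assumptions, and a real implementation would concatenate the actual clips rather than return metadata.

```python
def generate_targets(sequences):
    """Produce one target-video record per ordered clip sequence."""
    # Different sequences may share clips but order them differently,
    # yielding distinct versions of the final target video.
    return [{"version": i, "clips": list(seq)}
            for i, seq in enumerate(sequences, 1)]
```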
In some embodiments, the generating target video information from the video clip sequence information comprises: generating target video information according to the video clip sequence information in response to a predetermined video collaborative editing end condition. In some embodiments, the predetermined video collaborative editing end condition may be the end of a countdown, may be the initiator of the collaborative video shooting and editing (that is, the first user, who is also the creator of the video shooting space) ending the editing, or may be any other end condition agreed in advance by the first user and the at least one second user.
In some embodiments, the generating target video information according to the video clip sequence information in response to a predetermined video collaborative editing end condition includes: and responding to the cooperative video shooting ending operation executed by the first user, and generating target video information according to the video clip sequence information. In some embodiments, the collaborative video capturing end operation may be that the first user clicks a predetermined button (for example, an "end collaborative editing" button) on the collaborative editing interface, at which time, corresponding video collaborative editing end information is generated and sent to at least one second user in the video capturing space via the network device, so as to end the video collaborative capturing and editing in the video capturing space of the second user device of each second user.
Fig. 2 shows a structural diagram of a first user device for collaboratively shooting and editing video according to an embodiment of the present application, which includes a first module 11, a second module 12, and a third module 13. The first module 11 is configured to establish a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, where the video shooting space includes the first user and at least one second user; the second module 12 is configured to upload, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space in a network device; the third module 13 is configured to generate target video information according to video clip sequence information determined from at least one piece of video clip information in the video clip library, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on the at least one piece of video clip information, where the at least one piece of video clip information includes the at least one piece of first video clip information.
The first module 11 is configured to establish a video shooting space in response to a collaborative video shooting initiation operation performed by a first user, where the video shooting space includes the first user and at least one second user. In some embodiments, the video shooting space is a space in which the first user and the at least one second user perform collaborative video shooting and editing and finally generate a target video; the first user is the initiator of the collaborative video shooting and editing and also the creator of the video shooting space, and may invite the at least one second user to enter the video shooting space to participate in the collaborative shooting and editing. In some embodiments, in response to the collaborative video shooting initiation operation performed by the first user, the video shooting space is established according to the identification information of the at least one second user invited by the first user in that operation, where the video shooting space includes the first user and the at least one second user.
In some embodiments, in response to a collaborative video shooting initiating operation performed by a first user, generating a collaborative video shooting instruction, and sending the collaborative video shooting instruction to a network device, wherein the collaborative video shooting instruction includes identification information of one or more second users invited by the first user in the collaborative video shooting initiating operation; then receiving video shooting space establishment information returned by the network equipment, wherein the video shooting space establishment information comprises identification information of at least one second user in one or more second users; and then establishing a video shooting space according to the video shooting space establishment information, wherein the video shooting space comprises a first user and at least one second user. In some embodiments, in response to a collaborative video shooting instruction initiated by a first user, a video shooting space is established, the video shooting space includes the first user, and then in response to a user invitation operation performed by the first user for the video shooting space, at least one second user invited by the first user in the user invitation operation is joined to the video shooting space. 
In some embodiments, a video shooting space is established in response to a collaborative video shooting instruction initiated by a first user, the video shooting space includes the first user, then user invitation request information is generated in response to a user invitation operation performed by the first user for the video shooting space, and the user invitation request information is sent to a network device, wherein the user invitation request information includes identification information of one or more second users invited by the first user in the user invitation operation; then receiving user invitation feedback information returned by the network equipment, wherein the user invitation feedback information comprises identification information of at least one second user in the one or more second users; at least one second user is then added to the video capture space.
The second module 12 is configured to upload, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space in a network device. In some embodiments, the shooting operation may be obtaining the first video clip through shooting with a camera on the first user device, or selecting the first video clip locally on the first user device (e.g., from an album); the editing operation may be obtaining the first video clip after editing a video clip shot by the first user device, or after editing a video clip selected from the first user device. In some embodiments, the first video clip obtained by the first user is uploaded to the video clip library corresponding to the video shooting space in the network device, and the video clip libraries corresponding to different video shooting spaces are independent of each other. In some embodiments, video editing operations include, but are not limited to, filters, speed adjustment, beautification, and the like. In some embodiments, in addition to the first user, each second user in the video shooting space may upload video clips obtained by shooting and editing to the video clip library, or only those second users in the video shooting space who have video shooting-and-editing authority may do so. In some embodiments, the video shooting-and-editing authority of each second user in the video shooting space may be specified by the first user or set by default by the network device.
A third module 13, configured to, in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space on at least one piece of video clip information in the video clip library, generate target video information according to video clip sequence information determined from the at least one piece of video clip information, where the at least one piece of video clip information includes the at least one piece of first video clip information and at least one piece of second video clip information obtained by the at least one second user through shooting and editing. In some embodiments, identification information of at least one video clip in the video clip library (e.g., the name of the video clip, the ID of the video clip, the cover picture of the video clip, a screenshot of the first frame of the video clip, etc.) is presented in the video shooting space, where the at least one video clip includes the at least one first video clip uploaded to the video clip library after being shot and edited by the first user, and may further include at least one second video clip uploaded to the video clip library after being shot and edited by a second user.
In some embodiments, the collaborative editing operation performed by the first user and the at least one second user on at least one piece of video clip information in the video clip library may include both the collaborative editing operation performed locally by the first user on the first user device and editing operation related information sent by the at least one second user and generated according to the collaborative editing operation performed by the at least one second user on the corresponding second user device, where the editing operation related information includes, but is not limited to, operation object information, operation content information, and operation state information of the collaborative editing operation performed locally by the at least one second user on the corresponding second user device; alternatively, it may include only the collaborative editing operation performed locally by the first user on the first user device, or only the editing operation related information sent by the at least one second user. In some embodiments, the collaborative editing operation refers to a plurality of users in the video shooting space collaboratively selecting a plurality of video clips from all video clips in the video clip library of the video shooting space and splicing the plurality of video clips together in sequence to form one or more ordered sequences of video clips.
In some embodiments, each second user in the video shooting space may perform a collaborative editing operation on at least one piece of video clip information in the video clip library, or only second users having collaborative editing authority in the video shooting space may do so, where all pieces of video clip information may correspond to one collaborative editing authority, or different pieces of video clip information may correspond to different collaborative editing authorities. In some embodiments, the collaborative editing authority of each second user in the video shooting space may be specified by the first user or set by default by the network device. In some embodiments, in the collaborative editing operation, the first user or the at least one second user determines one or more target video clips from the at least one video clip in the video clip library and sets the position of each target video clip in the video clip sequence, for example, setting a certain target video clip before or after a certain video clip in the video clip sequence, and a final target video is then generated according to the one or more ordered video clips in the video clip sequence. In some embodiments, the video clip sequence may be a video stream presented on a timeline, and the first user or the at least one second user may drag, one at a time, a video clip from the at least one video clip in the video clip library into the timeline currently presented in the video shooting space to form the video clip sequence. In some embodiments, the first video clip and the second video clip are short video clips whose video duration is less than or equal to a predetermined first duration threshold, and the target video is a short video whose video duration is less than or equal to a predetermined second duration threshold.
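The sequencing step above, in which users pick target clips and set each clip's position before or after another clip on the timeline, can be sketched as follows; the helper name and clip IDs are hypothetical, and string concatenation stands in for real video concatenation:

```python
def insert_clip(sequence, clip_id, after=None):
    """Insert clip_id into the timeline, after the named clip or at the front."""
    position = 0 if after is None else sequence.index(after) + 1
    sequence.insert(position, clip_id)

sequence = []
insert_clip(sequence, "clip-1")                  # first user drags clip-1 onto the timeline
insert_clip(sequence, "clip-3", after="clip-1")  # a second user places clip-3 after it
insert_clip(sequence, "clip-2", after="clip-1")  # clip-2 is set between clip-1 and clip-3

# Stand-in for generating the target video from the ordered sequence.
target_video = "+".join(sequence)
```

Each drag lands one clip at a chosen position, so the final target video follows the collaboratively agreed order, not the upload order.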
In some embodiments, the target video information may include collaborative editing process data for the first user to collaboratively edit with at least one second user to generate the video clip sequence, or may further include editing process data for each video clip in the video clip sequence, so as to facilitate tracing and secondary modification at any time.
According to the method and the device, a video shooting space is established in response to a collaborative video shooting initiating operation executed by the first user; each user in the video shooting space can upload video clip information obtained by shooting and editing to the corresponding video clip library, and target video information can then be generated by the users performing a collaborative editing operation on at least one piece of video clip information in the video clip library, so that a plurality of users can jointly complete a video through collaborative shooting and editing, which improves video creation efficiency and makes video shooting more engaging.
In some embodiments, the at least one piece of video clip information further includes at least one piece of second video clip information obtained by the at least one second user through shooting and editing. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the one-to-one module 11 includes a one-to-one module 111 (not shown). A one-to-one module 111, configured to respond to a collaborative video shooting initiating operation performed by a first user, and establish a video shooting space according to identification information of at least one second user invited by the first user in the collaborative video shooting initiating operation, where the video shooting space includes the first user and the at least one second user. Here, the specific implementation manner of the one-to-one module 111 is the same as or similar to the embodiment related to step S111 in fig. 1, and therefore, the detailed description is omitted, and the detailed description is incorporated herein by reference.
In some embodiments, the one-to-one module 111 is configured to: responding to a collaborative video shooting initiating operation executed by a first user, generating a collaborative video shooting instruction, and sending the collaborative video shooting instruction to network equipment, wherein the collaborative video shooting instruction comprises identification information of one or more second users invited by the first user in the collaborative video shooting initiating operation; receiving video shooting space establishment information returned by the network equipment, wherein the video shooting space establishment information comprises identification information of at least one second user in the one or more second users; and establishing a video shooting space according to the video shooting space establishing information, wherein the video shooting space comprises the first user and at least one second user. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the one-to-one module 11 includes a one-to-two module 112 (not shown) and a one-to-three module 113 (not shown). A one-to-two module 112, configured to respond to a collaborative video shooting instruction initiated by a first user, and establish a video shooting space, where the video shooting space includes the first user; a one-to-three module 113, configured to respond to a user invitation operation performed by the first user with respect to the video shooting space, and join at least one second user invited by the first user in the user invitation operation to the video shooting space. Here, the specific implementation manners of the one-to-two module 112 and the one-to-three module 113 are the same as or similar to the embodiments related to steps S112 and S113 in fig. 1, and therefore, the detailed descriptions are omitted, and the detailed descriptions are incorporated herein by reference.
In some embodiments, the one-to-three module 113 is configured to: responding to a user invitation operation executed by the first user aiming at the video shooting space, generating user invitation request information, and sending the user invitation request information to network equipment, wherein the user invitation request information comprises identification information of one or more second users invited by the first user in the user invitation operation, and user invitation feedback information returned by the network equipment is received, wherein the user invitation feedback information comprises the identification information of at least one second user in the one or more second users; joining the at least one second user to the video capture space. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: and establishing a real-time voice channel of the first user and the at least one second user in the video shooting space. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, if the state of the first user in the video shooting space is a video collaborative editing state, the device is further configured to: present at least one piece of video clip information in the video clip library on a collaborative editing interface in the video shooting space. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: and responding to a first preset trigger operation executed by the first user in the collaborative editing interface, switching the state of the first user in the video shooting space to a video shooting state, and presenting a shooting interface in the video shooting space. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the secondary module 12 is configured to: responding to a video shooting operation executed by the first user on the shooting interface, and uploading at least one piece of first video clip information obtained by shooting of the first user to a video clip library corresponding to the video shooting space in network equipment. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, if one or more target second users of the at least one second user have entered the shooting interface, the device is further configured to: send, by the first user device, the current shooting picture of the shooting interface to the one or more target second users in real time, so as to present the current shooting picture in real time in the video shooting space of the second user device of each of the one or more target second users. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: present, on the shooting interface of the first user device, the identification information of the one or more target second users.
In some embodiments, the apparatus is further configured to: in response to a second preset triggering operation executed by the first user on the collaborative editing interface for target video clip information in the at least one piece of video clip information, switching the state of the first user in the video shooting space to a video clip editing state for the target video clip information, and presenting the editing interface of the target video clip information in the video shooting space. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
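The state switches described above (a video collaborative editing state, a video shooting state reached via the first preset trigger operation, and a video clip editing state bound to a target clip reached via the second preset trigger operation) can be modeled as a small per-user state machine; this is an illustrative sketch with assumed state names:

```python
class UserSpaceState:
    """Tracks one user's current state inside the video shooting space."""

    def __init__(self):
        self.state = "collab_editing"  # default: collaborative editing interface
        self.target_clip = None

    def trigger_shoot(self):
        """First preset trigger operation: switch to the shooting interface."""
        self.state = "shooting"
        self.target_clip = None

    def trigger_edit(self, clip_id):
        """Second preset trigger operation: edit a specific target clip."""
        self.state = "clip_editing"
        self.target_clip = clip_id

user = UserSpaceState()
user.trigger_edit("clip-1")  # user opens the editing interface for clip-1
```

Keeping the target clip in the state is what lets other members later see exactly which clip this user is editing.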
In some embodiments, the secondary module 12 is configured to: responding to the video editing operation executed by the first user on the editing interface aiming at the target video clip information, and uploading the edited target video clip information to the video clip library. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, if one or more target second users of the at least one second user have entered the editing interface, the video editing operation includes one or more video editing action information; wherein the device is further configured to: and sending each piece of video editing action information in the one or more pieces of video editing action information to the one or more target second users in real time, so as to present the target video clip information and the one or more pieces of video editing action information in real time in a video shooting space of second user equipment of the one or more target second users. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
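Fanning each video editing action out to the target second users who have entered the same editing interface might be sketched as follows, with the real-time transport mocked as a plain list; all names here are assumptions:

```python
sent = []  # stands in for the real-time channel to the second user devices

def broadcast_edit_action(action, target_second_users):
    """Send one editing action to every target second user in real time."""
    for user_id in target_second_users:
        sent.append((user_id, action))

# The first user applies a filter to the target clip; both watchers receive it
# so their devices can mirror the edit on the same clip in real time.
broadcast_edit_action({"clip": "clip-1", "edit": "filter:warm"}, ["u2", "u3"])
```

Because actions are sent one by one as they occur, the watchers see the edit unfold rather than only its final result.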
In some embodiments, the apparatus is further configured to: and presenting the identification information of the one or more target second users on the editing interface. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: and presenting the identification information of each second user in the at least one second user and the state information of each second user in the video shooting space on the collaborative editing interface. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, in response to a third predetermined trigger operation performed by the first user with respect to the identification information of a target second user of the at least one second user, if the state of the target second user in the video shooting space is a video shooting state, receiving and presenting a current shooting picture of the target second user in real time; and if the state of the target second user in the video shooting space is a video clip editing state for target video clip information, presenting the target video clip information, and receiving and presenting, in real time, one or more pieces of video editing action information executed by the target second user for the target video clip information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the one-three module 13 is configured to: in response to a first collaborative editing operation executed by the first user on the collaborative editing interface for at least one piece of video clip information in the video clip library and/or received second collaborative editing operation related information corresponding to one or more second users of the at least one second user, determining video clip sequence information composed of one or more pieces of video clip information in the at least one piece of video clip information, and generating target video information according to the video clip sequence information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the second collaborative editing operation related information includes at least one of operation object information, operation content information, and operation state information of a second collaborative editing operation performed by each of the one or more second users with respect to the at least one piece of video clip information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the determining video clip sequence information composed of one or more pieces of video clip information in the at least one piece of video clip information, and generating the target video information according to the video clip sequence information includes: determining a plurality of pieces of video clip sequence information, each composed of one or more pieces of video clip information in the at least one piece of video clip information, and generating, for each piece of video clip sequence information, target video information corresponding to that video clip sequence information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the generating target video information from the video segment sequence information comprises: and responding to a preset video collaborative editing end condition, and generating target video information according to the video segment sequence information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the generating target video information according to the video clip sequence information in response to a predetermined video collaborative editing end condition includes: and responding to the cooperative video shooting ending operation executed by the first user, and generating target video information according to the video clip sequence information. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
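The end-condition check, where target video information is generated only once the first user performs the collaborative video shooting ending operation, can be sketched as follows (the function and field names are hypothetical):

```python
def maybe_generate_target_video(sequence, end_requested):
    """Generate target video info only when the end condition is satisfied.

    end_requested models the first user's "end collaborative shooting"
    operation; until it fires, the collaboration is still in progress.
    """
    if not end_requested:
        return None  # collaborative editing continues; nothing is finalized
    return {"sequence": list(sequence), "status": "generated"}

in_progress = maybe_generate_target_video(["clip-1"], end_requested=False)
finished = maybe_generate_target_video(["clip-1"], end_requested=True)
```

Deferring generation to the end condition lets every member keep reordering the sequence right up until the first user closes the session.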
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
In some embodiments, as shown in FIG. 3, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
The present application also provides a computer readable storage medium having stored thereon computer code which, when executed, performs a method as in any one of the preceding.
The present application also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Various aspects of various embodiments are defined in the claims. These and other aspects of the various embodiments are specified in the following numbered clauses:
1. a method for editing video through collaborative shooting, wherein the method is applied to a first user device, and the method comprises the following steps:
responding to a collaborative video shooting initiating operation executed by a first user, and establishing a video shooting space, wherein the video shooting space comprises the first user and at least one second user;
responding to shooting operation and/or editing operation executed by the first user in the video shooting space, and uploading at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space in network equipment;
generating target video information according to video clip sequence information determined from the at least one video clip information in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space for the at least one video clip information in the video clip library, wherein the at least one video clip information includes the at least one first video clip information.
2. The method of clause 1, wherein the at least one video clip information further comprises at least one second video clip information obtained by the at least one second user through shooting and editing.
3. The method of clause 1, wherein the establishing a video capture space in response to the collaborative video capture initiation operation performed by the first user comprises:
responding to a collaborative video shooting initiating operation executed by a first user, and establishing a video shooting space according to identification information of at least one second user invited by the first user in the collaborative video shooting initiating operation, wherein the video shooting space comprises the first user and the at least one second user.
4. The method according to clause 3, wherein the establishing a video capturing space according to the identification information of at least one second user invited by the first user in the collaborative video capturing initiating operation, in response to the collaborative video capturing initiating operation performed by the first user, includes:
responding to a collaborative video shooting initiating operation executed by a first user, generating a collaborative video shooting instruction, and sending the collaborative video shooting instruction to network equipment, wherein the collaborative video shooting instruction comprises identification information of one or more second users invited by the first user in the collaborative video shooting initiating operation;
receiving video shooting space establishment information returned by the network equipment, wherein the video shooting space establishment information comprises identification information of at least one second user in the one or more second users;
and establishing a video shooting space according to the video shooting space establishing information, wherein the video shooting space comprises the first user and at least one second user.
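The request/response handshake of clauses 3 and 4 (instruction to the network equipment, establishment information back, local space creation) can be sketched as below. This is an illustrative sketch only, not part of the claims; `NetworkDevice`, `create` names, and the message field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class VideoShootingSpace:
    """A collaborative space holding the first user and the joined second users."""
    first_user: str
    second_users: list = field(default_factory=list)

class NetworkDevice:
    """Hypothetical network equipment: receives a collaborative video shooting
    instruction and returns establishment information listing the invited
    second users that could actually be reached."""
    def __init__(self, reachable_users):
        self.reachable = set(reachable_users)

    def handle_instruction(self, instruction):
        invited = instruction["invited_ids"]
        accepted = [u for u in invited if u in self.reachable]
        return {"space_id": 1, "second_user_ids": accepted}

def initiate_collaborative_shoot(first_user, invited_ids, device):
    # Generate the collaborative video shooting instruction and send it.
    instruction = {"initiator": first_user, "invited_ids": invited_ids}
    establishment_info = device.handle_instruction(instruction)
    # Establish the local video shooting space from the returned information;
    # it contains the first user and at least one second user.
    return VideoShootingSpace(first_user, establishment_info["second_user_ids"])
```

Note that, as in clause 4, the space is built from the identification information the network equipment returns, not from the raw invitation list, so unreachable invitees are silently dropped.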
5. The method of clause 1, wherein the establishing a video capture space in response to the collaborative video capture initiation operation performed by the first user comprises:
responding to a cooperative video shooting instruction initiated by a first user, and establishing a video shooting space, wherein the video shooting space comprises the first user;
responding to a user invitation operation executed by the first user aiming at the video shooting space, and adding at least one second user invited by the first user in the user invitation operation into the video shooting space.
6. The method according to clause 5, wherein the joining, in response to a user invitation operation performed by the first user with respect to the video capturing space, at least one second user invited by the first user in the user invitation operation to the video capturing space includes:
responding to a user invitation operation executed by the first user aiming at the video shooting space, generating user invitation request information, and sending the user invitation request information to network equipment, wherein the user invitation request information comprises identification information of one or more second users invited by the first user in the user invitation operation;
receiving user invitation feedback information returned by the network equipment, wherein the user invitation feedback information comprises identification information of at least one second user in the one or more second users;
joining the at least one second user to the video capture space.
7. The method of clause 1, wherein the method further comprises:
and establishing a real-time voice channel of the first user and the at least one second user in the video shooting space.
8. The method according to clause 1, wherein, when the state of the first user in the video shooting space is a video collaborative editing state, the method further comprises:
and presenting at least one piece of video clip information in the video clip library on a collaborative editing interface in the video shooting space.
9. The method of clause 8, wherein the method further comprises:
and responding to a first preset trigger operation executed by the first user in the collaborative editing interface, switching the state of the first user in the video shooting space to a video shooting state, and presenting a shooting interface in the video shooting space.
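Clauses 8, 9, and 13 together describe a per-user state machine inside the video shooting space: collaborative editing, shooting, and clip editing for one target clip. A minimal sketch under assumed state names (the claims do not prescribe this data structure):

```python
from enum import Enum, auto

class UserState(Enum):
    COLLABORATIVE_EDITING = auto()  # collaborative editing interface is presented
    SHOOTING = auto()               # shooting interface is presented
    CLIP_EDITING = auto()           # editing interface for one target clip

class SpaceMember:
    """Tracks one user's state inside the video shooting space."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.state = UserState.COLLABORATIVE_EDITING
        self.target_clip = None

    def trigger_shooting(self):
        # First preset trigger operation: switch to the video shooting state.
        self.state = UserState.SHOOTING
        self.target_clip = None

    def trigger_clip_editing(self, clip_id):
        # Second preset trigger operation: switch to a video clip editing
        # state for one piece of target video clip information.
        self.state = UserState.CLIP_EDITING
        self.target_clip = clip_id

    def back_to_collaborative_editing(self):
        self.state = UserState.COLLABORATIVE_EDITING
        self.target_clip = None
```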
10. The method according to clause 9, wherein the uploading, in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space, at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space in a network device, includes:
responding to a video shooting operation executed by the first user on the shooting interface, and uploading at least one piece of first video clip information obtained by shooting of the first user to a video clip library corresponding to the video shooting space in network equipment.
11. The method of clause 9, wherein, when one or more target second users of the at least one second user have entered the shooting interface, the method further comprises:
and sending the current shooting picture of the shooting interface to the one or more target second users in real time so as to present the current shooting picture in real time in a video shooting space of second user equipment of the one or more target second users.
12. The method of clause 11, wherein the method further comprises:
and presenting the identification information of the one or more target second users on the shooting interface.
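Clauses 11 and 12 stream the current shooting picture, in real time, to whichever second users have entered the first user's shooting interface, and present their identification information there. A sketch with an assumed callback-based delivery interface (real delivery would go through the network equipment):

```python
class ShootingBroadcast:
    """Fans each captured frame out to the target second users who have
    entered the first user's shooting interface."""
    def __init__(self):
        self.viewers = {}  # user_id -> callback receiving the current frame

    def enter_interface(self, user_id, on_frame):
        self.viewers[user_id] = on_frame

    def leave_interface(self, user_id):
        self.viewers.pop(user_id, None)

    def push_frame(self, frame):
        # Send the current shooting picture to every target second user,
        # so it is presented in their own video shooting space.
        for on_frame in self.viewers.values():
            on_frame(frame)

    def viewer_ids(self):
        # Identification information to present on the shooting interface.
        return sorted(self.viewers)
```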
13. The method of clause 8, wherein the method further comprises:
in response to a second preset triggering operation executed by the first user on the collaborative editing interface for target video clip information in the at least one piece of video clip information, switching the state of the first user in the video shooting space to a video clip editing state for the target video clip information, and presenting the editing interface of the target video clip information in the video shooting space.
14. The method according to clause 13, wherein the uploading of the at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video capturing space in a network device in response to a capturing operation and/or an editing operation performed by the first user in the video capturing space comprises:
responding to the video editing operation executed by the first user on the editing interface aiming at the target video clip information, and uploading the edited target video clip information to the video clip library.
15. The method of clause 13, wherein, when one or more target second users of the at least one second user have entered the editing interface, the video editing operation comprises one or more pieces of video editing action information, and the method further comprises:
and sending each piece of video editing action information in the one or more pieces of video editing action information to the one or more target second users in real time, so as to present the target video clip information and the one or more pieces of video editing action information in real time in a video shooting space of second user equipment of the one or more target second users.
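Clause 15 replicates each piece of video editing action information to the watching second users as it happens, so they can replay the edit on the target clip in order. A sketch of that action log, with assumed message shapes; late joiners first receive the actions already performed:

```python
class ClipEditSession:
    """Replicates video editing action information on one target clip to the
    second users who have entered this editing interface."""
    def __init__(self, clip_id):
        self.clip_id = clip_id
        self.actions = []   # ordered log of video editing action information
        self.watchers = {}  # user_id -> actions delivered to that watcher

    def enter_interface(self, user_id):
        # A joining watcher first receives the target video clip information
        # together with the editing actions performed so far.
        self.watchers[user_id] = list(self.actions)

    def apply_action(self, action):
        # Record the action, then forward it to every watcher in real time.
        self.actions.append(action)
        for log in self.watchers.values():
            log.append(action)
```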
16. The method of clause 15, wherein the method further comprises:
and presenting the identification information of the one or more target second users on the editing interface.
17. The method of clause 8, wherein the method further comprises:
and presenting the identification information of each second user in the at least one second user and the state information of each second user in the video shooting space on the collaborative editing interface.
18. The method of clause 17, wherein the method further comprises:
responding to a third preset trigger operation executed by the first user aiming at the identification information of a target second user in the at least one second user, and receiving and presenting a current shooting picture of the target second user in real time if the state of the target second user in the video shooting space is a video shooting state; and if the state of the target second user in the video shooting space is a video clip editing state aiming at the target video clip information, presenting the target video clip information, and receiving and presenting one or more pieces of video editing action information executed by the target second user aiming at the target video clip information in real time.
19. The method according to clause 8, wherein the generating target video information from the video clip sequence information determined from the at least one video clip information in response to the collaborative editing operation performed by the first user and the at least one second user in the video shooting space for the at least one video clip information in the video clip library comprises:
in response to a first collaborative editing operation executed by the first user on the collaborative editing interface for at least one piece of video clip information in the video clip library and/or received second collaborative editing operation related information corresponding to one or more second users of the at least one second user, determining video clip sequence information composed of one or more pieces of video clip information in the at least one piece of video clip information, and generating target video information according to the video clip sequence information.
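Clause 19's final step, determining video clip sequence information from the shared library and generating target video information from it, can be sketched as below. The dictionary clip representation and the concatenation stand-in for rendering are illustrative assumptions; a real implementation would transcode the clips.

```python
def determine_clip_sequence(library, selected_ids):
    """Determine video clip sequence information: an ordered selection of
    clips from the shared video clip library (unknown ids are skipped)."""
    return [library[cid] for cid in selected_ids if cid in library]

def generate_target_video(sequence):
    """Generate target video information from the clip sequence information.
    Here the 'rendered' video is just the joined clip contents."""
    return {"duration": sum(clip["duration"] for clip in sequence),
            "content": "+".join(clip["content"] for clip in sequence)}
```

As in clause 21, several different sequences may be determined from the same library, each yielding its own target video information.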
20. The method according to clause 19, wherein the second collaborative editing operation-related information includes at least one of operation object information, operation content information, and operation state information of a second collaborative editing operation performed by each of the one or more second users with respect to the at least one piece of video clip information.
21. The method according to clause 19, wherein the determining video clip sequence information composed of one or more pieces of video clip information in the at least one piece of video clip information, and generating target video information according to the video clip sequence information, comprises:
and determining a plurality of pieces of video clip sequence information consisting of one or more pieces of video clip information in the at least one piece of video clip information, and generating target video information corresponding to the video clip sequence information for each piece of video clip sequence information.
22. The method of clause 19, wherein the generating target video information from the video clip sequence information comprises:
and responding to a preset video collaborative editing end condition, and generating target video information according to the video segment sequence information.
23. The method according to clause 22, wherein the generating target video information from the video clip sequence information in response to a predetermined video collaborative editing end condition comprises:
and responding to the cooperative video shooting ending operation executed by the first user, and generating target video information according to the video clip sequence information.
24. An apparatus for editing video in cooperation with shooting, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any of clauses 1 to 23.
25. A computer-readable medium storing instructions that, when executed, cause a computer to perform the operations of the method of any of clauses 1 to 23.

Claims (10)

1. A method for editing video through collaborative shooting, wherein the method is applied to a first user device, and the method comprises the following steps:
responding to a collaborative video shooting initiating operation executed by a first user, and establishing a video shooting space, wherein the video shooting space comprises the first user and at least one second user;
responding to shooting operation and/or editing operation executed by the first user in the video shooting space, and uploading at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space in network equipment;
generating target video information according to video clip sequence information determined from the at least one video clip information in response to a collaborative editing operation performed by the first user and the at least one second user in the video shooting space for the at least one video clip information in the video clip library, wherein the at least one video clip information includes the at least one first video clip information.
2. The method according to claim 1, wherein the at least one video clip information further comprises at least one second video clip information obtained by the at least one second user through shooting and/or editing.
3. The method of claim 1, wherein the establishing a video capture space in response to the collaborative video capture initiation operation performed by the first user comprises:
responding to a collaborative video shooting initiating operation executed by a first user, and establishing a video shooting space according to identification information of at least one second user invited by the first user in the collaborative video shooting initiating operation, wherein the video shooting space comprises the first user and the at least one second user.
4. The method of claim 3, wherein the establishing a video capture space according to identification information of at least one second user invited by a first user in the collaborative video capture initiation operation in response to the collaborative video capture initiation operation performed by the first user comprises:
responding to a collaborative video shooting initiating operation executed by a first user, generating a collaborative video shooting instruction, and sending the collaborative video shooting instruction to network equipment, wherein the collaborative video shooting instruction comprises identification information of one or more second users invited by the first user in the collaborative video shooting initiating operation;
receiving video shooting space establishment information returned by the network equipment, wherein the video shooting space establishment information comprises identification information of at least one second user in the one or more second users;
and establishing a video shooting space according to the video shooting space establishing information, wherein the video shooting space comprises the first user and at least one second user.
5. The method of claim 1, wherein the establishing a video capture space in response to the collaborative video capture initiation operation performed by the first user comprises:
responding to a cooperative video shooting instruction initiated by a first user, and establishing a video shooting space, wherein the video shooting space comprises the first user;
responding to a user invitation operation executed by the first user aiming at the video shooting space, and adding at least one second user invited by the first user in the user invitation operation into the video shooting space.
6. The method of claim 5, wherein the joining at least one second user invited by the first user in the user invitation operation to the video capturing space in response to the user invitation operation performed by the first user for the video capturing space comprises:
responding to a user invitation operation executed by the first user aiming at the video shooting space, generating user invitation request information, and sending the user invitation request information to network equipment, wherein the user invitation request information comprises identification information of one or more second users invited by the first user in the user invitation operation;
receiving user invitation feedback information returned by the network equipment, wherein the user invitation feedback information comprises identification information of at least one second user in the one or more second users;
joining the at least one second user to the video capture space.
7. The method of claim 1, wherein the method further comprises:
and establishing a real-time voice channel of the first user and the at least one second user in the video shooting space.
8. The method according to claim 1, wherein, when the state of the first user in the video shooting space is a video collaborative editing state, the method further comprises:
and presenting at least one piece of video clip information in the video clip library on a collaborative editing interface in the video shooting space.
9. The method of claim 8, wherein the method further comprises:
and responding to a first preset trigger operation executed by the first user in the collaborative editing interface, switching the state of the first user in the video shooting space to a video shooting state, and presenting a shooting interface in the video shooting space.
10. The method of claim 9, wherein the uploading at least one piece of first video clip information obtained by the first user to a video clip library corresponding to the video shooting space in a network device in response to a shooting operation and/or an editing operation performed by the first user in the video shooting space comprises:
responding to a video shooting operation executed by the first user on the shooting interface, and uploading at least one piece of first video clip information obtained by shooting of the first user to a video clip library corresponding to the video shooting space in network equipment.
CN202011371359.1A 2020-11-30 2020-11-30 Method and equipment for collaboratively shooting and editing video Active CN112533061B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011371359.1A CN112533061B (en) 2020-11-30 2020-11-30 Method and equipment for collaboratively shooting and editing video


Publications (2)

Publication Number Publication Date
CN112533061A true CN112533061A (en) 2021-03-19
CN112533061B CN112533061B (en) 2023-03-21

Family

ID=74995298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011371359.1A Active CN112533061B (en) 2020-11-30 2020-11-30 Method and equipment for collaboratively shooting and editing video

Country Status (1)

Country Link
CN (1) CN112533061B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115208980A (en) * 2022-06-21 2022-10-18 咪咕音乐有限公司 Video color ring processing method, device, terminal and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580438A (en) * 2014-12-30 2015-04-29 宋小民 Method for co-browsing and editing webpage by using more than two intelligent terminals
CN104615586A (en) * 2015-01-21 2015-05-13 上海理工大学 Real-time cooperative editing system
US20150149906A1 (en) * 2013-11-26 2015-05-28 Google Inc. Collaborative Video Editing in a Cloud Environment
CN107734257A (en) * 2017-10-25 2018-02-23 北京玩拍世界科技有限公司 One population shoots the video image pickup method and device
CN108718383A (en) * 2018-04-24 2018-10-30 天津字节跳动科技有限公司 Cooperate with image pickup method, device, storage medium and terminal device
US10198714B1 (en) * 2013-06-05 2019-02-05 Google Llc Media content collaboration
CN111245801A (en) * 2020-01-04 2020-06-05 深圳市编玩边学教育科技有限公司 Online interactive collaboration system
CN111866434A (en) * 2020-06-22 2020-10-30 阿里巴巴(中国)有限公司 Video co-shooting method, video editing device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ruan8 Download site editor: "How to make a multi-person duet on Douyin: a tutorial for multi-person duet videos", 《HTTPS://WWW.RUAN8.COM/GONGLUE/5238.HTML》 *



Similar Documents

Publication Publication Date Title
CN110795004B (en) Social method and device
CN110417641B (en) Method and equipment for sending session message
CN112822431B (en) Method and equipment for private audio and video call
CN107770046B (en) Method and equipment for picture arrangement
WO2021218646A1 (en) Interaction method and apparatus, and electronic device
CN112533061B (en) Method and equipment for collaboratively shooting and editing video
CN109660940B (en) Method and equipment for generating information
CN112261337B (en) Method and equipment for playing voice information in multi-person voice
CN112822430B (en) Conference group merging method and device
CN112818719B (en) Method and equipment for identifying two-dimensional code
WO2022116033A1 (en) Collaborative operation method and apparatus, and terminal and storage medium
CN112822419A (en) Method and equipment for generating video information
CN113329237B (en) Method and equipment for presenting event label information
EP4220368A1 (en) Multimedia data processing method and apparatus, and device, computer-readable storage medium and computer program product
CN112788004B (en) Method, device and computer readable medium for executing instructions by virtual conference robot
CN111859009A (en) Method and equipment for providing audio information
CN114339439B (en) Live broadcast method and device based on social group chat
CN112423112B (en) Method and equipment for releasing video information
CN114338579B (en) Method, equipment and medium for dubbing
CN115913804A (en) Method, apparatus, medium and program product for joining chat room
CN115906772A (en) Method, device, medium and program product for collaborative editing
US20240331733A1 (en) Method, appartus, device and medium for video editing
CN112769676B (en) Method and equipment for providing information in group
US20240118855A1 (en) Video generation method, apparatus, system, device and storage medium
CN115544378A (en) Method, device, medium and program product for collaboration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant