WO2022194031A1 - Video processing method, apparatus, electronic device and storage medium - Google Patents

Video processing method, apparatus, electronic device and storage medium

Info

Publication number
WO2022194031A1
WO2022194031A1 PCT/CN2022/080276 CN2022080276W WO2022194031A1 WO 2022194031 A1 WO2022194031 A1 WO 2022194031A1 CN 2022080276 W CN2022080276 W CN 2022080276W WO 2022194031 A1 WO2022194031 A1 WO 2022194031A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
target video
synthesized
page
user
Prior art date
Application number
PCT/CN2022/080276
Other languages
English (en)
French (fr)
Inventor
薛如峰
凌绪枫
付豪
吴锦超
毛红云
高世超
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Priority to EP22770386.5A (published as EP4307694A1)
Publication of WO2022194031A1
Priority to US18/468,508 (published as US20240005961A1)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/274Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743Video hosting of uploaded data from client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44004Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Definitions

  • the present disclosure relates to the field of computer technology, for example, to a video processing method, apparatus, electronic device, and storage medium.
  • Users can edit and synthesize videos locally through a video editor, and upload the synthesized videos to the cloud of the video platform for publishing.
  • the present disclosure provides a video processing method, apparatus, electronic device and storage medium, so as to simplify the operations required to modify the uploaded video and reduce the waiting time of the user.
  • the present disclosure provides a video processing method, including: displaying a first clip page of a first target video stored in the cloud, and obtaining first clip information of a first clip operation performed by a user on the first target video in the first clip page; receiving a first trigger operation acting on a switching control in the first clip page; displaying a publishing page in response to the first trigger operation; receiving a second trigger operation acting on a publishing control in the publishing page; and, in response to the second trigger operation, sending the first clip information to the cloud, so that the cloud synthesizes the first target video into a second target video according to the first clip information and publishes it.
  • the present disclosure also provides a video processing device, including:
  • the first display module is configured to display the first clipping page of the first target video, and obtain the first clipping information of the user performing the first clipping operation on the first target video in the first clipping page, wherein the first target video is stored in the cloud;
  • a first receiving module configured to receive a first trigger operation acting on a switching control in the first clipping page
  • a second display module configured to display a publishing page in response to the first trigger operation
  • a second receiving module configured to receive a second trigger operation acting on the publishing control in the publishing page
  • a video publishing module configured to send the first clip information to the cloud in response to the second trigger operation, so that the cloud synthesizes the first target video into a second target video according to the first clip information and publishes it.
  • the present disclosure also provides an electronic device, comprising:
  • one or more processors;
  • a memory arranged to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the above-mentioned video processing method.
  • the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the above-mentioned video processing method.
  • FIG. 1 is a schematic flowchart of a video processing method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a first editing page according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a publishing page according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a publishing information page according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of another video processing method provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a second editing page according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of an upload page according to an embodiment of the present disclosure.
  • FIG. 8 is a structural block diagram of a video processing apparatus according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.
  • the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present disclosure.
  • the method may be performed by a video processing apparatus, wherein the apparatus may be implemented by software and/or hardware, and may be configured in an electronic device, for example, the apparatus may be configured in a mobile phone or a tablet computer.
  • the video processing method provided by the embodiment of the present disclosure is applicable to the scene of editing and publishing the uploaded video.
  • the video processing method provided in this embodiment may include:
  • the first target video can be understood as a video uploaded by the user to the cloud, such as a video material to be synthesized or a synthesized video.
  • the first target video can be synthesized locally by the user.
  • the video uploaded to the cloud or the video synthesized by the user in the cloud may be a published or unpublished video.
  • the first editing page can be used as an editing page for the user to edit the first target video; the first editing operation can be the editing operation performed by the user on the first target video in the first editing page; the first editing information may be information of the first editing operation performed by the user in the first editing page.
  • the electronic device can obtain the first clipping information of the clipping operation that the user intends (expects) to perform on the first target video through the first clipping page.
  • the first target video is not clipped locally according to the first clip information (that is, the second target video is not synthesized locally); instead, the first clip information is sent to the cloud, so that the cloud can edit the first target video according to the first clip information (including synthesizing the first target video into a second target video).
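  • As a purely illustrative sketch of this idea (the endpoint path, field names, and types below are assumptions added for this description, not definitions from the disclosure), the clip information recorded on the local terminal might be represented and handed to the cloud roughly as follows, in TypeScript:

      // Hypothetical shape of the first clip information collected in the first clip page.
      // The disclosure only requires that the user's clip operations are described and sent
      // to the cloud; the concrete fields here are illustrative.
      interface ClipInfo {
        targetVideoId: string;                       // first target video already stored in the cloud
        trims: { startMs: number; endMs: number }[]; // segments kept by the user
        addedMaterialIds: string[];                  // material-library items added by the user
        soundtrackId?: string;
        stickers?: { stickerId: string; atMs: number }[];
        subtitles?: { text: string; fromMs: number; toMs: number }[];
        filterId?: string;
        transitionIds?: string[];
      }

      // Send the clip information to the cloud instead of editing the video locally.
      // "/api/videos/clip" is an assumed endpoint name, not one defined by the disclosure.
      async function sendClipInfoToCloud(clipInfo: ClipInfo): Promise<void> {
        const res = await fetch("/api/videos/clip", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(clipInfo),
        });
        if (!res.ok) {
          throw new Error(`cloud rejected clip information: ${res.status}`);
        }
      }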
  • FIG. 2 is a schematic diagram of a first editing page provided in this embodiment.
  • the first editing page can be provided with a control area 20 and a main display area 21, and the control area 20 can be provided with a material library control, a soundtrack control, a text control, a sticker control, a subtitle control, a filter control,
  • the main display area 21 may be provided with a first area 210 , a preview area 211 , a second area 212 and a clip area 213 .
  • the user can trigger the material library control/soundtrack control/text control/sticker control/filter control/transition control/special effect control in the control area 20 to instruct the electronic device to display, in the first area 210, the video materials (such as online video materials provided by developers and/or local video materials imported by the user)/soundtracks/text styles/stickers/filters/transition videos/special effects available for selection, and to display, in the second area 212, the editing controls of the video material/soundtrack/text style/sticker/filter/transition video/special effect for the user to edit; alternatively, the user can trigger the subtitle control to input the subtitles of the video in the first area. The user can also edit the newly added video material, for example by dragging the left and right borders of the video track, and view the preview effect of the second target video to be synthesized in the preview area 211.
  • after the user uploads the first target video to the cloud, or after the user publishes the first target video, if the user is not satisfied with the uploaded or published first target video, the user can instruct the electronic device to display the first clip page of the first target video, and the first clip page is used to modify the first target video.
  • when the user wants to modify the first target video, the user can control the electronic device to display the publishing page of the first target video, trigger the first modification video control 30 in the publishing page to instruct the electronic device to display the first modification mode selection window 31, and trigger the first online modification control 310 in the first modification mode selection window; correspondingly, the electronic device can switch the currently displayed page from the publishing page of the first target video to the first editing page of the first target video, as shown in FIG. 2, so that the user can modify the first target video in the first editing page.
  • the publishing page of the first target video may also display basic information of the first target video when it is published, such as the title, cover, introduction, type, participation activity and/or content synchronization information of the first target video, for the user to view and edit; the first modification mode selection window 31 may also be provided with a first re-upload control 311, so that the user can re-upload a new first target video by triggering the first re-upload control 311.
  • alternatively, the user can control the electronic device to display the publishing information page (i.e., the publishing details page) of the first target video, trigger the second modification video control 40 in the publishing information page to instruct the electronic device to display the second modification mode selection window 41, and trigger the second online modification control 410 in the second modification mode selection window 41; correspondingly, the electronic device can switch the currently displayed page from the publishing information page of the first target video to the first clip page of the first target video, as shown in FIG. 2, so that the first target video is modified in the first clip page.
  • the release information page of the first target video can also display the basic information when the first target video is released, the original title when the first target video is released, and the current video title of the first target video;
  • a second re-upload control 411 may also be set in the second modification mode selection window 41 , so that the user can re-upload a new first target video by triggering the second re-upload control 411 .
  • before the displaying of the first clip page of the first target video, the method further includes: synthesizing the video material to be synthesized selected by the user into the first target video, where the video material to be synthesized includes a first video material to be synthesized and/or a second video material to be synthesized, the first video material to be synthesized is stored in the cloud, and the second video material to be synthesized is stored locally.
  • the video material to be synthesized may be understood as the video material selected by the user and to be synthesized, which may include the first video material to be synthesized stored in the cloud and/or the second video material to be synthesized stored locally.
  • the first target video may be a video synthesized by the user.
  • the user can synthesize the first target video based on the local video material and/or the video material stored in the cloud.
  • the user can synthesize the first target video locally and upload it to the cloud, or upload the local second video material to be synthesized to the cloud and synthesize the first target video in the cloud based on the uploaded second video material to be synthesized and/or the first video material to be synthesized provided by the developer, which is not limited in this embodiment.
  • the switching control in the first editing page can be understood as a control set in the first editing page and used for triggering by the user to instruct the electronic device to switch the currently displayed page from the first editing page to the publishing page.
  • the first trigger operation may be any operation that triggers the switching control in the first editing page, such as a click operation acting on the switching control in the first editing page;
  • the publishing page may be a page in which the user performs the second trigger operation to instruct the electronic device to synthesize and publish the second target video through the cloud, that is, the page to which the electronic device switches from the first editing page.
  • the electronic device can send the first editing information obtained in the first editing page to the cloud, and request the cloud to synthesize and publish the video.
  • the electronic device displays the first clipping page of the first target video; the user performs a clipping operation on the first target video in the first clipping page, and triggers the switching control 23 in the first clipping page when the clipped first target video is to be published; when the electronic device detects that the user triggers the switching control 23 in the first clipping page, it determines that the first triggering operation has been received, and switches the currently displayed page from the first clipping page to the publishing page (similar to the publishing page of the first target video); thus, the user can edit, in the publishing page, publishing information such as the basic information of the second target video to be synthesized, and, after editing is completed, trigger the publishing control in the publishing page to instruct the electronic device to synthesize and publish the second target video in the cloud.
  • the publishing control on the publishing page can be used for triggering by the user to instruct the electronic device to synthesize the first target video into the second target video in the cloud and publish it;
  • the second triggering operation can be an operation that triggers the publishing control in the publishing page, for example, an operation of clicking the publishing control in the publishing page.
  • the user can instruct the electronic device to synthesize and publish the second target video through the cloud by triggering the publishing control on the publishing page.
  • the electronic device displays a publishing page, and when the user wants to publish the second target video, the user can trigger the publishing control in the publishing page; correspondingly, when the electronic device detects that the user triggers the publishing control, it can determine that the publishing operation for the second target video has been received.
  • the second target video may be a video obtained by processing the first target video according to the first clip information of the first target video by the user.
  • the user may edit the first target video by means of cloud editing: the electronic device (local terminal) obtains the first editing information input by the user but does not perform the corresponding editing and synthesizing operation; instead, when receiving the trigger operation of the user triggering the publishing control in the publishing page, the electronic device sends the first editing information to the cloud, and the cloud performs the editing and synthesizing operation. In this way, the user can perform the video publishing operation without waiting for the video composition; after performing the video publishing operation, the user can switch pages as needed without staying on the publishing page to wait for the video composition and without uploading the composited video again, which simplifies the operations required in the video editing and publishing process, reduces the user's waiting time, and improves the user experience.
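  • A minimal sketch of how this deferred composition could look on the client side, reusing the ClipInfo shape sketched above; the endpoint, the PublishInfo fields, and the function names are assumptions for illustration only:

      // Hypothetical publish handler: when the user triggers the publishing control,
      // only the clip information and publishing information are sent to the cloud;
      // the second target video is composed in the cloud, so the client does not block
      // on composition and the user may leave the publishing page immediately.
      interface PublishInfo {
        title: string;
        cover?: string;
        introduction?: string;
      }

      async function onPublishControlTriggered(
        clipInfo: ClipInfo,       // first clip information recorded in the first clip page
        publishInfo: PublishInfo, // publishing information edited in the publishing page
      ): Promise<void> {
        // Assumed endpoint; the cloud synthesizes the second target video and publishes it.
        await fetch("/api/videos/publish", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ clipInfo, publishInfo }),
        });
        // Returns as soon as the request is accepted; no local composition or re-upload.
      }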
  • the electronic device may first send the acquired first editing information, of the user performing the first editing operation on the first target video in the first editing page, to the cloud; the cloud can then synthesize the first target video into a second target video according to the first clip information, for example by synthesizing the first target video with the video material, soundtrack, stickers, subtitles, filters, and/or transition videos newly added by the user to obtain the second target video, and publish the second target video, for example by publishing the second target video directly to the current video platform, or by publishing the second target video to the current video platform and other video platforms selected by the user.
  • in this embodiment, a first editing page of a first target video stored in the cloud is displayed, first editing information of a user performing a first editing operation on the first target video in the first editing page is obtained, a first trigger operation acting on the switching control in the first clip page is received, the publishing page is displayed in response to the first trigger operation, a second trigger operation acting on the publishing control in the publishing page is received, and, in response to the second trigger operation, the acquired first clip information is sent to the cloud, so that the cloud can synthesize the first target video into a second target video according to the first clip information and publish it.
  • this embodiment supports the user to perform cloud editing on the uploaded first target video, and when receiving the user's operation of publishing the second target video, the first target video is synthesized into the second target video in the cloud , so that the user does not need to download the first target video to the local for editing or wait for the second target video to be synthesized, and does not need to upload the synthesized second target video again, which can simplify the operations required for the user to modify the video located in the cloud, And reduce the user's waiting time.
  • FIG. 5 is a schematic flowchart of another video processing method provided by an embodiment of the present disclosure.
  • the solution in this embodiment can be combined with one or more optional solutions in the above-mentioned embodiments.
  • the synthesizing the video material to be synthesized selected by the user into the first target video includes: receiving a publishing operation for the first target video; in response to the publishing operation, requesting the cloud to The video material to be synthesized is synthesized into the first target video and released.
  • alternatively, synthesizing the video material to be synthesized selected by the user into the first target video includes: receiving a save operation for the first target video; and, in response to the save operation, requesting the cloud to synthesize the video material to be synthesized selected by the user into the first target video and store it.
  • when the video material to be synthesized includes a second video material to be synthesized, synthesizing the video material to be synthesized selected by the user into the first target video includes: receiving an upload operation for the first target video; in response to the upload operation, obtaining the first time length required to upload the second video material to be synthesized selected by the user to the cloud, and the second time length required to synthesize the second video material to be synthesized selected by the user into the first target video and upload the first target video to the cloud; if the first time length is less than or equal to the second time length, uploading the second video material to be synthesized to the cloud, and synthesizing the uploaded second video material to be synthesized into the first target video in the cloud; and if the first time length is greater than the second time length, synthesizing the second video material to be synthesized into the first target video locally, and uploading the first target video to the cloud.
  • the video processing method provided in this embodiment may include:
  • S202 In response to the publishing operation, request the cloud to synthesize the video material to be synthesized selected by the user into the first target video and publish it, and execute S209, where the video material to be synthesized includes the first video material to be synthesized and/or The second video material to be synthesized, the first video material to be synthesized is stored in the cloud, and the second video material to be synthesized is stored locally.
  • the publishing operation for the first target video may be an operation of publishing the first target video, such as a trigger operation acting on the publishing control in the publishing page of the first target video, for example, an operation of clicking the publishing control in the publishing page of the first target video.
  • the electronic device may edit the video material selected by the user (such as the online video material provided by the developer and/or the local video material uploaded by the user to the cloud) online based on the user's corresponding editing operation and, when publishing the first target video, synthesize the at least one video material into the first target video through the cloud according to the user's clip information and publish it, so that the operation of uploading the first target video to the cloud does not need to be performed and the user does not need to wait for the first target video to be uploaded. Moreover, after the user's editing is completed, the publishing page of the first target video is displayed directly, instead of first synthesizing the at least one edited video material to be synthesized into the first target video and then displaying the publishing page of the first target video, so the user can perform the operation of publishing the first target video right after editing each video material to be synthesized, without waiting for the synthesis of the first target video, which greatly reduces the user's waiting time when making and publishing videos and increases the user's enthusiasm for creating and publishing videos.
  • when the user wants to publish the first target video after editing each video material to be synthesized, the user can instruct the electronic device to display the publishing page of the first target video, as shown in FIG. 3, and trigger the publishing control 32 in the publishing page; correspondingly, when the electronic device detects that the user triggers the publishing control 32 in the publishing page, it can determine that a publishing operation for the first target video has been received and, in response to the publishing operation, request the cloud to synthesize the at least one video material to be synthesized into the first target video and publish the first target video.
  • the save operation for the first target video can be understood as a trigger operation for synthesizing and saving the first target video in the cloud, such as an operation of clicking the save control of the first target video; the save control can be displayed after the user has performed the editing operation on each video material but the first target video has not been published, for example, when the user has not performed the publishing operation for the first target video but wants to close the publishing page of the first target video (such as switching the currently displayed publishing page of the first target video to another page) and/or when the user wants to close the second clip page of the video material to be synthesized.
  • the electronic device displays the publishing page of the first target video.
  • the user can trigger the close control (not shown in FIG. 3) in the publishing page, or trigger the page control (not shown in FIG. 3) in the publishing page for switching to other pages. When monitoring that the user triggers the close control/page control in the publishing page of the first target video, the electronic device displays a save prompt window to prompt the user whether to save the first target video; when monitoring that the user clicks the save control in the save prompt window, the electronic device closes the publishing page of the first target video or switches the publishing page of the first target video to the page corresponding to the page control triggered by the user, and requests the cloud to synthesize the video material to be synthesized selected by the user into the first target video and store it; when monitoring that the user clicks the non-save control in the save prompt window, the electronic device directly closes the publishing page of the first target video or switches the publishing page of the first target video to the page corresponding to the page control triggered by the user.
  • the electronic device may also automatically request the cloud to synthesize the at least one video material to be synthesized into the first target video and save it when the user closes the second editing page of the video material to be synthesized, closes the publishing page of the first target video, or switches the publishing page of the first target video to another page, so that the user does not need to save the first target video manually, thereby simplifying the operations that the user needs to perform.
  • the upload operation may be understood as an operation of uploading the first target video, such as a trigger operation for displaying a publishing page of the first target video or a trigger operation for publishing the first target video.
  • the second video material to be synthesized may be understood as the local video material that the user has edited locally and intends to use it to make the first target video.
  • the first time length may be the time length required to directly upload the at least one second video material to be synthesized (and the user's clip information) to the cloud, and the second time length may be the time length required to locally synthesize the at least one second video material to be synthesized into the first target video and upload the synthesized first target video to the cloud; the first time length and the second time length can be determined according to the size of each video material and the network speed at the current moment.
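  • A rough illustration of how the two time lengths might be estimated from the material sizes and the current network speed; the disclosure only states that size and network speed are used, so the local-synthesis term and all names below are assumptions added for illustration:

      interface MaterialInfo {
        sizeBytes: number; // size of one local second video material to be synthesized
      }

      // First time length: upload every raw material (the clip information itself is
      // negligible) and let the cloud synthesize the first target video.
      function estimateUploadThenSynthesizeSec(
        materials: MaterialInfo[],
        uploadBytesPerSec: number,
      ): number {
        const totalBytes = materials.reduce((sum, m) => sum + m.sizeBytes, 0);
        return totalBytes / uploadBytesPerSec;
      }

      // Second time length: synthesize locally first, then upload the synthesized video.
      // localSynthesisSec and estimatedOutputBytes would have to be estimated by the
      // device itself (e.g. from codec settings and total duration); they are assumed inputs.
      function estimateSynthesizeThenUploadSec(
        localSynthesisSec: number,
        estimatedOutputBytes: number,
        uploadBytesPerSec: number,
      ): number {
        return localSynthesisSec + estimatedOutputBytes / uploadBytesPerSec;
      }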
  • the electronic device may upload the first target video when receiving a trigger operation for displaying a publishing page of the first target video.
  • for example, the electronic device displays a second editing page of the second video material to be synthesized stored locally; the user edits each second video material to be synthesized in the second editing page, and triggers the switching control in the second editing page when the editing is completed and the first target video is to be published; correspondingly, when the electronic device detects that the user triggers the switching control in the second editing page, it determines that an upload operation for the first target video has been received, switches the currently displayed page from the second clip page to the publishing page of the first target video, and obtains the user's clip information, the first time length required to upload the at least one second video material to be synthesized to the cloud, and the second time length required to locally synthesize the at least one second video material to be synthesized into the first target video and upload the first target video to the cloud, so as to upload the first target video in the uploading manner with the shorter required time length.
  • the electronic device may also upload the first target video when receiving a triggering operation for publishing the first target video.
  • for example, the electronic device displays a second editing page of the second video material to be synthesized stored locally; the user edits each second video material to be synthesized in the second editing page and, after the editing is completed, triggers the switching control in the second editing page; when monitoring that the user triggers the switching control in the second editing page, the electronic device switches the currently displayed page from the second editing page to the publishing page of the first target video; thus, the user can edit the publishing information of the first target video in the publishing page of the first target video, and trigger the publishing control in the publishing page after editing is completed; when monitoring that the user triggers the publishing control in the publishing page of the first target video, the electronic device determines that an upload operation for the first target video has been received, and obtains the first time length required to upload the user's clip information and the at least one second video material to be synthesized to the cloud, and the second time length required to locally synthesize the at least one second video material to be synthesized into the first target video and upload the first target video to the cloud, so as to upload the first target video in the uploading manner with the shorter required time length, and publish the first target video after the uploading is completed.
  • the electronic device can determine the first time length and the second time length; if the first time length is less than or equal to the second time length, each second video material to be synthesized can be uploaded to the cloud, and the cloud can be requested to synthesize the at least one second video material to be synthesized into the first target video (that is, uploading first and then synthesizing) and publish it; if the first time length is greater than the second time length, the at least one second video material to be synthesized can be synthesized into the first target video locally, and the synthesized first target video can be uploaded to the cloud, and the cloud can be requested to publish the first target video (that is, synthesizing first and then uploading).
  • when the method of uploading first and then synthesizing includes multiple sub-uploading methods, the sub-uploading method with the shortest required time length can be used to upload each second video material to be synthesized, and correspondingly, the first time length is the time length required by the sub-uploading method with the shortest required time among the multiple sub-uploading methods; when the method of synthesizing first and then uploading includes multiple sub-uploading methods (such as uploading while synthesizing, compressing and then uploading, or synthesizing and uploading after sharding), the first target video can be synthesized and uploaded in the sub-uploading method with the shortest required time length, and correspondingly, the second time length is the time length required by the sub-uploading method with the shortest required time among the multiple sub-uploading methods.
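  • Continuing the illustrative sketch above, selecting between the two strategies and among their sub-uploading methods reduces to taking the candidate with the smallest estimated duration; the types, names, and placeholder callbacks below are assumptions, not part of the disclosure:

      type UploadPlan = {
        description: string;
        estimatedSec: number;
        run: () => Promise<void>; // placeholder for the real upload/synthesis work
      };

      // Pick the plan with the shortest estimated time length. Each candidate may itself be
      // a sub-uploading method (upload while synthesizing, compress then upload, shard then
      // upload, ...), so one comparison covers both strategies and their variants.
      function chooseUploadPlan(candidates: UploadPlan[]): UploadPlan {
        if (candidates.length === 0) {
          throw new Error("no upload plan available");
        }
        return candidates.reduce((best, plan) =>
          plan.estimatedSec < best.estimatedSec ? plan : best,
        );
      }

      // Usage sketch with made-up estimates:
      const plan = chooseUploadPlan([
        { description: "upload materials, synthesize in cloud", estimatedSec: 42, run: async () => {} },
        { description: "synthesize locally while uploading", estimatedSec: 55, run: async () => {} },
        { description: "synthesize locally, shard, then upload", estimatedSec: 61, run: async () => {} },
      ]);
      await plan.run();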
  • the user can also edit the local second video material to be synthesized locally and, after the editing is completed, upload the first target video to the cloud; when the electronic device monitors the user's uploading operation, the first target video may be uploaded in the uploading manner with the shortest required time length, thereby reducing the time required for uploading.
  • S209. Display the first clipping page of the first target video, and obtain first clipping information of the user performing the first clipping operation on the first target video in the first clipping page, where the first target video is stored in the cloud.
  • the synthesizing of the video material to be synthesized selected by the user into the first target video includes: displaying a second editing page of the video material to be synthesized selected by the user, and obtaining second editing information of the user performing a second editing operation on the video material to be synthesized in the second editing page; and sending the second editing information to the cloud, so that the video material to be synthesized is synthesized into the first target video in the cloud according to the second editing information.
  • the second editing page may be a page for the user to select and edit the video material to be synthesized (including the first video material to be synthesized and/or the second video material to be synthesized) required to generate the first target video, as shown in FIG. 6 .
  • the functions of the content displayed in the multiple areas and of the multiple controls in the second editing page are similar to those of the first editing page, and will not be described in detail here; the second editing operation can be the clipping operation performed by the user on the video material to be synthesized in the second editing page; the second clipping information may be the information of the second clipping operation performed by the user in the second clipping page.
  • the electronic device can synthesize the first target video through the cloud.
  • the electronic device displays a second clip page of the video material to be synthesized. Therefore, the user can edit the video material to be synthesized in the editing page, and trigger the switching control 60 in the second editing page after the editing is completed.
  • the electronic device records the user's second editing information for each video material to be synthesized in the second editing page, and, when monitoring that the user triggers the switching control 60 in the second editing page, displays the publishing page of the first target video, as shown in FIG. 3.
  • the user can trigger the publishing control 32 on the publishing page of the first target video.
  • when the electronic device detects that the user triggers the publishing control 32 in the publishing page of the first target video, it can send the second clipping information to the cloud, so that the video material to be synthesized is synthesized into the first target video in the cloud according to the second clipping information and published.
  • alternatively, the synthesizing of the video material to be synthesized selected by the user into the first target video includes: displaying a second clip page of the video material to be synthesized selected by the user, and obtaining second editing information of the user performing a second editing operation on the video material to be synthesized in the second clip page; and synthesizing the video material to be synthesized into the first target video according to the second editing information.
  • the electronic device may synthesize the first target video.
  • the electronic device displays a second clip page of the video material to be synthesized. Therefore, the user can edit the video material to be synthesized in the editing page, and trigger the switching control 60 in the second editing page after the editing is completed.
  • the electronic device records the user's second editing information for each video material to be synthesized in the second editing page, and, when monitoring that the user triggers the switching control 60 in the second editing page, displays the publishing page of the first target video, as shown in FIG. 3.
  • the user can trigger the publishing control 32 on the publishing page of the first target video.
  • the electronic device can synthesize the video material to be synthesized into the first target video according to the second clip information, and send the first target video to the cloud, so as to publish the first target video on the cloud.
  • a user may generate a video online based on one or more video materials stored locally and/or in the cloud.
  • the electronic device displays an upload page.
  • the user can trigger the online production control 70 on the upload page.
  • when the electronic device detects that the user triggers the online production control 70, it switches the currently displayed page from the upload page to the second editing page, as shown in FIG. 6; the user edits the video material to be synthesized in the second editing page and, after the editing is completed, triggers the switching control 60 in the second editing page.
  • when monitoring that the user triggers the switching control 60 in the second editing page, the electronic device switches the currently displayed page from the second editing page to the publishing page of the first target video, as shown in FIG. 3. Furthermore, the user can edit the publishing information of the first target video in the publishing page, instruct the electronic device to switch the currently displayed page back to the second editing page by triggering the first online modification control 310 in the first modification mode selection window 31 of the publishing page, instruct the electronic device to request the cloud to publish the first target video by triggering the publishing control 32 in the publishing page, or instruct the electronic device to exit the publishing page by triggering the page controls of other pages or the close control in the publishing page.
  • when monitoring that the user triggers the first online modification control 310, the electronic device can switch the currently displayed page from the publishing page of the first target video back to the second editing page; when monitoring that the user triggers the publishing control 32 in the publishing page of the first target video, the electronic device sends the user's second editing information in the second editing page to the cloud, so that the cloud synthesizes the at least one video material to be synthesized into the first target video according to the second editing information and publishes it; when monitoring that the user triggers a page control of another page or the close control in the publishing page, the electronic device performs the corresponding page switching operation or closes the publishing page of the first target video, and can send the user's second clipping information in the second editing page to the cloud, so that the cloud synthesizes the at least one video material to be synthesized into the first target video according to the second clipping information and stores it.
  • a user may upload a local video to the cloud for editing and/or publishing.
  • the electronic device displays an upload page.
  • when the user wants to upload a local video/video material, the user can drag the video/video material to the uploading area 71 or click the uploading area 71 to select the local video/video material.
  • correspondingly, the electronic device takes the video/video material dragged or selected by the user as the first target video, uploads the first target video to the cloud, and switches the currently displayed page from the upload page to the publishing page of the first target video, as shown in FIG. 3.
  • the user can edit the publishing information of the first target video in the publishing page, and trigger the publishing control 32 in the publishing page of the first target video when the first target video is to be published; correspondingly, when the electronic device detects that the user triggers the publishing control 32 in the publishing page of the first target video, the cloud may be requested to publish the first target video.
  • when the user wants to edit the first target video, the user can trigger the first online modification control 310 in the first modification mode selection window 31 of the publishing page of the first target video to instruct the electronic device to switch the currently displayed page from the publishing page to the first clipping page, clip the first target video in the first clipping page, and, after the clipping is completed, trigger the switching control 23 in the first clipping page; correspondingly, the electronic device can switch the currently displayed page from the publishing page of the first target video to the first editing page, and, when monitoring that the user triggers the switching control 23 in the first editing page, switch the currently displayed page from the first editing page to the publishing page of the second target video; the user can edit the publishing information of the second target video in the publishing page of the second target video and, after editing is completed, trigger the publishing control in the publishing page of the second target video; correspondingly, when the electronic device detects that the user triggers the publishing control in the publishing page of the second target video, it can send the user's first editing information in the first editing page to the cloud, so that the cloud synthesizes the first target video into a second target video according to the first clip information and publishes it.
  • the user may perform video editing locally.
  • the user locally edits the local video material to be synthesized, and after the editing is completed, instructs the electronic device to display the release page of the first target video through a corresponding trigger operation.
  • when the electronic device monitors the triggering operation for displaying the publishing page of the first target video, it can display the publishing page of the first target video, as shown in FIG. 3, obtain the time length required to upload the first target video by each uploading method, select the uploading method with the shortest required time length to upload and synthesize (synthesize first and then upload, or upload first and then synthesize) the first target video, and, when monitoring that the user triggers the publishing control in the publishing page of the first target video, request the cloud to publish the first target video; alternatively, when the electronic device detects the triggering operation for displaying the publishing page of the first target video, it displays the publishing page of the first target video and, when monitoring that the user triggers the publishing control 32 in the publishing page of the first target video, obtains the time length required to upload the first target video by each uploading method, selects the uploading method with the shortest required time length to upload and synthesize (synthesize first and then upload, or upload first and then synthesize) the first target video, and requests the cloud to publish the first target video.
  • in summary, the electronic device can edit each video material to be synthesized online based on the user's trigger operation and, when publishing the first target video, synthesize the at least one video material to be synthesized into the first target video for publication; or it can edit and cut the video locally based on the user's trigger operation and, after the editing is completed, select the uploading method that takes the shortest time to upload and synthesize the first target video and publish the first target video, thereby reducing the waiting time required for video production and publishing and improving the user experience.
  • FIG. 8 is a structural block diagram of a video processing apparatus according to an embodiment of the present disclosure.
  • the apparatus may be implemented by software and/or hardware, and may be configured in an electronic device.
  • the apparatus may be configured in a mobile phone or a tablet computer, and the video may be processed by executing a video processing method.
  • the video processing apparatus provided in this embodiment may include: a first display module 801, a first receiving module 802, a second display module 803, a second receiving module 804, and a video publishing module 805, wherein,
  • the first display module 801 is configured to display a first clipping page of a first target video, and obtain first clipping information of a user performing a first clipping operation on the first target video in the first clipping page, the The first target video is stored in the cloud; the first receiving module 802 is configured to receive a first trigger operation acting on the switching control in the first editing page; the second display module 803 is configured to respond to the first trigger operation to display the publishing page; the second receiving module 804 is configured to receive a second triggering operation acting on the publishing controls in the publishing page; the video publishing module 805 is configured to respond to the second triggering operation, send the The first clip information is sent to the cloud, so that the first target video is synthesized in the cloud into a second target video according to the first clip information and released.
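  • As a schematic sketch only, the module decomposition above might be wired together on the terminal side roughly as follows; the class, method, and field names are illustrative stand-ins for the functional modules 801-805 and the cloud interface, not an implementation defined by the disclosure (ClipInfo reuses the shape sketched earlier):

      // Each method mirrors one of the functional modules 801-805 described above.
      class VideoProcessingApparatus {
        private firstClipInfo: ClipInfo | null = null;

        constructor(
          private cloud: { synthesizeAndPublish(info: ClipInfo): Promise<void> },
        ) {}

        // First display module (801): show the first clip page and start recording clip information.
        showFirstClipPage(targetVideoId: string): void {
          this.firstClipInfo = { targetVideoId, trims: [], addedMaterialIds: [] };
        }

        // First receiving module (802) + second display module (803):
        // the switching control triggers display of the publishing page.
        onSwitchControlTriggered(showPublishingPage: () => void): void {
          showPublishingPage();
        }

        // Second receiving module (804) + video publishing module (805):
        // the publishing control triggers sending the clip information to the cloud,
        // which synthesizes the second target video and publishes it.
        async onPublishControlTriggered(): Promise<void> {
          if (this.firstClipInfo) {
            await this.cloud.synthesizeAndPublish(this.firstClipInfo);
          }
        }
      }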
  • the video processing apparatus displays the first clipping page of the first target video stored in the cloud through the first display module 801, and obtains the first clip information of the user's first clipping operation on the first target video in the first clipping page; the first receiving module 802 receives the first trigger operation acting on the switching control in the first clip page; the second display module 803 displays the publishing page in response to the first trigger operation; and, after the second receiving module 804 receives the second trigger operation acting on the publishing control in the publishing page, the video publishing module 805 sends the acquired first clipping information to the cloud in response to the second triggering operation, so that the cloud can synthesize the first target video into a second target video according to the first clipping information and publish it.
  • this embodiment supports the user to perform cloud editing on the uploaded first target video, and when receiving the user's operation of publishing the second target video, the first target video is synthesized into the second target video in the cloud , so that the user does not need to download the first target video to the local for editing or wait for the second target video to be synthesized, and does not need to upload the synthesized second target video again, which can simplify the operations required for the user to modify the video located in the cloud, And reduce the user's waiting time.
  • the video processing apparatus may further include: a video synthesis module, configured to synthesize the video material to be synthesized selected by the user into the first target video before the first editing page of the first target video is displayed , the video material to be synthesized includes a first video material to be synthesized and/or a second video material to be synthesized, the first video material to be synthesized is stored in the cloud, and the second video material to be synthesized is stored locally.
  • the video synthesis module may include: a first receiving unit, configured to receive a publishing operation for the first target video; a first synthesizing unit, configured to respond to the publishing operation, requesting the cloud to The selected video material to be synthesized is synthesized into the first target video and released.
  • the video synthesizing module may include: a second receiving unit, configured to receive a saving operation for the first target video; a second synthesizing unit, configured to respond to the saving operation, requesting the cloud to The selected video material to be synthesized is synthesized into the first target video and stored.
  • when the video material to be synthesized includes a second video material to be synthesized, the video synthesis module may include: a third receiving unit, configured to receive an upload operation for the first target video; a time obtaining unit, configured to, in response to the uploading operation, obtain the first time length required for uploading the second video material to be synthesized selected by the user to the cloud, and the second time length required for synthesizing the second video material to be synthesized selected by the user into the first target video and uploading the first target video to the cloud; a first uploading unit, configured to, when the first time length is less than or equal to the second time length, upload the second video material to be synthesized to the cloud, so that the uploaded second video material to be synthesized is synthesized into the first target video in the cloud; and a second uploading unit, configured to, when the first time length is greater than the second time length, synthesize the second video material to be synthesized into the first target video locally, and upload the first target video to the cloud.
  • the video synthesis module may be configured to: display a second editing page of the video material to be synthesized selected by the user, and obtain the user's second editing of the video material to be synthesized in the second editing page. second editing information of the operation; sending the second editing information to the cloud, so that the to-be-synthesized video material is synthesized into the first target video in the cloud according to the second editing information.
  • the video synthesis module may alternatively be configured to: display a second editing page of the video material to be synthesized selected by the user, and obtain second editing information of a second editing operation performed by the user on the video material to be synthesized in the second editing page; and synthesize the video material to be synthesized into the first target video according to the second editing information; see the illustration after this paragraph.
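  • The common point of the two configurations above is that the user's editing operations are captured as a data description (the second editing information) rather than as a locally rendered file, and that description is either forwarded to the cloud or applied on the device. The sketch below illustrates one possible shape of such a payload and the request that carries it; the field names, the endpoint URL, and the plain JSON/HTTP transport are assumptions made for illustration, not the disclosed protocol.

    # Illustrative sketch only: build a JSON description of the editing operations and
    # post it to a hypothetical cloud endpoint so that the cloud performs the synthesis.
    import json
    from urllib import request

    def build_second_editing_info(material_ids, trims, soundtrack_id=None, stickers=None):
        # material_ids: cloud-side identifiers of the materials to combine (assumed field names).
        return {
            "materials": material_ids,
            "trims": trims,                 # per-material (start_ms, end_ms) ranges
            "soundtrack": soundtrack_id,    # optional background music
            "stickers": stickers or [],     # optional overlay elements
        }

    def request_cloud_synthesis(editing_info, endpoint="https://example.invalid/api/synthesize"):
        # A real client would authenticate this call; the response would identify the
        # first target video assembled in the cloud.
        body = json.dumps(editing_info).encode("utf-8")
        req = request.Request(endpoint, data=body,
                              headers={"Content-Type": "application/json"}, method="POST")
        return request.urlopen(req)

    info = build_second_editing_info(["mat-1", "mat-2"], [(0, 5000), (1000, 8000)],
                                     soundtrack_id="music-7")
    print(json.dumps(info, indent=2))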
  • the video processing apparatus provided by the embodiments of the present disclosure can execute the video processing method provided by any embodiment of the present disclosure, and has functional modules and effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the video processing method provided by any embodiment of the present disclosure.
  • Referring to FIG. 9, it shows a schematic structural diagram of an electronic device (e.g., a terminal device) 900 suitable for implementing an embodiment of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital televisions (TVs), desktop computers, and the like.
  • the electronic device shown in FIG. 9 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 900 may include a processing device (such as a central processing unit, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903.
  • in the RAM 903, various programs and data required for the operation of the electronic device 900 are also stored.
  • the processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904.
  • An input/output (I/O) interface 905 is also connected to the bus 904.
  • Generally, the following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 907 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 908 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 909.
  • the communication device 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 9 shows an electronic device 900 having various devices, it is not required to implement or provide all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • According to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, and the computer program contains program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 909, or installed from the storage device 908, or installed from the ROM 902. When the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • a computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium, and it can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
  • clients and servers can communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: display a first editing page of a first target video, and obtain first editing information of a first editing operation performed by a user on the first target video in the first editing page, wherein the first target video is stored in a cloud; receive a first trigger operation acting on a switching control in the first editing page; in response to the first trigger operation, display a publishing page; receive a second trigger operation acting on a publishing control in the publishing page; and in response to the second trigger operation, send the first editing information to the cloud, so that the first target video is synthesized into a second target video in the cloud according to the first editing information and published.
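  • As a rough illustration of the program flow just described, the client only records editing information while the first editing page is shown, switches to the publishing page on the first trigger operation, and on the second trigger operation hands the recorded information to the cloud instead of uploading a rendered file. The sketch below is illustrative only; the class and method names are assumptions, and the cloud call is replaced by a stub.

    # Illustrative sketch only: client-side flow for the first editing page, the switching
    # control and the publishing control. Names are assumptions, not the disclosed API.
    class VideoEditingClient:
        def __init__(self, first_target_video_id, cloud):
            self.video_id = first_target_video_id
            self.cloud = cloud                    # object exposing publish_with_edits() (assumed)
            self.page = "first_editing_page"
            self.first_editing_info = []

        def record_editing_operation(self, operation):
            # Each first editing operation is recorded as data; nothing is rendered locally.
            self.first_editing_info.append(operation)

        def on_switch_control(self):
            # First trigger operation: switch from the first editing page to the publishing page.
            self.page = "publishing_page"

        def on_publish_control(self):
            # Second trigger operation: send the first editing information to the cloud, which
            # synthesizes the second target video and publishes it.
            if self.page != "publishing_page":
                raise RuntimeError("publish control is only available on the publishing page")
            return self.cloud.publish_with_edits(self.video_id, self.first_editing_info)

    class FakeCloud:
        def publish_with_edits(self, video_id, edits):
            return {"video_id": video_id, "applied_edits": len(edits), "status": "published"}

    client = VideoEditingClient("video-123", FakeCloud())
    client.record_editing_operation({"type": "trim", "start_ms": 0, "end_ms": 12000})
    client.on_switch_control()
    print(client.on_publish_control())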
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (eg, using an Internet service provider to connect through the Internet).
  • each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
  • in some cases, the name of a module does not constitute a limitation on the unit itself.
  • exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • Example 1 provides a video processing method, including: displaying a first clip page of a first target video, and obtaining first clip information of a first clip operation performed by a user on the first target video in the first clip page, wherein the first target video is stored in a cloud; receiving a first trigger operation acting on a switching control in the first clip page; in response to the first trigger operation, displaying a publishing page; receiving a second trigger operation acting on a publishing control in the publishing page; and in response to the second trigger operation, sending the first clip information to the cloud, so that the first target video is synthesized into a second target video in the cloud according to the first clip information and published.
  • Example 2: According to the method of Example 1, before the displaying of the first clip page of the first target video, the method further includes:
  • synthesizing a video material to be synthesized selected by the user into the first target video, wherein the video material to be synthesized includes a first video material to be synthesized and/or a second video material to be synthesized, the first video material to be synthesized is stored in the cloud, and the second video material to be synthesized is stored locally.
  • Example 3: According to the method described in Example 2, the synthesizing of the video material to be synthesized selected by the user into the first target video includes:
  • receiving a publishing operation for the first target video; and in response to the publishing operation, requesting the cloud to synthesize the video material to be synthesized selected by the user into the first target video and publish it.
  • Example 4: According to the method described in Example 2, the synthesizing of the video material to be synthesized selected by the user into the first target video includes:
  • receiving a saving operation for the first target video; and in response to the saving operation, requesting the cloud to synthesize the video material to be synthesized selected by the user into the first target video and store it.
  • Example 5: According to the method described in Example 2, the video material to be synthesized includes a second video material to be synthesized, and the synthesizing of the video material to be synthesized selected by the user into the first target video includes:
  • receiving an upload operation for the first target video; in response to the upload operation, obtaining a first time length required for uploading the second video material to be synthesized selected by the user to the cloud, and a second time length required for synthesizing the second video material to be synthesized selected by the user into the first target video and uploading the first target video to the cloud;
  • if the first time length is less than or equal to the second time length, uploading the second video material to be synthesized to the cloud, so that the uploaded second video material to be synthesized is synthesized into the first target video in the cloud;
  • if the first time length is greater than the second time length, synthesizing the second video material to be synthesized into the first target video, and uploading the first target video to the cloud.
  • Example 6: According to the method described in Example 2, the synthesizing of the video material to be synthesized selected by the user into the first target video includes:
  • displaying a second clip page of the video material to be synthesized selected by the user, and obtaining second clip information of a second clip operation performed by the user on the video material to be synthesized in the second clip page; and sending the second clip information to the cloud, so that the video material to be synthesized is synthesized into the first target video in the cloud according to the second clip information.
  • Example 7: According to the method described in Example 2, the synthesizing of the video material to be synthesized selected by the user into the first target video includes:
  • displaying a second clip page of the video material to be synthesized selected by the user, and obtaining second clip information of a second clip operation performed by the user on the video material to be synthesized in the second clip page; and synthesizing the video material to be synthesized into the first target video according to the second clip information.
  • Example 8 provides a video processing apparatus, including:
  • the first display module is configured to display the first clip page of the first target video, and obtain first clip information of a first clip operation performed by the user on the first target video in the first clip page, wherein the first target video is stored in the cloud;
  • a first receiving module configured to receive a first trigger operation acting on a switching control in the first clipping page
  • a second display module configured to display a publishing page in response to the first trigger operation
  • a second receiving module configured to receive a second trigger operation acting on the publishing control in the publishing page
  • a video publishing module configured to send the first clip information to the cloud in response to the second trigger operation, so that the first target video is synthesized into a second target video in the cloud according to the first clip information and published.
  • Example 9 provides an electronic device, comprising:
  • one or more processors;
  • a memory configured to store one or more programs;
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the video processing method according to any one of Examples 1-7.
  • Example 10 provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the video processing method according to any one of Examples 1-7.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed herein are a video processing method and apparatus, an electronic device and a storage medium. The video processing method includes: displaying a first editing page of a first target video, and obtaining first editing information of a first editing operation performed by a user on the first target video in the first editing page, wherein the first target video is stored in a cloud; receiving a first trigger operation acting on a switching control in the first editing page; in response to the first trigger operation, displaying a publishing page; receiving a second trigger operation acting on a publishing control in the publishing page; and in response to the second trigger operation, sending the first editing information to the cloud, so that the first target video is synthesized into a second target video in the cloud according to the first editing information and published.

Description

视频的处理方法、装置、电子设备和存储介质
本申请要求在2021年03月15日提交中国专利局、申请号为202110278694.5的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。
技术领域
本公开涉及计算机技术领域,例如涉及一种视频的处理方法、装置、电子设备和存储介质。
背景技术
用户可以在本地通过视频编辑器对视频进行编辑与合成,并将合成后的视频上传到视频平台的云端进行发布。
发明内容
本公开提供一种视频的处理方法、装置、电子设备和存储介质,以简化修改已上传的视频所需的操作,减少用户的等待时间。
本公开提供了一种视频的处理方法,包括:
显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,其中,所述第一目标视频存储于云端;
接收作用于所述第一剪辑页面中的切换控件的第一触发操作;
响应于所述第一触发操作,显示发布页面;
接收作用于所述发布页面中的发布控件的第二触发操作;
响应于所述第二触发操作,将所述第一剪辑信息发送至所述云端,以在所述云端根据所述第一剪辑信息将所述第一目标视频合成为第二目标视频并进行发布。
本公开还提供了一种视频的处理装置,包括:
第一显示模块,设置为显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,其中,所述第一目标视频存储于云端;
第一接收模块,设置为接收作用于所述第一剪辑页面中的切换控件的第一触发操作;
第二显示模块,设置为响应于所述第一触发操作,显示发布页面;
第二接收模块,设置为接收作用于所述发布页面中的发布控件的第二触发操作;
视频发布模块,设置为响应于所述第二触发操作,将所述第一剪辑信息发送至所述云端,以在所述云端根据所述第一剪辑信息将所述第一目标视频合成为第二目标视频并进行发布。
本公开还提供了一种电子设备,包括:
一个或多个处理器;
存储器,设置为存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现上述的视频的处理方法。
本公开还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述的视频的处理方法。
附图说明
图1为本公开实施例提供的一种视频的处理方法的流程示意图;
图2为本公开实施例提供的一种第一剪辑页面示意图;
图3为本公开实施例提供的一种发布页面示意图;
图4为本公开实施例提供的一种发布信息页面示意图;
图5为本公开实施例提供的另一种视频的处理方法的流程示意图;
图6为本公开实施例提供的一种第二剪辑页面示意图;
图7为本公开实施例提供的一种上传页面示意图;
图8为本公开实施例提供的一种视频的处理装置的结构框图;
图9为本公开实施例提供的一种电子设备的结构示意图。
具体实施方式
下面将参照附图描述本公开的实施例。虽然附图中显示了本公开的一些实施例,然而,本公开可以通过多种形式来实现,提供这些实施例是为了理解本公开。本公开的附图及实施例仅用于示例性作用。
本公开的方法实施方式中记载的多个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。 本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
相关技术中,当用户对上传到视频平台的云端的视频不满意时,需要首先在本地通过视频编辑器对视频进行修改,并在修改完成后再次对修改后的视频进行合成并上传到视频平台的云端,然后通过用户执行发布操作将视频发布到视频平台。然而,视频发布方案中,视频合成并上传完成后用户才能执行发布操作发布视频,需要等待较长的时间,导致用户体验不佳。
图1为本公开实施例提供的一种视频的处理方法的流程示意图。该方法可以由视频的处理装置执行,其中,该装置可以由软件和/或硬件实现,可配置于电子设备中,例如,该装置可以配置在手机或平板电脑中。本公开实施例提供的视频的处理方法适用于对已上传的视频进行剪辑与发布的场景。如图1所示,本实施例提供的视频的处理方法可以包括:
S101、显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,所述第一目标视频存储于云端。
第一目标视频可以理解为用户上传至云端的视频,如待合成的一视频素材或者已合成的视频,当第一目标视频为已合成的视频时,该第一目标视频可以为用户在本地合成并上传到云端的视频或者用户在云端合成的视频,其可以为已发布或尚未发布的视频。相应的,第一剪辑页面可以用于供用户对第一目标视频进行剪辑的剪辑页面;第一剪辑操作可以为用户在第一剪辑页面中对第一目标视频所进行的剪辑操作;第一剪辑信息可以为用户在第一剪辑页面中所执行的第一剪辑操作的信息。换言之,电子设备可以通过第一剪辑页面来获取用 户拟(期望)针对第一目标视频所进行的剪辑操作的第一剪辑信息,但是,与相关技术不同,本实施例中,电子设备在获取第一剪辑信息之后,并不在本地根据第一剪辑信息对第一目标视频进行剪辑(包括并不合成第二目标视频),而是将第一剪辑信息发送至云端,以在云端根据第一剪辑信息对第一目标视频进行剪辑(包括将第一目标视频合成为第二目标视频)。
图2为本实施例提供的一种第一剪辑页面示意图。如图2所示,第一剪辑页面中可以设置有控件区域20以及主显示区域21,控件区域20中可以设置有素材库控件、配乐控件、文字控件、贴纸控件、字幕控件、滤镜控件、转场控件和/或特效控件等控件,主显示区域21内可以设置有第一区域210、预览区域211、第二区域212和剪辑区域213。从而,用户可以通过触发控件区域20中的素材库控件/配乐控件/文字控件/贴纸控件/滤镜控件/转场控件/特效控件指示电子设备在第一区域210显示可供用户选择的视频素材(如显示开发商提供的在线视频素材和/或用户导入的本地视频素材)/配乐/文字样式/贴纸/滤镜/转场视频/特效,并在第二区域中显示视频素材/配乐/文字样式/贴纸/滤镜/转场视频/特效的编辑控件,以供用户进行编辑,或者,通过触发字幕控件以在第一区域中输入视频的字幕;或者,通过剪辑区域213对第一目标视频和/或新添加的视频素材进行剪辑,如通过拖动视频轨道的左右边界剪辑视频等等,并可以在预览区域212查看待合成的第二目标视频的预览效果。
在本实施例中,用户在将第一目标视频上传至云端后,或者,用户在发布第一目标视频后,若对已上传或已发布的第一目标视频不满意,可以指示电子设备显示第一目标视频的第一剪辑页面,并通过第一剪辑页面对第一目标视频进行修改。
示例性的,如图3所示,若第一目标视频为用户已经上传至云端但未发布的视频,用户在欲对第一目标视频进行修改时,可以控制电子设备显示第一目标视频的发布页面,通过触发该发布页面中的第一修改视频控件30指示电子设备显示第一修改方式选择窗口31,并触发该第一修改方式选择窗口中的第一在线修改控件310;相应的,电子设备在监测到用户触发第一在线修改控件310时,可以将当前显示页面由第一目标视频的发布页面切换为第一目标视频的第一剪辑页面,如图2所示;从而,用户可以在第一剪辑页面中对第一目标视频进行修改。此外,如图3所示,第一目标视频的发布页面中还可以显示有第一目标视频在发布时的基本信息,如第一目标视频的标题、封面、简介、类型、参与活动和/或内容同步信息等,以供用户进行查看与编辑;第一修改方式选择窗口31中还可以设置有第一重新上传控件311,从而,用户可以通过触发该第一重新上传控件311重新上传新的第一目标视频。
如图4所示,若第一目标视频为用户已发布的视频,用户在欲对第一目标视频进行修改时,可以控制电子设备显示第一目标视频的发布信息页面(即发布详情页面),通过触发该发布信息页面中的第二修改视频控件40指示电子设备显示第二修改方式选择窗口41,并触发该第二修改方式选择窗口41中的第二在线修改控件410;相应的,电子设备在监测到用户触发第二在线修改控件410时,可以将当前显示页面由第一目标视频的发布信息页面切换为第一目标视频的第一剪辑页面,如图2所示;从而,用户可以在第一剪辑页面中对第一目标视频进行修改。此外,如图4所示,第一目标视频的发布信息页面中还可以显示有第一目标视频发布时的基本信息、第一目标视频发布时的原始标题以及第一目标视频的当前视频标题;第二修改方式选择窗口41中还可以设置有第二重新上传控件411,从而,用户可以通过触发该第二重新上传控件411重新上传新的第一目标视频。
在一个实施方式中,在所述显示第一目标视频的第一剪辑页面之前,还包括:将用户选择的待合成视频素材合成为所述第一目标视频,所述待合成视频素材包括第一待合成视频素材和/或第二待合成视频素材,所述第一待合成视频素材存储于云端,所述第二待合成视频素材存储于本地。
待合成视频素材可以理解为用户选择的、欲对其进行合成的视频素材,其可以包括存储于云端的第一待合成视频素材和/或存储于本地的第二待合成视频素材。
在上述实施方式,第一目标视频可以为用户合成的视频。相应的,用户可以基于本地的视频素材和/或云端存储的视频素材合成第一目标视频,例如,用户可以在本地合成第一目标视频并将其上传到云端,或者,将本地的第二待合成视频素材上传至云端并基于上传后的第二待合成视频素材和/或开发商提供的第一待合成视频素材在云端合成第一目标视频等,本实施例不对此进行限制。
S102、接收作用于所述第一剪辑页面中的切换控件的第一触发操作。
S103、响应于所述第一触发操作,显示发布页面。
第一剪辑页面中的切换控件可以理解为第一剪辑页面中所设置的、用于供用户触发以指示电子设备将当前显示页面由第一剪辑页面切换为发布页面的控件。相应的,第一触发操作可以为任一触发第一剪辑页面中的切换控件的操作,如作用于第一剪辑页面中的切换控件的点击操作等;该发布页面可以为供用户执行第二触发操作以指示电子设备通过云端合成并发布第二目标视频的页面,即电子设备上由第一剪辑页面切换后的页面,在用户触发该发布页面中的发布控件时,电子设备可以将第一剪辑信息发送至云端,请求云端进行视频合成与发布。
如图2所示,电子设备显示第一目标视频的第一剪辑页面;用户在第一剪辑页面中对第一目标视频进行剪辑操作,并在欲发布执行剪辑操作后的第一目标视频时,触发第一剪辑页面中的切换控件23;电子设备在监测到用户触发第一剪辑页面中的切换控件23时,确定接收到第一触发操作,并将当前显示页面有第一剪辑页面切换为发布页面(与第一目标视频的发布页面类似);从而,用户可以在发布页面中编辑待合成的第二目标视频发布时的基本信息等发布信息,并可以在编辑完成后通过触发发布页面中的发布控件指示电子设备在云端合成并发布第二目标视频。
S104、接收作用于所述发布页面中的发布控件的第二触发操作。
发布页面中的发布控件可用于供用户触发以指示电子设备在云端将第一目标视频合成为第二目标视频并进行发布的操作;第二触发操作可以为触发发布页面中的发布控件的操作,如点击发布页面中的发布控件的操作等。
在本实施例中,用户可以通过触发发布页面中的发布控件指示电子设备通过云端合成并发布第二目标视频。示例性的,电子设备显示发布页面,用户在欲发布第二目标视频时,可以触发发布页面中的发布控件;相应的,电子设备在监测到用户触发发布控件时,可以确定接收到针对第二目标视频的第一发布操作。
S105、响应于所述第二触发操作,将所述第一剪辑信息发送至云端,以在云端根据所述第一剪辑信息将所述第一目标视频合成为第二目标视频并进行发布。
第二目标视频可以为根据用户对第一目标视频的第一剪辑信息对第一目标视频进行处理后得到的视频。
在本实施例中,用户可以通过云剪辑的方式对第一目标视频进行剪辑,通过电子设备(本地终端)获取用户输入的第一剪辑信息,此时,电子设备可以不执行与第一剪辑信息对应的剪辑合成操作,而是在接收到用户触发发布页面中的发布控件的触发操作时,将第一剪辑信息发送至云端,由云端来进行剪辑合成,从而,用户输入对第一目标视频的剪辑操作,无需等待视频合成即可通过执行视频发布操作,在执行视频发布操作后即可以根据需要进行页面切换,无需停留在发布页面等待视频合成,也无需再次上传合成后的视频,能够简化视频剪辑与发布过程所需的操作,减少用户的等待时间,提高用户体验。
电子设备在接收到作用于发布页面中的发布控件的第二触发操作时,可以首先将所获取的用户在第一剪辑页面中对第一目标视频进行第一剪辑操作的第一剪辑信息发送给云端。相应的,云端在接收到电子设备发送的第一剪辑信息 后,可以根据该第一剪辑信息将第一目标视频合成为第二目标视频,如将第一目标视频与用户新添加的视频素材、配乐、贴纸、字幕、滤镜和/或转场视频等进行合成,得到第二目标视频,并发布该第二目标视频,如直接将第二目标视频发布到当前的视频平台,或者,将第二目标视频发布到当前的视频平台以及用户选择的其他视频平台等。
本实施例提供的视频的处理方法,显示云端存储的第一目标视频的第一剪辑页面,并获取用户在该第一剪辑页面中对第一目标视频进行第一剪辑操作的第一剪辑信息,接收作用于第一剪辑页面中的切换控件的第一触发操作,响应于该第一触发操作,显示发布页面,接收作用于发布页面中的发布控件的第二触发操作,响应于该第二触发操作,将所获取的第一剪辑信息发送至云端,以在云端根据该第一剪辑信息将第一目标视频合成为第二目标视频并进行发布。本实施例通过采用上述技术方案,支持用户对已上传的第一目标视频进行云剪辑,并在接收到用户发布第二目标视频的操作时,在云端将第一目标视频合成为第二目标视频,从而,无需用户将第一目标视频下载到本地进行剪辑或等待第二目标视频合成,也无需再次上传合成的第二目标视频,能够简化用户对位于云端的视频进行修改时所需的操作,并减少用户的等待时间。
图5为本公开实施例提供的另一种视频的处理方法的流程示意图。本实施例中的方案可以与上述实施例中的一个或多个可选方案组合。可选的,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:接收针对所述第一目标视频的发布操作;响应于所述发布操作,请求云端将用户选择的待合成视频素材合成为所述第一目标视频并进行发布。
可选的,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:接收针对所述第一目标视频的保存操作;响应于所述保存操作,请求云端将用户选择的待合成视频素材合成为所述第一目标视频并进行存储。
可选的,所述待合成视频素材包括第二待合成视频素材,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:接收针对所述第一目标视频的上传操作;响应于所述上传操作,获取将用户选择的第二待合成视频素材上传至云端所需的第一时间长度,以及,将用户选择的第二待合成视频素材合成为所述第一目标视频并将所述第一目标视频上传至云端所需的第二时间长度;如果所述第一时间长度小于或等于所述第二时间长度,则将所述第二待合成视频素材上传至云端,以在云端将上传后的第二待合成视频素材合成为第一目标视频;如果所述第一时间长度大于所述第二时间长度,则将所述第二待合成视频素材合成为第一目标视频,并将所述第一目标视频上传至云端。
相应的,如图5所示,本实施例提供的视频的处理方法可以包括:
S201、接收针对第一目标视频的发布操作。
S202、响应于所述发布操作,请求云端将用户选择的待合成视频素材合成为所述第一目标视频并进行发布,执行S209,所述待合成视频素材包括第一待合成视频素材和/或第二待合成视频素材,所述第一待合成视频素材存储于云端,所述第二待合成视频素材存储于本地。
针对第一目标视频的发布操作可以为发布第一目标视频的操作,如作用于第一目标视频的发布页面中的发布控件的触发操作,如点击第一目标视频的发布页面中的发布控件的操作。
在本实施例中,电子设备可以基于用户的相应剪辑操作对用户所选择的视频素材(如开发商提供的在线视频素材和/或用户上传至云端的本地视频素材)进行在线剪辑,在用户发布第一目标视频时再通过云端根据用户的剪辑信息将至少一个视频素材合成为第一目标视频并进行发布,从而无需执行将第一目标视频上传至云端的操作,进而也无需用户等待第一目标视频上传;并且,在用户剪辑完成后即显示第一目标视频的发布页面,而并非将用户剪辑后的至少一个待合成视频素材合成为第一目标视频后再显示第一目标视频的发布页面,使得用户在对每个待合成视频素材剪辑完成后即可执行发布第一目标视频额操作,无需等待第一目标视频的合成,由此,能够大大减少用户在制作与发布视频时的等待时间,提高用户制作与发布视频的积极性。
用户在对每个待合成视频素材剪辑完成后欲发布第一目标视频时,可以指示电子显示第一目标视频的发布页面,如图3所示,并触发该发布页面中的发布控件32;相应的,电子设备在监测到用户触发发布页面中的发布控件32时,可以确定接收到针对第一目标视频的发布操作,并响应于该发布操作,请求云端将至少一个待合成视频素材合成为第一目标视频并发布该第一目标视频。
S203、接收针对所述第一目标视频的保存操作。
S204、响应于所述保存操作,请求云端将用户选择的第一待合成视频素材合成为所述第一目标视频并进行存储,执行S209。
针对第一目标视频的保存操作可以理解为在云端合成并保存第一目标视频的触发操作,如点击第一目标视频的保存控件的操作,该保存控件可以在用户对每个视频素材进行了剪辑操作但未发布第一目标视频时进行显示,如在用户未执行针对第一目标视频的发布操作但欲关闭第一目标视频的发布页面(如将当前显示的第一目标视频的发布页面切换为其他页面)时进行显示和/或在用户欲关闭待合成视频素材的第二剪辑页面时进行显示。
以在第一目标视频的发布页面中显示第一目标视频的保存控件为例,如图3所示,电子设备显示第一目标视频的发布页面。用户在不欲发布第一目标视频时,可以触发该发布页面中的关闭控件(图3中未示出)或者触发该发布页面中用于切换至其他页面的页面控件(图3中未示出)。电子设备在监测到用户触发第一目标视频的发布页面中的关闭控件/页面控件时,显示保存提示窗口,以提示用户保存第一目标视频,并在监测到用户点击保存提示窗口中的保存控件时,关闭第一目标视频的发布页面或者将第一目标视频的发布页面切换为用户所触发的页面控件对应的页面,并请求云端将用户选择的待合成视频素材合成为第一目标视频并进行存储;在监测到用户点击保存提示窗口中的不保存控件时,直接关闭第一目标视频的发布页面或者将第一目标视频的发布页面切换为用户所触发的页面控件对应的页面。
电子设备也可以在用户关闭待合成视频素材的第二剪辑页面、在用户关闭第一目标视频的发布页面或者在将第一目标视频的发布页面切换为其他页面时,自动请求云端将至少一个待合成视频素材合成为第一目标视频并进行保存,无需用户对第一目标视频进行保存,从而简化用户所需执行的操作。
S205、接收针对所述第一目标视频的上传操作。
S206、响应于所述上传操作,获取将用户选择的第二待合成视频素材上传至云端所需的第一时间长度,以及,将用户选择的第二待合成视频素材合成为所述第一目标视频并将所述第一目标视频上传至云端所需的第二时间长度,执行S207或S208,所述第二待合成视频素材存储于本地。
上传操作可以理解为上传第一目标视频的操作,如显示第一目标视频的发布页面的触发操作或者发布第一目标视频的触发操作等。第二待合成视频素材可以理解为用户在本地对其进行剪辑且欲采用其制作第一目标视频的本地视频素材。第一时间长度可以为直接将至少一个第二待合成视频素材(以及用户的剪辑信息)上传至云端所需的时间长度,第二时间长度可以为在本地将至少一个第二待合成视频素材合成为第一目标视频并将合成得到的第一目标视频上传至云端所需的时间长度,第一时间长度和第二时间长度可以根据每个视频素材的大小以及当前时刻的网速等确定。
在一个实施方式中,电子设备可以在接收到显示第一目标视频的发布页面的触发操作时即上传第一目标视频。实例性的,电子设备显示存储于本地的第二待合成视频素材的第二剪辑页面;用户在该第二剪辑页面中对每个第二待合成视频素材进行剪辑,并在剪辑完成欲发布第一目标视频时,触发第二剪辑页面中的切换控件;相应的,电子设备在监测到用户触发第二剪辑页面中的切换控件时,确定接收到针对第一目标视频的上传操作,将当前显示页面由第二剪 辑页面切换为第一目标视频的发布页面,并获取将用户的剪辑信息以及至少一个第二待合成视频上传至云端所需的第一时间长度,以及,在本地将至少一个第二待合成视频素材合成为第一目标视频并将该第一目标视频上传至云端所需的第二时间长度,以便采用所需时间长度较短的上传方式上传第一目标视频。
在另一个实施方式中,电子设备也可以在接收到发布第一目标视频的触发操作时,再上传第二目标视频。示例性的,电子设备显示存储于本地的第二待合成视频素材的第二剪辑页面;用户在该第二剪辑页面中对每个第二待合成视频素材进行剪辑,并在剪辑完成后,触发第二剪辑页面中的切换控件;电子设备在监测到用户触发第二剪辑页面中的切换控件时,确定接收到针对第一目标视频的上传操作,将当前显示页面由第二剪辑页面切换为第一目标视频的发布页面;从而,用户可以在第一目标视频的发布页面中编辑第一目标视频的发布信息,并在编辑完成后触发该发布页面中的发布控件;相应的,电子设备在监测到用户触发第一目标视频的发布页面中的发布控件时,确定接收到针对第一目标视频的上传操作,获取将用户的剪辑信息以及至少一个第二待合成视频素材上传至云端所需的第一时间长度,以及,在本地将至少一个第二待合成视频素材合成为第一目标视频并将该第一目标视频上传至云端所需的第二时间长度,以便采用所需时间长度较短的上传方式上传第一目标视频,并在第一目标视频上传完成后,发布第一目标视频。
S207、如果所述第一时间长度小于或等于所述第二时间长度,则将所述第二待合成视频素材上传至云端,以在云端将上传后的第二待合成视频素材合成为第一目标视频,执行S209。
S208、如果所述第一时间长度大于所述第二时间长度,则将所述第二待合成视频素材合成为第一目标视频,并将所述第一目标视频上传至云端。
以上传操作为作用于第一目标视频的发布页面中的发布控件的触发操作为例,电子设备在获取到第一时间长度和第二时间长度后,可以判断第一时间长度与第二时间长度的相对大小,如果第一时间长度小于或等于第二时间长度,则可以将每个第二待合成视频素材上传至云端,请求云端将至少一个第二待合成视频素材合成为第一目标视频(即先上传再合成)并进行发布;如果第一时间长度大于第二时间长度,则可以在本地将至少一个第二待合成视频素材合成为第一目标视频,将合成得到的第一目标视频上传至云端并请求云端发布第一目标视频(即先合成再上传)。
当先上传再合成的方式包括多种子上传方式(如分片上传和多线程上传等)时,可以采用所需时间长度最短的子上传方式上传每个第二待合成视频素材,相应的,第一时间长度为该多种子上传方式中所需时间最短的子上传方式所需 的时间长度;当先合成再上传的方式包括多种子上传方式(如边合成边上传、压缩后再合成上传和分片后再合成上传等)时,可以采用所需时间长度最短的子上传方式合成并上传第一目标视频,相应的,第二时间长度为该多种子上传方式中所需时间最短的子上传方式所需的时间长度。
在本实施例中,用户也可以在本地对本地的第二待合成视频素材进行剪辑,并在剪辑完成后,再将第二目标视频上传至云端,相应的,电子设备在监测到用户的上传操作时,可以采用所需时间长度最短的上传方式上传第一目标视频,从而减少上传所需的时间。
S209、显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,所述第一目标视频存储于云端。
S210、接收作用于所述第一剪辑页面中的切换控件的第一触发操作。
S211、响应于所述第一触发操作,显示发布页面。
S212、接收作用于所述发布页面中的发布控件的第二触发操作。
S213、响应于所述第二触发操作,将所述第一剪辑信息发送至云端,以在云端根据所述第一剪辑信息将所述第一目标视频合成为第二目标视频并进行发布。
在一个实施方式中,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:显示用户选择的待合成视频素材的第二剪辑页面,并获取用户在所述第二剪辑页面中对所述待合成视频素材进行第二剪辑操作的第二剪辑信息;将所述第二剪辑信息发送至云端,以在云端根据所述第二剪辑信息将所述待合成视频素材合成为所述第一目标视频。
第二剪辑页面可以为用于供用户选择并剪辑生成第一目标视频所需的待合成视频素材(包括第一待合成视频素材和/或第二待合成视频素材)的页面,如图6所示,第二剪辑页面中多个区域所显示的内容以及多个控件的功能与第一剪辑页面类似,此处不再进行详细描述;第二剪辑操作可以为用户在第二剪辑页面中对每个待合成视频素材所进行的剪辑操作;第二剪辑信息可以为用户在第二剪辑页面中所执行的第二剪辑操作信息。
在上述实施方式中,电子设备可以通过云端合成第一目标视频。如图6所示,电子设备显示待合成视频素材的第二剪辑页面。从而,用户可以在剪辑页面中对待合成视频素材进行剪辑,并在剪辑完成后触发第二剪辑页面中的切换控件60。相应的,电子设备记录用户在第二剪辑页面中对每个待合成视频素材的第二剪辑信息,并在监测到用户触发第二剪辑页面中的切换控件60时,显示 第一目标视频的发布页面,如图3所示。进而,用户在欲发布第一目标视频时,可以触发第一目标视频的发布页面中的发布控件32。电子设备在监测到用户触发第一目标视频的发布页面中的发布控件32时,可以将第二剪辑信息发送至云端,以在云端根据该第二剪辑信息将该待合成视频素材合成为第一目标视频并进行发布。
在另一个实施方式中,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:显示用户选择的待合成视频素材的第二剪辑页面,并获取用户在所述第二剪辑页面中对所述待合成视频素材进行第二剪辑操作的第二剪辑信息;根据所述第二剪辑信息将所述待合成视频素材合成为所述第一目标视频。
在上述实施方式中,电子设备可以合成第一目标视频。如图6所示,电子设备显示待合成视频素材的第二剪辑页面。从而,用户可以在剪辑页面中对待合成视频素材进行剪辑,并在剪辑完成后触发第二剪辑页面中的切换控件60。相应的,电子设备记录用户在第二剪辑页面中对每个待合成视频素材的第二剪辑信息,并在监测到用户触发第二剪辑页面中的切换控件60时,显示第一目标视频的发布页面,如图3所示。进而,用户在欲发布第一目标视频时,可以触发第一目标视频的发布页面中的发布控件32。电子设备在监测到用户触发第一目标视频的发布页面中的发布控件32时,可以根据该第二剪辑信息将该待合成视频素材合成为第一目标视频,并将第一目标视频发送至云端,以在云端发布该第一目标视频。
在一个示例性的场景中,用户可以基于本地和/或云端存储的一个或多个视频素材在线生成视频。示例性的,如图7所示,电子设备显示上传页面。用户在欲基于本地和/或云端存储的视频素材在线合成视频时,可以触发上传页面中的在线制作控件70。电子设备在监测到用户触发该在线制作控件70时,将当前显示页面由上传页面切换为第二剪辑页面,如图6所示;从而,用户可以在第二剪辑页面中导入和/或选择待合成视频素材,对待合成视频素材进行剪辑,并在剪辑完成后,触发第二剪辑页面中的切换控件60。电子设备在监测到用户触发第二剪辑页面中的切换控件60时,将当前显示页面由第二剪辑页面切换为第一目标视频的发布页面,如图3所示。进而,用户可以在该发布页面中编辑第一目标视频的发布信息,通过触发该发布页面的第一修改方式选择窗口31中第一在线修改控件310指示电子设备将当前显示页面重新切换回第二剪辑页面,通过触发该发布页面中的发布控件32指示电子设备请求云端发布第一目标视频,通过触发该发布页面中的其他页面的页面控件或关闭控件指示电子设备切出该发布页面。相应的,电子设备在监测到用户触发第一在线修改控件310时,可以将当前显示页面由第一目标视频的发布页面切换为第二剪辑页面;在监测到用户触发第一目标视频的发布页面中的发布控件32时,将用户在第二剪辑页面 中的第二剪辑信息发送给云端,以在云端根据该第二剪辑信息将至少一个待合成视频素材合成为第一目标视频并进行发布;在监测到用户触发第一目标视频的发布页面中的页面控件或关闭控件时,执行相应的页面切换操作或关闭第一目标视频的发布页面,并可以将用户在第二剪辑页面中的第二剪辑信息发送给云端,以在云端根据该第二剪辑信息将至少一个待合成视频素材合成为第一目标视频并进行存储。
在另一个示例性的场景中,用户可以将本地的一个视频上传到云端进行剪辑和/或发布。示例性的,如图7所示,电子设备显示电子设备显示上传页面。用户在欲上传本地的一视频/视频素材时,可以将该视频/视频素材拖动至上传区域71或者点击上传区域71以选择本地的视频/视频素材。电子设备在监测到用户的拖动操作或选择操作时,将用户拖动或选择的视频/视频素材作为第一目标视频,将第一目标视频上传至云端,并将当前显示页面由上传页面切换为第一目标视频的发布页面,如图3所示。从而,用户可以在该发布页面中编辑第一目标视频的发布信息,并在欲发布第一目标视频时触发第一目标视频的发布页面中的发布控件32;相应的,电子设备在监测到用户触发第一目标视频的发布页面中的发布控件32时,可以请求云端发布第一目标视频。或者,用户在欲对第一目标视频进行剪辑时,可以触发第一目标视频的发布页面的第一修改方式选择窗口31中第一在线修改控件310指示电子设备将当前显示页面由该发布页面切换为第一目标视频的第一剪辑页面,如图2所示,在第一剪辑页面中对第一目标视频进行剪辑,并在剪辑完成后,触发第一剪辑页面中的切换控件23;电子设备在监测到用户触发第一在线修改控件310时,可以将当前显示页面由第一目标视频的发布页面切换为第一剪辑页面,并在监测到用户触发第一剪辑页面中的切换控件23时,将当前显示页面由第一剪辑页切换为第二目标视频的发布页面;用户可以在第二目标视频的发布页面编辑第二目标视频的发布信息,并在编辑完成后,触发第二目标视频的发布页面中的发布控件;相应的,电子设备在监测到用户触发第二目标视频的发布页面中的发布控件时,可以将用户的在第一剪辑页面中的第一剪辑信息发送至云端,以在云端根据该第一剪辑信息将第一目标视频合成为第二目标视频并进行发布。
在又一个示例性的场景中,用户可以在本地进行视频剪辑。示例性的,用户在本地对本地的待合成视频素材进行剪辑,并在剪辑完成后,通过相应的触发操作指示电子设备显示第一目标视频的发布页面。相应的,电子设备在监测到显示第一目标视频的发布页面的触发操作时,可以显示第一目标视频的发布页面,如图3所示,并获取采用每种上传方式上传第一目标视频时所需的时间长度,选取所需时间长度最短的上传方式上传与合成(先合成再上传或者先上传再合成)第一目标视频,并在监测到用户触发第一目标视频的发布页面中的 发布控件32时,请求云端发布第一目标视频;或者,电子设备在监测到显示第一目标视频的发布页面的触发操作时,显示第一目标视频的发布页面,并在监测到用户触第一目标视频的发布页面中的发布控件32时,获取采用每种上传方式上传第一目标视频时所需的时间长度,选取所需时间长度最短的上传方式上传与合成(先合成再上传或者先上传再合成)第一目标视频,并请求云端发布第一目标视频。
在本实施例中,电子设备可以基于用户的触发操作在线剪辑每个待合成视频素材,并在发布第一目标视频时,再将至少一个待合成视频素材合成为第一目标视频进行发布;也可以基于用户的触发操作在编辑剪辑视频,并在剪辑完成后,选择所需时间最短的上传方式上传与合成第一目标视频,并发布第一目标视频,从而,能够减少制作与发布视频时所需等待的时间,提高用户的体验。
图8为本公开实施例提供的一种视频的处理装置的结构框图。该装置可以由软件和/或硬件实现,可配置于电子设备中,例如,该装置可以配置在手机或平板电脑中,可通过执行视频的处理方法对视频进行处理。如图8所示,本实施例提供的视频的处理装置可以包括:第一显示模块801、第一接收模块802、第二显示模块803、第二接收模块804和视频发布模块805,其中,
第一显示模块801,设置为显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,所述第一目标视频存储于云端;第一接收模块802,设置为接收作用于所述第一剪辑页面中的切换控件的第一触发操作;第二显示模块803,设置为响应于所述第一触发操作,显示发布页面;第二接收模块804,设置为接收作用于所述发布页面中的发布控件的第二触发操作;视频发布模块805,设置为响应于所述第二触发操作,将所述第一剪辑信息发送至云端,以在云端根据所述第一剪辑信息将所述第一目标视频合成为第二目标视频并进行发布。
本实施例提供的视频的处理装置,通过第一显示模块801显示云端存储的第一目标视频的第一剪辑页面,并获取用户在该第一剪辑页面中对第一目标视频进行第一剪辑操作的第一剪辑信息,第一接收模块802接收作用于第一剪辑页面中的切换控件的第一触发操作,第二显示模块803响应于该第一触发操作,显示发布页面,第二接收模块804接收作用于发布页面中的发布控件的第二触发操作,视频发布模块805响应于该第二触发操作,将所获取的第一剪辑信息发送至云端,以在云端根据该第一剪辑信息将第一目标视频合成为第二目标视频并进行发布。本实施例通过采用上述技术方案,支持用户对已上传的第一目标视频进行云剪辑,并在接收到用户发布第二目标视频的操作时,在云端将第 一目标视频合成为第二目标视频,从而,无需用户将第一目标视频下载到本地进行剪辑或等待第二目标视频合成,也无需再次上传合成的第二目标视频,能够简化用户对位于云端的视频进行修改时所需的操作,并减少用户的等待时间。
本实施例提供的视频的处理装置还可以包括:视频合成模块,设置为在所述显示第一目标视频的第一剪辑页面之前,将用户选择的待合成视频素材合成为所述第一目标视频,所述待合成视频素材包括第一待合成视频素材和/或第二待合成视频素材,所述第一待合成视频素材存储于云端,所述第二待合成视频素材存储于本地。
在上述方案中,所述视频合成模块可以包括:第一接收单元,设置为接收针对所述第一目标视频的发布操作;第一合成单元,设置为响应于所述发布操作,请求云端将用户选择的待合成视频素材合成为所述第一目标视频并进行发布。
在上述方案中,所述视频合成模块可以包括:第二接收单元,设置为接收针对所述第一目标视频的保存操作;第二合成单元,设置为响应于所述保存操作,请求云端将用户选择的待合成视频素材合成为所述第一目标视频并进行存储。
在上述方案中,所述待合成视频素材可以包括第二待合成视频素材,所述视频合成模块可以包括:第三接收单元,设置为接收针对所述第一目标视频的上传操作;时间获取单元,设置为响应于所述上传操作,获取将用户选择的第二待合成视频素材上传至云端所需的第一时间长度,以及,将用户选择的第二待合成视频素材合成为所述第一目标视频并将所述第一目标视频上传至云端所需的第二时间长度;第一上传单元,设置为在所述第一时间长度小于或等于所述第二时间长度时,将所述第二待合成视频素材上传至云端,以在云端将上传后的第二待合成视频素材合成为第一目标视频;第二上传单元,设置为在所述第一时间长度大于所述第二时间长度时,将所述第二待合成视频素材合成为第一目标视频,并将所述第一目标视频上传至云端。
在上述方案中,所述视频合成模块可以设置为:显示用户选择的待合成视频素材的第二剪辑页面,并获取用户在所述第二剪辑页面中对所述待合成视频素材进行第二剪辑操作的第二剪辑信息;将所述第二剪辑信息发送至云端,以在云端根据所述第二剪辑信息将所述待合成视频素材合成为所述第一目标视频。
在上述方案中,所述视频合成模块可以设置为:显示用户选择的待合成视频素材的第二剪辑页面,并获取用户在所述第二剪辑页面中对所述待合成视频素材进行第二剪辑操作的第二剪辑信息;根据所述第二剪辑信息将所述待合成视频素材合成为所述第一目标视频。
本公开实施例提供的视频的处理装置可执行本公开任意实施例提供的视频的处理方法,具备执行视频的处理方法相应的功能模块和效果。未在本实施例中详尽描述的技术细节,可参见本公开任意实施例所提供的视频的处理方法。
下面参考图9,其示出了适于用来实现本公开实施例的电子设备(例如终端设备)900的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理(Personal Digital Assistant,PDA)、平板电脑(PAD)、便携式多媒体播放器(Portable Media Player,PMP)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字电视(Television,TV)、台式计算机等等的固定终端。图9示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图9所示,电子设备900可以包括处理装置(例如中央处理器、图形处理器等)901,其可以根据存储在只读存储器(Read-Only Memory,ROM)902中的程序或者从存储装置908加载到随机访问存储器(Random Access Memory,RAM)903中的程序而执行多种适当的动作和处理。在RAM 903中,还存储有电子设备900操作所需的多种程序和数据。处理装置901、ROM 902以及RAM903通过总线904彼此相连。输入/输出(Input/Output,I/O)接口905也连接至总线904。
通常,以下装置可以连接至I/O接口905:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置906;包括例如液晶显示器(Liquid Crystal Display,LCD)、扬声器、振动器等的输出装置907;包括例如磁带、硬盘等的存储装置908;以及通信装置909。通信装置909可以允许电子设备900与其他设备进行无线或有线通信以交换数据。虽然图9示出了具有多种装置的电子设备900,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置909从网络上被下载和安装,或者从存储装置908被安装,或者从ROM 902被安装。在该计算机程序被处理装置901执行时,执行本公开实施例的方法中限定的上述功能。
本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但 不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、RAM、ROM、可擦式可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、射频(Radio Frequency,RF)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如超文本传输协议(HyperText Transfer Protocol,HTTP)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(Local Area Network,LAN),广域网(Wide Area Network,WAN),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,所述第一目标视频存储于云端;接收作用于所述第一剪辑页面中的切换控件的第一触发操作;响应于所述第一触发操作,显示发布页面;接收作用于所述发布页面中的发布控件的第二触发操作;响应于所述第二触发操作,将所述第一剪辑信息发送至云端,以在云端根据所述第一剪辑信息将所述第一目标视频合成为第二目标视频并进行发布。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言— 诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括LAN或WAN—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开多种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,模块的名称在一种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(Field Programmable Gate Array,FPGA)、专用集成电路(Application Specific Integrated Circuit,ASIC)、专用标准产品(Application Specific Standard Parts,ASSP)、片上系统(System on Chip,SOC)、复杂可编程逻辑设备(Complex Programmable Logic Device CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、RAM、ROM、EPROM或快闪存储器、光纤、CD-ROM、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,示例1提供了一种视频的处理方法,包括:
显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,其中,所述第一目标视频存储于云端;
接收作用于所述第一剪辑页面中的切换控件的第一触发操作;
响应于所述第一触发操作,显示发布页面;
接收作用于所述发布页面中的发布控件的第二触发操作;
响应于所述第二触发操作,将所述第一剪辑信息发送至所述云端,以在所述云端根据所述第一剪辑信息将所述第一目标视频合成为第二目标视频并进行发布。
根据本公开的一个或多个实施例,示例2根据示例1所述的方法,在所述显示第一目标视频的第一剪辑页面之前,还包括:
将用户选择的待合成视频素材合成为所述第一目标视频,其中,所述待合成视频素材包括第一待合成视频素材和/或第二待合成视频素材,所述第一待合成视频素材存储于云端,所述第二待合成视频素材存储于本地。
根据本公开的一个或多个实施例,示例3根据示例2所述的方法,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:
接收针对所述第一目标视频的发布操作;
响应于所述发布操作,请求所述云端将所述用户选择的待合成视频素材合成为所述第一目标视频并进行发布。
根据本公开的一个或多个实施例,示例4根据示例2所述的方法,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:
接收针对所述第一目标视频的保存操作;
响应于所述保存操作,请求所述云端将所述用户选择的待合成视频素材合成为所述第一目标视频并进行存储。
根据本公开的一个或多个实施例,示例5根据示例2所述的方法,所述待合成视频素材包括第二待合成视频素材,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:
接收针对所述第一目标视频的上传操作;
响应于所述上传操作,获取将所述用户选择的第二待合成视频素材上传至所述云端所需的第一时间长度,以及,将所述用户选择的第二待合成视频素材合成为所述第一目标视频并将所述第一目标视频上传至所述云端所需的第二时间长度;
如果所述第一时间长度小于或等于所述第二时间长度,则将所述第二待合成视频素材上传至所述云端,以在所述云端将上传后的所述第二待合成视频素材合成为所述第一目标视频;
如果所述第一时间长度大于所述第二时间长度,则将所述第二待合成视频素材合成为所述第一目标视频,并将所述第一目标视频上传至所述云端。
根据本公开的一个或多个实施例,示例6根据示例2所述的方法,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:
显示所述用户选择的待合成视频素材的第二剪辑页面,并获取用户在所述第二剪辑页面中对所述待合成视频素材进行第二剪辑操作的第二剪辑信息;
将所述第二剪辑信息发送至所述云端,以在所述云端根据所述第二剪辑信息将所述待合成视频素材合成为所述第一目标视频。
根据本公开的一个或多个实施例,示例7根据示例2所述的方法,所述将用户选择的待合成视频素材合成为所述第一目标视频,包括:
显示所述用户选择的待合成视频素材的第二剪辑页面,并获取用户在所述第二剪辑页面中对所述待合成视频素材进行第二剪辑操作的第二剪辑信息;
根据所述第二剪辑信息将所述待合成视频素材合成为所述第一目标视频。
根据本公开的一个或多个实施例,示例8提供了一种视频的处理装置,包括:
第一显示模块,设置为显示第一目标视频的第一剪辑页面,并获取用户在所述第一剪辑页面中对所述第一目标视频进行第一剪辑操作的第一剪辑信息,其中,所述第一目标视频存储于云端;
第一接收模块,设置为接收作用于所述第一剪辑页面中的切换控件的第一触发操作;
第二显示模块,设置为响应于所述第一触发操作,显示发布页面;
第二接收模块,设置为接收作用于所述发布页面中的发布控件的第二触发操作;
视频发布模块,设置为响应于所述第二触发操作,将所述第一剪辑信息发送至所述云端,以在所述云端根据所述第一剪辑信息将所述第一目标视频合成 为第二目标视频并进行发布。
根据本公开的一个或多个实施例,示例9提供了一种电子设备,包括:
一个或多个处理器;
存储器,设置为存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如示例1-7中任一所述的视频的处理方法。
根据本公开的一个或多个实施例,示例10提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如示例1-7中任一所述的视频的处理方法。
此外,虽然采用特定次序描绘了多个操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了多个实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的一些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的多种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。

Claims (10)

  1. A video processing method, comprising:
    displaying a first editing page of a first target video, and obtaining first editing information of a first editing operation performed by a user on the first target video in the first editing page, wherein the first target video is stored in a cloud;
    receiving a first trigger operation acting on a switching control in the first editing page;
    in response to the first trigger operation, displaying a publishing page;
    receiving a second trigger operation acting on a publishing control in the publishing page; and
    in response to the second trigger operation, sending the first editing information to the cloud, so that the first target video is synthesized into a second target video in the cloud according to the first editing information and published.
  2. The method according to claim 1, before the displaying of the first editing page of the first target video, further comprising:
    synthesizing a video material to be synthesized selected by the user into the first target video, wherein the video material to be synthesized comprises at least one of a first video material to be synthesized and a second video material to be synthesized, the first video material to be synthesized is stored in the cloud, and the second video material to be synthesized is stored locally.
  3. The method according to claim 2, wherein the synthesizing of the video material to be synthesized selected by the user into the first target video comprises:
    receiving a publishing operation for the first target video; and
    in response to the publishing operation, requesting the cloud to synthesize the video material to be synthesized selected by the user into the first target video and publish the first target video.
  4. The method according to claim 2, wherein the synthesizing of the video material to be synthesized selected by the user into the first target video comprises:
    receiving a saving operation for the first target video; and
    in response to the saving operation, requesting the cloud to synthesize the video material to be synthesized selected by the user into the first target video and store the first target video.
  5. The method according to claim 2, wherein the video material to be synthesized comprises the second video material to be synthesized, and the synthesizing of the video material to be synthesized selected by the user into the first target video comprises:
    receiving an upload operation for the first target video;
    in response to the upload operation, obtaining a first time length required for uploading the second video material to be synthesized selected by the user to the cloud, and a second time length required for synthesizing the second video material to be synthesized selected by the user into the first target video and uploading the first target video to the cloud;
    in a case where the first time length is less than or equal to the second time length, uploading the second video material to be synthesized to the cloud, so that the uploaded second video material to be synthesized is synthesized into the first target video in the cloud; and
    in a case where the first time length is greater than the second time length, synthesizing the second video material to be synthesized into the first target video, and uploading the first target video to the cloud.
  6. The method according to claim 2, wherein the synthesizing of the video material to be synthesized selected by the user into the first target video comprises:
    displaying a second editing page of the video material to be synthesized selected by the user, and obtaining second editing information of a second editing operation performed by the user on the video material to be synthesized in the second editing page; and
    sending the second editing information to the cloud, so that the video material to be synthesized is synthesized into the first target video in the cloud according to the second editing information.
  7. The method according to claim 2, wherein the synthesizing of the video material to be synthesized selected by the user into the first target video comprises:
    displaying a second editing page of the video material to be synthesized selected by the user, and obtaining second editing information of a second editing operation performed by the user on the video material to be synthesized in the second editing page; and
    synthesizing the video material to be synthesized into the first target video according to the second editing information.
  8. A video processing apparatus, comprising:
    a first display module configured to display a first editing page of a first target video, and obtain first editing information of a first editing operation performed by a user on the first target video in the first editing page, wherein the first target video is stored in a cloud;
    a first receiving module configured to receive a first trigger operation acting on a switching control in the first editing page;
    a second display module configured to display a publishing page in response to the first trigger operation;
    a second receiving module configured to receive a second trigger operation acting on a publishing control in the publishing page; and
    a video publishing module configured to, in response to the second trigger operation, send the first editing information to the cloud, so that the first target video is synthesized into a second target video in the cloud according to the first editing information and published.
  9. An electronic device, comprising:
    at least one processor; and
    a memory configured to store at least one program,
    wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the video processing method according to any one of claims 1-7.
  10. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the video processing method according to any one of claims 1-7.
PCT/CN2022/080276 2021-03-15 2022-03-11 视频的处理方法、装置、电子设备和存储介质 WO2022194031A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22770386.5A EP4307694A1 (en) 2021-03-15 2022-03-11 Video processing method and apparatus, electronic device, and storage medium
US18/468,508 US20240005961A1 (en) 2021-03-15 2023-09-15 Video processing method and apparatus, and electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110278694.5 2021-03-15
CN202110278694.5A CN113038234B (zh) 2021-03-15 2021-03-15 视频的处理方法、装置、电子设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/468,508 Continuation US20240005961A1 (en) 2021-03-15 2023-09-15 Video processing method and apparatus, and electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2022194031A1 true WO2022194031A1 (zh) 2022-09-22

Family

ID=76470642

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080276 WO2022194031A1 (zh) 2021-03-15 2022-03-11 视频的处理方法、装置、电子设备和存储介质

Country Status (4)

Country Link
US (1) US20240005961A1 (zh)
EP (1) EP4307694A1 (zh)
CN (1) CN113038234B (zh)
WO (1) WO2022194031A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113038234B (zh) * 2021-03-15 2023-07-21 北京字跳网络技术有限公司 视频的处理方法、装置、电子设备和存储介质
CN113727145B (zh) * 2021-11-04 2022-03-01 飞狐信息技术(天津)有限公司 视频发布的方法及装置、电子设备、存储介质
CN114299691A (zh) * 2021-12-23 2022-04-08 北京字跳网络技术有限公司 视频获取方法、信息显示方法、装置、设备和介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150139615A1 (en) * 2013-11-19 2015-05-21 SketchPost, LLC Mobile video editing and sharing for social media
CN106973304A (zh) * 2017-02-14 2017-07-21 北京时间股份有限公司 基于云端的非线性剪辑方法、装置及系统
CN108924647A (zh) * 2018-07-27 2018-11-30 深圳众思科技有限公司 视频编辑方法、视频编辑装置、终端
CN109194887A (zh) * 2018-10-26 2019-01-11 北京亿幕信息技术有限公司 一种云剪视频录制及剪辑方法和插件
CN112261416A (zh) * 2020-10-20 2021-01-22 广州博冠信息科技有限公司 基于云的视频处理方法、装置、存储介质与电子设备
CN113038234A (zh) * 2021-03-15 2021-06-25 北京字跳网络技术有限公司 视频的处理方法、装置、电子设备和存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007082167A2 (en) * 2006-01-05 2007-07-19 Eyespot Corporation System and methods for storing, editing, and sharing digital video
US11664053B2 (en) * 2017-10-04 2023-05-30 Hashcut, Inc. Video clip, mashup and annotation platform
CN109040770A (zh) * 2018-08-27 2018-12-18 佛山龙眼传媒科技有限公司 一种在线剪辑的方法、系统
CN109068148A (zh) * 2018-09-03 2018-12-21 视联动力信息技术股份有限公司 一种视频处理的方法和装置
CN109889882B (zh) * 2019-01-24 2021-06-18 深圳亿幕信息科技有限公司 一种视频剪辑合成方法和系统


Also Published As

Publication number Publication date
EP4307694A1 (en) 2024-01-17
CN113038234B (zh) 2023-07-21
US20240005961A1 (en) 2024-01-04
CN113038234A (zh) 2021-06-25

Similar Documents

Publication Publication Date Title
WO2022194031A1 (zh) 视频的处理方法、装置、电子设备和存储介质
WO2021073315A1 (zh) 视频文件的生成方法、装置、终端及存储介质
CN111629252B (zh) 视频处理方法、装置、电子设备及计算机可读存储介质
US11670339B2 (en) Video acquisition method and device, terminal and medium
WO2022152064A1 (zh) 视频生成方法、装置、电子设备和存储介质
WO2022143924A1 (zh) 视频生成方法、装置、电子设备和存储介质
WO2022105272A1 (zh) 歌词特效展示方法、装置、电子设备及计算机可读介质
WO2023011142A1 (zh) 视频的处理方法、装置、电子设备和存储介质
BR112013004857B1 (pt) Método implementado por computador e sistema para controlar, usando um dispositivo móvel, apresentação de conteúdo de mídia realizado por um cliente de mídia, e método implementado por computador para apresentar conteúdo de mídia a partir de um cliente de mídia em um dispositivo de exibição
WO2022052838A1 (zh) 视频文件的处理方法、装置、电子设备及计算机存储介质
WO2021135648A1 (zh) 直播间礼物列表配置方法、装置、介质及电子设备
JP2023523067A (ja) ビデオ処理方法、装置、機器及び媒体
WO2022237571A1 (zh) 图像融合方法、装置、电子设备和存储介质
WO2023116480A1 (zh) 多媒体内容的发布方法、装置、设备、介质和程序产品
WO2023103889A1 (zh) 视频处理方法、装置、电子设备及存储介质
WO2023005831A1 (zh) 一种资源播放方法、装置、电子设备和存储介质
WO2023169356A1 (zh) 图像处理方法、装置、设备及存储介质
WO2023011176A1 (zh) 背景图的生成方法、装置、电子设备和存储介质
WO2024002120A1 (zh) 媒体内容展示方法、装置、设备及存储介质
US20240064367A1 (en) Video processing method and apparatus, electronic device, and storage medium
US20240119970A1 (en) Method and apparatus for multimedia resource clipping scenario, device and storage medium
WO2024104333A1 (zh) 演播画面的处理方法、装置、电子设备及存储介质
WO2023207543A1 (zh) 媒体内容的发布方法、装置、设备、存储介质和程序产品
WO2023155708A1 (zh) 视角的切换方法、装置、电子设备、存储介质和程序产品
WO2023134509A1 (zh) 视频推流方法、装置、终端设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22770386

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022770386

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022770386

Country of ref document: EP

Effective date: 20231013