WO2021196903A1 - Video processing method and apparatus, readable medium and electronic device - Google Patents

Video processing method and apparatus, readable medium and electronic device

Info

Publication number
WO2021196903A1
Authority
WO
WIPO (PCT)
Prior art keywords
highlight
highlight segment
target
segment
editing
Prior art date
Application number
PCT/CN2021/076415
Other languages
English (en)
French (fr)
Inventor
钟珂
林彩文
李以杰
常坤
孙振邦
龙清
唐良杰
付平非
林兆钦
李琰
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 filed Critical 北京字节跳动网络技术有限公司
Priority to EP21778706.8A (published as EP4131981A4)
Publication of WO2021196903A1
Priority to US17/885,459 (published as US20220385997A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8549 Creating video summaries, e.g. movie trailer
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Definitions

  • the present disclosure relates to the field of video technology, and in particular, to a video processing method, device, readable medium, and electronic equipment.
  • the present disclosure provides a video processing method.
  • the method includes: acquiring a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for a first target highlight segment, displaying an editing page corresponding to the first target highlight segment, where the editing page is used by the user to perform an editing operation on the first target highlight segment; processing the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and, in response to receiving a release instruction for the second highlight segment, releasing the second highlight segment.
  • the present disclosure provides a video processing device.
  • the device includes: an acquisition module, configured to acquire a first highlight segment obtained by performing highlight recognition on a target video; a first display module, configured to display, in response to an editing instruction for a first target highlight segment, an editing page corresponding to the first target highlight segment, where the editing page is used by the user to perform editing operations on the first target highlight segment; a first processing module, configured to process the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and a first publishing module, configured to publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
  • the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the method provided in the first aspect of the present disclosure are implemented.
  • the present disclosure provides an electronic device, including: a storage device on which a computer program is stored; and a processing device, configured to execute the computer program in the storage device to implement the steps of the method provided in the first aspect of the present disclosure.
  • the present disclosure provides a computer program product, including a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device executes the steps of the method provided in the first aspect of the present disclosure.
  • the present disclosure provides a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device executes the steps of the method provided in the first aspect of the present disclosure.
  • the first highlight segment obtained by performing highlight recognition on the target video is first obtained, and the first highlight segment may be a highlight segment in the target video.
  • the terminal may provide an editing page for the user to perform an editing operation on the first target highlight segment.
  • users can edit highlight clips that meet their own needs and publish them, which fits the needs of users and enhances user experience.
  • in addition, after the highlight segment is published, the audience can directly watch the highlight segment, that is, the exciting part of the target video, which makes it easier for the audience to share the video and improves the viewing experience.
  • Fig. 1 is a flowchart showing a video processing method according to an exemplary embodiment.
  • Fig. 2a is a schematic diagram showing a video playback page according to an exemplary embodiment.
  • Fig. 2b is a schematic diagram showing a video playback page according to another exemplary embodiment.
  • Fig. 2c is a schematic diagram showing a preview page according to an exemplary embodiment.
  • Fig. 2d is a schematic diagram showing a segment cropping page according to an exemplary embodiment.
  • Fig. 2e is a schematic diagram showing an effect processing page according to an exemplary embodiment.
  • Fig. 2f is a schematic diagram showing a page to be published according to an exemplary embodiment.
  • Fig. 3a is a schematic diagram showing a video playback page according to another exemplary embodiment.
  • Fig. 3b is a schematic diagram showing a preview page according to another exemplary embodiment.
  • Fig. 4 is a block diagram showing a video processing device according to an exemplary embodiment.
  • Fig. 5 is a schematic structural diagram showing an electronic device according to an exemplary embodiment.
  • Fig. 1 is a flowchart showing a video processing method according to an exemplary embodiment.
  • the method can be applied to a terminal, such as a smart phone, a tablet computer, a personal computer (PC), a notebook computer, or another terminal device.
  • the method may include S101 to S104.
  • the target video may be a video taken by the user.
  • the target video may be a short video shot by the user through the terminal, or a live playback video recorded by the user during the live broadcast on the live broadcast platform. It is worth noting that, in the following introduction of the present disclosure, the target video is a live playback video as an example for illustration, but this does not constitute a limitation to the implementation of the present disclosure.
  • Performing highlight recognition on a target video refers to recognizing a highlight moment in the target video, and the first highlight segment obtained by performing highlight recognition may be a highlight segment in the target video.
  • highlight recognition of the target video can identify one or more highlight segments.
  • the first highlight segment acquired by the terminal may include one or more sub-highlight segments, where each sub-highlight segment is an independent video clip.
  • the first target highlight segment may be any sub-highlight segment in the first highlight segment, or may include multiple sub-highlight segments selected from multiple sub-highlight segments of the first highlight segment, which is not limited in the present disclosure.
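  • For illustration only, the first highlight segment and its sub-highlight segments could be represented on the terminal roughly as in the following sketch; the class names, fields, and the select_target helper are hypothetical assumptions and are not defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SubHighlight:
    """One independent video clip identified by highlight recognition."""
    start: str             # e.g. "42:01" within the target video
    end: str               # e.g. "42:57"
    condition: str         # the highlight recognition condition it meets
    published: bool = False


@dataclass
class FirstHighlight:
    """The first highlight segment: one or more sub-highlight segments."""
    sub_segments: List[SubHighlight] = field(default_factory=list)

    def select_target(self, indices: List[int]) -> List[SubHighlight]:
        # The first target highlight segment may be any single sub-highlight
        # segment or several sub-highlight segments chosen by the user.
        return [self.sub_segments[i] for i in indices]
```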
  • Users can select the first target highlight segment for editing according to their own needs.
  • the terminal can provide an entry for editing the first target highlight segment.
  • the entry can be presented in the form of an edit button.
  • when the user operates the entry, for example by tapping the edit button, an editing instruction for the first target highlight segment can be generated.
  • after receiving the editing instruction input by the user, the terminal can display the editing page corresponding to the first target highlight segment, and the editing page can be used by the user to perform editing operations on the first target highlight segment.
  • the first target highlight segment is processed according to the user's editing operation to obtain the second highlight segment.
  • the user's editing operation may be operations such as cropping and/or effect processing on the first target highlight segment, where the effect processing includes, for example, operations such as adding special effects, adding stickers, and adding music.
  • the terminal performs corresponding processing on the first target highlight segment, and the second highlight segment edited by the user can be obtained.
  • the terminal may provide an entrance for publishing the second highlight segment.
  • the presentation form of the entrance may be a publish button.
  • when the user operates the entrance, for example by tapping the publish button, the terminal can receive the publishing instruction input by the user for the second highlight segment, and the second highlight segment can then be released, for example, to a live-streaming platform, Moments, or other video platforms.
  • the first highlight segment obtained by performing highlight recognition on the target video is first obtained, and the first highlight segment may be a highlight segment in the target video.
  • the terminal may provide an editing page for the user to perform an editing operation on the first target highlight segment.
  • users can edit highlight clips that meet their own needs and publish them, which fits the needs of users and enhances user experience.
  • in addition, after the highlight segment is published, the audience can directly watch the highlight segment, that is, the exciting part of the target video, which makes it easier for the audience to share the video and improves the viewing experience.
  • highlight recognition of the target video can be performed locally on the terminal or on the server. Based on this, the terminal can obtain the first highlight segment obtained by performing highlight recognition on the target video through the following two implementation manners.
  • the terminal may first obtain the target video, for example, obtain the live playback video recorded by the user during the live broadcast. After acquiring the target video, the terminal itself can perform highlight recognition on the target video to obtain the first highlight segment.
  • the terminal may send the target video to the server, so that the server can perform highlight recognition on the target video. After that, the server can return the highlight recognition result of the target video.
  • the terminal may obtain the first highlight segment according to the highlight recognition result of the server.
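  • As a non-authoritative sketch of the server-side implementation described above, the terminal might upload the target video and receive the recognition result over HTTP; the endpoint path and the response fields below are assumptions for illustration, not an interface defined by the disclosure.

```python
import json
import urllib.request

SERVER = "https://example.com/api"  # hypothetical server address


def recognize_on_server(video_path: str) -> dict:
    """Send the target video to the server and return its highlight recognition result."""
    with open(video_path, "rb") as f:
        payload = f.read()
    req = urllib.request.Request(
        f"{SERVER}/highlight-recognition",          # assumed endpoint
        data=payload,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed response shape:
        # {"segments": [{"start": "42:01", "end": "42:57", "condition": "most_viewers"}, ...]}
        return json.loads(resp.read().decode("utf-8"))
```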
  • the terminal or server can perform highlight recognition on the target video in a variety of ways.
  • the terminal or the server may perform highlight recognition on the target video according to a plurality of preset highlight recognition conditions to obtain a highlight recognition result.
  • the preset highlight recognition conditions may be the largest number of viewers, the largest number of comments, the largest number of gifts received, and so on.
  • the terminal or server may input the target video into a pre-trained highlight recognition model, and the highlight recognition model may output the highlight recognition result.
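  • A minimal sketch of the first approach (preset highlight recognition conditions): per-second interaction statistics from the live session are scanned for the windows with the most viewers, comments, or gifts. The window length and the statistics format are illustrative assumptions; the disclosure does not prescribe them.

```python
from typing import Dict, List, Tuple

# Per-second interaction statistics recorded during the live broadcast (assumed format),
# e.g. [{"viewers": 120, "comments": 3, "gifts": 0}, ...]
Stats = List[Dict[str, int]]


def best_window(stats: Stats, key: str, window: int = 55) -> Tuple[int, int]:
    """Return (start_second, end_second) of the window maximising the given metric."""
    best_start, best_sum = 0, -1
    for start in range(max(1, len(stats) - window + 1)):
        total = sum(s[key] for s in stats[start:start + window])
        if total > best_sum:
            best_start, best_sum = start, total
    return best_start, min(best_start + window, len(stats))


def recognize_highlights(stats: Stats) -> Dict[str, Tuple[int, int]]:
    # One sub-highlight segment per preset highlight recognition condition.
    return {
        "most_viewers": best_window(stats, "viewers"),
        "most_comments": best_window(stats, "comments"),
        "most_gifts": best_window(stats, "gifts"),
    }
```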
  • the highlight recognition result may include time period information corresponding to each sub-highlight segment in the first highlight segment in the target video.
  • for example, if the target video is 60 minutes long in total, the time period information corresponding to each sub-highlight segment in the target video may be 42:01–42:57, 45:03–45:56, 50:04–50:57, and so on.
  • the terminal can extract each sub-highlight segment from the target video according to the time period information.
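  • As an illustrative sketch (not the extraction method mandated by the disclosure), a sub-highlight segment could be cut out of the target video from its time period information with a standard ffmpeg stream copy.

```python
import subprocess


def extract_sub_highlight(target_video: str, start: str, end: str, out_path: str) -> None:
    """Cut the clip between `start` and `end`, e.g. from "00:42:01" to "00:42:57"."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", target_video, "-ss", start, "-to", end,
         "-c", "copy", out_path],
        check=True,
    )


# e.g. extract_sub_highlight("live_replay.mp4", "00:42:01", "00:42:57", "sub_201.mp4")
```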
  • the highlight recognition result may also include the highlight recognition conditions that each sub-highlight segment meets. Wherein, if the terminal or the server performs recognition based on multiple preset highlight recognition conditions, the highlight recognition conditions that each sub-highlight segment meets may be the preset highlight recognition conditions corresponding to the recognition. If the terminal or server recognizes through the highlight recognition model, the highlight recognition model can output not only the time period information corresponding to each sub-highlight segment in the target video, but also the highlight recognition conditions that the sub-highlight segment meets.
  • for example, if the sub-highlight segment corresponding to the time period 42:01–42:57 in the target video is identified because the number of viewers during the live broadcast was the largest, the highlight recognition condition that the sub-highlight segment meets can be the largest number of viewers.
  • if the sub-highlight segment corresponding to the time period 45:03–45:56 in the target video is identified because the number of comments during the live broadcast was the largest, the highlight recognition condition that the sub-highlight segment meets may be the largest number of comments.
  • if the sub-highlight segment corresponding to the time period 50:04–50:57 in the target video is identified because the number of gifts received during the live broadcast was the largest, then the highlight recognition condition that the sub-highlight segment meets can be the largest number of gifts.
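  • To make the shape of such a highlight recognition result concrete, it could be carried as a small structure pairing each time period with the condition it satisfies, as in the 60-minute example above; the field names are illustrative only.

```python
# Assumed shape of the highlight recognition result for the 60-minute example above.
highlight_result = {
    "segments": [
        {"period": ("42:01", "42:57"), "condition": "most_viewers"},
        {"period": ("45:03", "45:56"), "condition": "most_comments"},
        {"period": ("50:04", "50:57"), "condition": "most_gifts"},
    ]
}

for seg in highlight_result["segments"]:
    start, end = seg["period"]
    print(f"{start}-{end}: identified because of '{seg['condition']}'")
```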
  • the terminal may display prompt information, and the prompt information may be used to prompt the user to perform highlight recognition on the target video.
  • the prompt information may be displayed in the form of a prompt box at a preset position of the page (such as the top of the page), and the prompt content may be, for example, "the highlight is being generated".
  • the prompt information may be displayed on the video playback page, where the video playback page may provide the highlight segment recognition results of different live broadcast sessions.
  • the video playback page can be as shown in Figure 2a. In Figure 2a, for the live playback video of today's 20:00 session, the terminal can display the prompt message "Highlight clip is being generated" in the corresponding live session while acquiring the first highlight segment.
  • the highlight segment may not be recognized, so that the first highlight segment cannot be obtained.
  • the terminal can prompt the user, for example, can display prompt information for prompting the user that the highlight segment is not recognized from the target video.
  • as shown in Figure 2a, for the live playback video of yesterday's 20:00 session, the first highlight segment may not be obtained because there was little audience interaction during that live broadcast.
  • in this case, a prompt message indicating that no highlight segment was recognized can be displayed for that live session.
  • the video processing method provided in the present disclosure may further include:
  • a video playback page is displayed.
  • the video playback page can carry tag information of the first highlight segment, and the tag information can be used to characterize the feature information of the first highlight segment.
  • the video playback page can be as shown in Figure 2b.
  • the terminal can obtain the first highlight segment, and the first highlight segment may include multiple sub-highlight segments, such as sub-highlight segment 201, sub-highlight segment 202, sub-highlight segment 203, and other sub-highlight segments. Due to the limitation of the terminal screen size, not all sub-highlight segments are displayed, and the user can slide left and right to make the terminal display different sub-highlight segments. It is worth noting that the present disclosure does not specifically limit the number of sub-highlight segments in the first highlight segment acquired by the terminal.
  • the video playback page as shown in FIG. 2b may carry the tag information of the first highlight segment.
  • the label information can be used to characterize the feature information of the first highlight segment.
  • the tag information may include one or more of the following: a cover image, a matched highlight recognition condition, time information in the target video, and release information used to indicate whether it has been released.
  • the label information of the first highlight segment may include label information corresponding to multiple sub-highlight segments in the first highlight segment, and the label information corresponding to each sub-highlight segment may be used to characterize the feature information of the sub-highlight segment.
  • the cover image may be any image frame in the sub-highlight segment, which is not specifically limited in the present disclosure.
  • the first frame image in the sub-highlight segment can be used as the cover image by default.
  • the matched highlight recognition condition may be the reason for identifying each sub-highlight segment from the target video, for example, the target video has the largest number of viewers, the largest number of gifts received, the largest number of comments, and so on.
  • the highlight recognition condition of the sub-highlight segment 201 may be the largest number of viewers, and its tag information may include "most viewers”.
  • the highlight recognition condition of the sub-highlight segment 202 may be that the number of comments is the largest, and its tag information may include "most lively”.
  • the highlight recognition condition of the sub-highlight segment 203 may be that the number of gifts received is the largest, and its tag information may include "Gifts keep coming".
  • the time information in the target video may be, for example, the minute of the target video at which the sub-highlight segment appears.
  • the time information of the sub-highlight segment 201 in the target video may be the 42nd minute.
  • the present disclosure does not specifically limit the presentation form of the time information and the duration of each sub-highlight segment, and the time information may also be, for example, the time period information of the sub-highlight segment in the target video.
  • the publication information used to indicate whether it has been published may include published and unpublished. If the user has released the sub-highlight segment, the release information may indicate that the sub-highlight segment has been released; if the user has not released the sub-highlight segment, the release information may indicate that the sub-highlight segment has not yet been released. For example, the published information of the sub-highlight segments 201 to 203 shown in FIG. 2b are all unpublished.
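  • A sketch of how the tag information carried on the video playback page might be modelled, including a mapping from the matched highlight recognition condition to the displayed label; the names and label strings follow the examples above but are otherwise hypothetical.

```python
from dataclasses import dataclass

# Displayed label for each matched highlight recognition condition (per Figure 2b).
CONDITION_LABELS = {
    "most_viewers": "Most viewers",
    "most_comments": "Most lively",
    "most_gifts": "Gifts keep coming",
}


@dataclass
class TagInfo:
    cover_image: str          # e.g. the first frame of the sub-highlight segment
    condition: str            # the matched highlight recognition condition
    minute_in_video: int      # time information in the target video, e.g. 42
    published: bool = False   # release information: published / unpublished

    @property
    def label(self) -> str:
        return CONDITION_LABELS.get(self.condition, self.condition)
```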
  • the first target highlight segment may include a plurality of sub-highlight segments in the first highlight segment.
  • the video processing method provided in the present disclosure may also include:
  • according to the user's selection operation, the multiple sub-highlight segments are merged to obtain the first target highlight segment.
  • the user can choose which sub-highlight segments in the first highlight segment to use as the first target highlight segment according to their own needs. For example, the user selects the sub-highlight segment 201 and the sub-highlight segment 202, and the terminal may combine multiple sub-highlight segments selected by the user according to the user's selection operation to obtain the first target highlight segment.
  • the present disclosure does not specifically limit the sequence of each sub-highlight segment in the first target highlight segment.
  • the sub-highlight segment 201 may be before or after the sub-highlight segment 202.
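  • Merging the user-selected sub-highlight segments into the first target highlight segment can be sketched as a simple concatenation, here with ffmpeg's concat demuxer; the order simply follows the user's selection, since the disclosure does not restrict it.

```python
import subprocess
import tempfile
from typing import List


def merge_sub_highlights(clip_paths: List[str], out_path: str) -> None:
    """Concatenate the selected sub-highlight clips, in selection order."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for path in clip_paths:
            f.write(f"file '{path}'\n")
        list_file = f.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_file, "-c", "copy", out_path],
        check=True,
    )


# e.g. merge_sub_highlights(["sub_201.mp4", "sub_202.mp4"], "first_target.mp4")
```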
  • before editing the first target highlight segment, the user may first preview the first target highlight segment. Therefore, before S102, the video processing method provided by the present disclosure may further include:
  • the preview page corresponding to the first target highlight segment is displayed to play the first target highlight segment.
  • the user can click the preview button for the first target highlight segment, or click the cover image of the first target highlight segment, to generate a preview instruction for the first target highlight segment.
  • the terminal can display the preview page corresponding to the first target highlight segment to play the first target highlight segment.
  • the first target highlight segment may also be any sub-highlight segment in the first highlight segment.
  • for example, the user can click the cover image of the sub-highlight segment 201; after receiving the preview instruction for the first target highlight segment, the terminal can display the preview page shown in Figure 2c. The preview page can be used to play the first target highlight segment, where the area 210 can be used to play the highlight segment.
  • the user can drag the progress bar at the bottom of the page to control the playback of the first target highlight segment.
  • the preview page may also include tag information of the first target highlight segment.
  • the user can click the back button to return to the previous page, that is, the video playback page shown in Figure 2b, or click the delete button to delete the first target highlight segment.
  • the preview page may also provide an edit entry for editing the first target highlight segment.
  • the edit entry may be presented in the form of an edit button, such as the edit button 204 in FIG. 2c.
  • an edit instruction for the first target highlight segment can be generated.
  • the editing operation of the highlight segment by the user may include segment cropping and/or effect processing. Therefore, the editing page corresponding to the first target highlight segment may include a segment cropping page and an effect processing page.
  • the terminal may first display the segment cropping page as shown in FIG. 2d. In this cropping page, the user can crop the first target highlight segment, for example, by dragging the slider 205 in FIG. 2d to crop.
  • the terminal may display the effect processing page as shown in FIG. 2e.
  • the buttons 206 to 209 can respectively indicate different processing effects.
  • the button 206 can be used to add music to the first target highlight segment
  • the button 207 can be used to add special effects to the first target highlight segment
  • the button 208 can be used to add text content to the first target highlight segment
  • the button 209 can be used to add stickers to the first target highlight segment.
  • the effect processing represented by the four buttons is only an exemplary explanation, and the effect processing on the highlight segment in the present disclosure is not limited to these four effects.
  • the terminal may process the first target highlight segment according to editing operations such as clip cropping and effect processing of the first target highlight segment by the user to obtain the second highlight segment edited by the user.
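  • A minimal sketch of how the terminal could apply the user's editing operations (segment cropping followed by effect processing such as adding music) to produce the second highlight segment; the EditOperation structure and the ffmpeg options used are illustrative assumptions rather than the disclosure's implementation.

```python
import subprocess
from dataclasses import dataclass
from typing import Optional


@dataclass
class EditOperation:
    crop_start: Optional[str] = None   # e.g. "00:00:05"
    crop_end: Optional[str] = None     # e.g. "00:00:40"
    music_path: Optional[str] = None   # audio track to add, if any


def apply_edit(first_target: str, op: EditOperation, out_path: str) -> None:
    """Produce the second highlight segment from the first target highlight segment."""
    cmd = ["ffmpeg", "-y", "-i", first_target]
    if op.music_path:
        # Replace the original audio with the added music track.
        cmd += ["-i", op.music_path, "-map", "0:v", "-map", "1:a", "-shortest"]
    if op.crop_start and op.crop_end:
        cmd += ["-ss", op.crop_start, "-to", op.crop_end]
    cmd.append(out_path)
    subprocess.run(cmd, check=True)


# e.g. apply_edit("first_target.mp4",
#                 EditOperation(crop_start="00:00:05", crop_end="00:00:40",
#                               music_path="music.mp3"),
#                 "second_highlight.mp4")
```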
  • the user can click the next button in Figure 2e, and the terminal can display a page to be published for publishing the second highlight segment, and the page to be published can be as shown in Figure 2f.
  • the user can add the title and topic of the second highlight segment to be published.
  • the page to be published may also provide a publishing portal for publishing the second highlight segment.
  • the publishing portal can be presented in the form of a publishing button, such as the publishing button in FIG. 2f.
  • the terminal can receive the publishing instruction for the second highlight segment input by the user, and then can publish the second highlight segment edited by the user.
  • after publishing, the sub-highlight segment 201 can be updated to the second highlight segment edited by the user, and its release information, which indicates whether it has been published, can be updated to published.
  • the user can edit and publish the sub-highlight segment 202, the sub-highlight segment 203, and other sub-highlight segments or multiple sub-highlight segments as the first target highlight segment.
  • in the case that the first target highlight segment includes multiple sub-highlight segments in the first highlight segment, the step of processing the first target highlight segment according to the user's editing operation to obtain the second highlight segment may include: processing the multiple sub-highlight segments according to the user's editing operation, and merging the processed multiple sub-highlight segments into the second highlight segment.
  • that is, the user can select which sub-highlight segments in the first highlight segment to edit according to their own needs, and the terminal can process these sub-highlight segments according to the user's editing operations and merge the processed sub-highlight segments into the second highlight segment.
  • the second highlight segment can be used as a collection of highlight segments edited by the user.
  • the present disclosure does not specifically limit the order of the processed sub-highlight segments in the merged second highlight segment.
  • Fig. 3a is a schematic diagram showing a video playback page according to another exemplary embodiment.
  • the published highlight segment 301 can correspond to the sub-highlight segment 201
  • the published highlight segment 302 can correspond to the sub-highlight segment 202
  • the published highlight segment 303 can correspond to the sub-highlight segment 203.
  • the video processing method may further include:
  • the editing page corresponding to the second target highlight segment is displayed; according to the user's editing operation, the second target highlight segment is processed to obtain a third highlight segment; and, in response to receiving a release instruction for the third highlight segment, the third highlight segment is released.
  • the second target highlight segment can be any highlight segment that has been released.
  • the published highlight segment 301 is used as the second target highlight segment.
  • the terminal can display the preview page as shown in FIG. 3b.
  • a re-editing entry may be provided, such as the re-editing button 304 in the preview page, where the re-editing refers to the re-editing of the sub-highlight segment 201.
  • the user can operate the re-editing entry, for example, clicking the re-editing button 304 to generate an editing instruction for the second target highlight segment.
  • after receiving the editing instruction, the terminal can display the editing page corresponding to the second target highlight segment.
  • the editing page can be used for the user to perform editing operations on the second target highlight segment. Afterwards, according to the editing operation of the user, the second target highlight segment is processed to obtain the third highlight segment.
  • the editing page may also include a segment cropping page and an effect processing page, which may be as shown in Figure 2d and Figure 2e, and will not be repeated here.
  • the terminal may publish the third highlight segment after receiving the release instruction for the third highlight segment input by the user.
  • Fig. 4 is a block diagram of a video processing device according to an exemplary embodiment. As shown in FIG. 4, the video processing device 400 may include:
  • the obtaining module 401 is configured to obtain the first highlight segment obtained by performing highlight recognition on the target video;
  • the first display module 402 is configured to display an editing page corresponding to the first target highlight segment in response to an editing instruction for the first target highlight segment, and the editing page is used for the user to perform an editing operation on the first target highlight segment ;
  • the first processing module 403 is configured to process the first target highlight segment according to the user's editing operation to obtain a second highlight segment;
  • the first publishing module 404 is configured to publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
  • the first highlight segment obtained by performing highlight recognition on the target video is first obtained, and the first highlight segment may be a highlight segment in the target video.
  • the terminal may provide an editing page for the user to perform an editing operation on the first target highlight segment.
  • users can edit highlight clips that meet their own needs and publish them, which fits the needs of users and enhances user experience.
  • in addition, after the highlight segment is published, the audience can directly watch the highlight segment, that is, the exciting part of the target video, which makes it easier for the audience to share the video and improves the viewing experience.
  • the device 400 may further include: a second display module, configured to display a video playback page after the acquisition module 401 acquires the first highlight segment obtained by performing highlight recognition on the target video, where the video playback page carries tag information of the first highlight segment, and the tag information is used to characterize the feature information of the first highlight segment.
  • the first target highlight segment includes multiple sub-highlight segments in the first highlight segment; the device 400 may further include: a merging module, configured to, before the first display module 402 displays the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, merge the multiple sub-highlight segments according to the user's selection operation to obtain the first target highlight segment.
  • the first target highlight segment includes a plurality of sub-highlight segments in the first highlight segment; the first processing module 403 may be configured to process the multiple sub-highlight segments according to the user's editing operation, and to merge the processed multiple sub-highlight segments into the second highlight segment.
  • the device 400 may further include: a third display module, configured to display an editing page corresponding to a second target highlight segment in response to an editing instruction for the second target highlight segment, where the second target highlight segment is any highlight segment that has been published and the editing page is used for the user to perform editing operations on the second target highlight segment; a second processing module, configured to process the second target highlight segment according to the user's editing operation to obtain a third highlight segment; and a second publishing module, configured to publish the third highlight segment in response to receiving a publishing instruction for the third highlight segment.
  • the acquisition module 401 is configured to: acquire the target video and perform highlight recognition on the target video to obtain the first highlight segment; or send the target video to a server so that the server performs highlight recognition on the target video, and obtain the first highlight segment according to the highlight recognition result of the server.
  • the device 400 may further include: a prompt module, configured to display prompt information while the acquisition module 401 is obtaining the first highlight segment, where the prompt information is used to prompt the user that highlight recognition is being performed on the target video.
  • the device 400 may further include: a fourth display module, configured to display, in response to a preview instruction for the first target highlight segment and before the first display module 402 displays the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, the preview page corresponding to the first target highlight segment so as to play the first target highlight segment.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (e.g. Mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG. 5 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 500 may include a processing device (such as a central processing unit, a graphics processor, etc.) 501, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • in the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the following devices can be connected to the I/O interface 505: input devices 506 including, for example, touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, gyroscopes, etc.; output devices 507 including, for example, liquid crystal displays (LCD), speakers, vibrators, etc.; storage devices 508 including, for example, magnetic tapes, hard disks, etc.; and a communication device 509.
  • the communication device 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 5 shows an electronic device 500 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart can be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the steps in the method of the embodiments of the present disclosure are executed to realize the above-mentioned functions of the embodiments of the present disclosure.
  • the present disclosure also provides a computer program that is stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device executes the solution provided in any of the foregoing embodiments.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (for example, a communication network) in any form or medium.
  • examples of communication networks include local area networks (LAN), wide area networks (WAN), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any network currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, the electronic device is caused to: obtain a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for a first target highlight segment, display an editing page corresponding to the first target highlight segment, where the editing page is used by the user to perform editing operations on the first target highlight segment; process the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and, in response to receiving a release instruction for the second highlight segment, release the second highlight segment.
  • the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • the above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • it should also be noted that, in some alternative implementations, the functions marked in the blocks can occur in an order different from the order marked in the drawings; for example, two blocks shown one after another can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of the blocks in the block diagram and/or flowchart can be implemented by a dedicated hardware-based system that performs the specified functions or operations Or it can be realized by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments described in the present disclosure can be implemented in software or hardware. Wherein, the name of the module does not constitute a limitation on the module itself under certain circumstances.
  • the acquisition module can also be described as the "first highlight segment acquisition module”.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
  • more specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • Example 1 provides a video processing method.
  • the method includes: acquiring a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for a first target highlight segment, displaying an editing page corresponding to the first target highlight segment, where the editing page is used by the user to perform an editing operation on the first target highlight segment; processing the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and, in response to receiving a release instruction for the second highlight segment, releasing the second highlight segment.
  • Example 2 provides the method of Example 1. After the step of obtaining the first highlight segment obtained by performing highlight recognition on the target video, the method further includes: displaying a video playback page, where the video playback page carries tag information of the first highlight segment, and the tag information is used to characterize the feature information of the first highlight segment.
  • Example 3 provides the method of Example 2, and the tag information includes one or more of the following: a cover image, a matched highlight recognition condition, time information in the target video, and release information used to indicate whether the segment has been released.
  • Example 4 provides the method of Example 1.
  • the first target highlight segment includes a plurality of sub-highlight segments in the first highlight segment; before the editing page corresponding to the first target highlight segment is displayed in response to the editing instruction for the first target highlight segment, the method further includes: merging the multiple sub-highlight segments according to the user's selection operation to obtain the first target highlight segment.
  • Example 5 provides the method of Example 1.
  • the first target highlight segment includes a plurality of sub-highlight segments in the first highlight segment; processing the first target highlight segment according to the user's editing operation to obtain a second highlight segment includes: processing the multiple sub-highlight segments according to the user's editing operation, and merging the processed multiple sub-highlight segments into the second highlight segment.
  • Example 6 provides the method of Example 1, and the method further includes: in response to an editing instruction for a second target highlight segment, displaying an editing page corresponding to the second target highlight segment, where the second target highlight segment is any highlight segment that has been published and the editing page is used for the user to perform editing operations on the second target highlight segment; processing the second target highlight segment according to the user's editing operation to obtain a third highlight segment; and, in response to receiving a release instruction for the third highlight segment, releasing the third highlight segment.
  • Example 7 provides the method of Example 1, wherein the acquiring a first highlight segment obtained by performing highlight recognition on a target video includes: acquiring the target video and performing highlight recognition on the target video to obtain the first highlight segment; or sending the target video to a server so that the server performs highlight recognition on the target video, and acquiring the first highlight segment according to the highlight recognition result of the server.
  • Example 8 provides the method of Example 1, and the method further includes: in the process of obtaining the first highlight segment, displaying prompt information, where the prompt information is used to prompt the user that highlight recognition is being performed on the target video.
  • Example 9 provides the method of Example 1. Before displaying the edit page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, The method further includes: in response to a preview instruction for the first target highlight segment, displaying a preview page corresponding to the first target highlight segment to play the first target highlight segment.
  • Example 10 provides the method of any one of Examples 1 to 9, and the target video is a live playback video.
  • Example 11 provides a video processing device, the device including: an acquisition module, configured to acquire a first highlight segment obtained by performing highlight recognition on a target video; a first display module, configured to display an editing page corresponding to a first target highlight segment in response to an editing instruction for the first target highlight segment, where the editing page is used for the user to perform editing operations on the first target highlight segment; a first processing module, configured to process the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and a first publishing module, configured to publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
  • Example 12 provides a computer-readable medium having a computer program stored thereon, and when the program is executed by a processing device, the steps of the methods described in Examples 1 to 10 are implemented.
  • Example 13 provides an electronic device, including: a storage device on which a computer program is stored; and a processing device for executing the computer program in the storage device to Implement the steps of the methods described in Example 1 to Example 10.
  • Example 14 provides a computer program product, including computer program instructions that cause a computer to execute the steps of the method described in any one of Examples 1 to 10.
  • Example 15 provides a computer program that causes a computer to execute the steps of the method described in any one of Examples 1 to 10.

Abstract

The present disclosure relates to a video processing method and apparatus, a readable medium, and an electronic device. The method includes: acquiring a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for a first target highlight segment, displaying an editing page corresponding to the first target highlight segment, where the editing page is used by the user to perform an editing operation on the first target highlight segment; processing the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and, in response to receiving a release instruction for the second highlight segment, releasing the second highlight segment. In this way, the user can edit highlight segments that meet their own needs and publish them, which fits the user's needs and improves the user experience. In addition, after a highlight segment is published, the audience can directly watch the highlight segment, that is, the exciting part of the target video, which makes it easier for the audience to share the video and improves the viewing experience.

Description

Video processing method and apparatus, readable medium and electronic device
Technical Field
The present disclosure relates to the field of video technology, and in particular, to a video processing method and apparatus, a readable medium, and an electronic device.
Background Art
With the development of Internet technology, the number of users of video platforms such as short-video platforms and live-streaming platforms keeps increasing. Thanks to the richness of video content and the good audio-visual experience, watching videos has become part of people's daily entertainment.
However, if a video contains a lot of content and is long, users may not be interested in watching the rather lengthy video in full. This is not conducive to sharing the video or to interaction between users, so the user experience is poor.
Summary of the Invention
This Summary is provided to introduce concepts in a brief form that are described in detail in the Detailed Description below. This Summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to be used to limit the scope of the claimed technical solution.
In a first aspect, the present disclosure provides a video processing method. The method includes: acquiring a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for a first target highlight segment, displaying an editing page corresponding to the first target highlight segment, where the editing page is used by the user to perform an editing operation on the first target highlight segment; processing the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and, in response to receiving a release instruction for the second highlight segment, releasing the second highlight segment.
In a second aspect, the present disclosure provides a video processing apparatus. The apparatus includes: an acquisition module, configured to acquire a first highlight segment obtained by performing highlight recognition on a target video; a first display module, configured to display, in response to an editing instruction for a first target highlight segment, an editing page corresponding to the first target highlight segment, where the editing page is used by the user to perform an editing operation on the first target highlight segment; a first processing module, configured to process the first target highlight segment according to the user's editing operation to obtain a second highlight segment; and a first publishing module, configured to publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
In a third aspect, the present disclosure provides a computer-readable medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the method provided in the first aspect of the present disclosure are implemented.
In a fourth aspect, the present disclosure provides an electronic device, including: a storage device on which a computer program is stored; and a processing device, configured to execute the computer program in the storage device to implement the steps of the method provided in the first aspect of the present disclosure.
In a fifth aspect, the present disclosure provides a computer program product, including a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device executes the steps of the method provided in the first aspect of the present disclosure.
In a sixth aspect, the present disclosure provides a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program, so that the electronic device executes the steps of the method provided in the first aspect of the present disclosure.
Through the above technical solution, a first highlight segment obtained by performing highlight recognition on a target video is first acquired, and the first highlight segment may be an exciting part of the target video. In addition, the terminal can provide an editing page for the user to perform editing operations on the first target highlight segment. In this way, the user can edit highlight segments that meet their own needs and publish them, which fits the user's needs and improves the user experience. Moreover, after the highlight segment is published, the audience can directly watch the highlight segment, that is, the exciting part of the target video, which makes it easier for the audience to share the video and improves the viewing experience.
Other features and advantages of the present disclosure will be described in detail in the following Detailed Description.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that parts and elements are not necessarily drawn to scale. In the drawings:
Fig. 1 is a flowchart of a video processing method according to an exemplary embodiment.
Fig. 2a is a schematic diagram of a video playback page according to an exemplary embodiment.
Fig. 2b is a schematic diagram of a video playback page according to another exemplary embodiment.
Fig. 2c is a schematic diagram of a preview page according to an exemplary embodiment.
Fig. 2d is a schematic diagram of a segment cropping page according to an exemplary embodiment.
Fig. 2e is a schematic diagram of an effect processing page according to an exemplary embodiment.
Fig. 2f is a schematic diagram of a page to be published according to an exemplary embodiment.
Fig. 3a is a schematic diagram of a video playback page according to another exemplary embodiment.
Fig. 3b is a schematic diagram of a preview page according to another exemplary embodiment.
Fig. 4 is a block diagram of a video processing apparatus according to an exemplary embodiment.
Fig. 5 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as being limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are only for exemplary purposes and are not intended to limit the protection scope of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the steps shown. The scope of the present disclosure is not limited in this respect.
The term "including" and its variants as used herein are open-ended, that is, "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the following description.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules, or units, and are not used to limit the order or interdependence of the functions performed by these devices, modules, or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless clearly indicated otherwise in the context, they should be understood as "one or more".
The names of messages or information exchanged between multiple devices in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
Fig. 1 is a flowchart of a video processing method according to an exemplary embodiment. The method can be applied to a terminal, such as a smartphone, a tablet computer, a personal computer (PC), a laptop, or another terminal device. As shown in Fig. 1, the method may include S101 to S104.
In S101, a first highlight segment obtained by performing highlight recognition on a target video is acquired.
The target video may be a video shot by the user. For example, the target video may be a short video shot by the user through the terminal, or a live-streaming replay video recorded while the user streams on a live-streaming platform. It should be noted that the following description takes a live-streaming replay video as an example of the target video, which does not constitute a limitation on the embodiments of the present disclosure.
During live streaming, viewers can interact by commenting, sending gifts, and so on, so the target video may contain exciting moments with a lively atmosphere. Performing highlight recognition on the target video refers to recognizing the exciting moments in the target video, and the first highlight segment obtained by highlight recognition may be an exciting part of the target video. Highlight recognition on the target video may identify one or more exciting parts; accordingly, the first highlight segment acquired by the terminal may include one or more sub-highlight segments, each of which is an independent video segment.
In S102, in response to an editing instruction for a first target highlight segment, an editing page corresponding to the first target highlight segment is displayed.
The first target highlight segment may be any sub-highlight segment of the first highlight segment, or may include multiple sub-highlight segments selected from the sub-highlight segments of the first highlight segment; the present disclosure does not limit this. The user can select the first target highlight segment for editing according to their own needs. The terminal may provide an entry for editing the first target highlight segment; for example, the entry may be presented in the form of an edit button. When the user operates the entry, for example, taps the edit button, an editing instruction for the first target highlight segment is generated. After receiving the editing instruction input by the user, the terminal can display the editing page corresponding to the first target highlight segment, and the editing page can be used by the user to perform editing operations on the first target highlight segment.
In S103, the first target highlight segment is processed according to the user's editing operations to obtain a second highlight segment.
For example, the user's editing operations may include cropping the first target highlight segment and/or applying effect processing, where effect processing includes, for example, adding special effects, adding stickers, adding music, and so on. The terminal processes the first target highlight segment according to the user's editing operations to obtain the second highlight segment edited by the user.
In S104, the second highlight segment is published in response to receiving a publishing instruction for the second highlight segment.
The terminal may provide an entry for publishing the second highlight segment; for example, the entry may be presented as a publish button. When the user operates the entry, for example, taps the publish button, the terminal receives the publishing instruction for the second highlight segment input by the user, and can then publish the second highlight segment, for example, to the live-streaming platform, a friends feed, or another video platform.
With the above technical solutions, a first highlight segment obtained by performing highlight recognition on a target video is first acquired; the first highlight segment may be an exciting part of the target video. The terminal can provide an editing page for the user to perform editing operations on the first target highlight segment. In this way, the user can edit and publish highlight segments that meet their own needs, which fits user requirements and improves the user experience. In addition, after a highlight segment is published, viewers can watch it directly, i.e., the exciting part of the target video, which makes it easier for viewers to share the video and improves the viewing experience.
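By way of illustration only, and not as part of the disclosed solution, the following Python sketch walks through S101 to S104 at the metadata level; all names in it (the dictionary keys and the s101_acquire, s103_process, and s104_publish helpers) are hypothetical, and the editing page of S102 is represented only by the operations it collects.

    # Metadata-level walkthrough of S101-S104; all names are illustrative only.
    # A real terminal would drive these steps from its UI pages (Figs. 2a-2f).

    def s101_acquire(recognition_result):
        # S101: the first highlight segment is a list of sub-segments, each an
        # independent clip described by its time period and recognition condition.
        return [{"start": s, "end": e, "condition": c, "published": False}
                for s, e, c in recognition_result]

    def s103_process(segment, crop=None, effects=()):
        # S103: apply the user's cropping and effect-processing operations
        # (collected on the editing page shown in S102) to obtain a new segment.
        edited = dict(segment)
        if crop:
            edited["start"], edited["end"] = crop
        edited["effects"] = list(effects)
        return edited

    def s104_publish(segment):
        # S104: mark the segment as published in response to a publish instruction.
        segment["published"] = True
        return segment

    first = s101_acquire([("42:01", "42:57", "most viewers")])
    second = s104_publish(s103_process(first[0], crop=("42:05", "42:50"), effects=["music"]))
    print(second)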
In the present disclosure, highlight recognition on the target video may be performed locally on the terminal or on a server. Based on this, the terminal can acquire the first highlight segment obtained by performing highlight recognition on the target video through the following two implementations.
In the implementation where recognition is performed locally on the terminal, the terminal may first acquire the target video, for example, the live-streaming replay video recorded during the user's live stream. After acquiring the target video, the terminal itself can perform highlight recognition on the target video to obtain the first highlight segment.
In the implementation where recognition is performed by a server, after acquiring the target video, the terminal may send the target video to the server so that the server performs highlight recognition on the target video. The server may then return a highlight recognition result for the target video, and the terminal can acquire the first highlight segment according to the server's highlight recognition result.
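As an illustrative sketch of these two implementations (not part of the disclosure), the acquisition step on the terminal could be dispatched as follows; the server endpoint URL, the response format, and the use of the third-party requests HTTP client are assumptions made only for the example.

    import requests  # third-party HTTP client, used here for the server variant

    RECOGNIZE_ENDPOINT = "https://example.com/api/highlight/recognize"  # hypothetical

    def acquire_first_highlight(video_path, local_recognizer=None):
        """Return a list of (start_seconds, end_seconds, condition) tuples.

        If a local recognizer callable is supplied, recognition runs on the
        terminal itself; otherwise the target video is uploaded so the server
        performs highlight recognition and returns the result.
        """
        if local_recognizer is not None:
            # Implementation 1: recognition performed locally on the terminal.
            return local_recognizer(video_path)
        # Implementation 2: recognition performed by the server.
        with open(video_path, "rb") as fh:
            resp = requests.post(RECOGNIZE_ENDPOINT, files={"video": fh}, timeout=600)
        resp.raise_for_status()
        # Assumed response shape:
        # [{"start": 2521, "end": 2577, "condition": "most viewers"}, ...]
        return [(item["start"], item["end"], item["condition"]) for item in resp.json()]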
The terminal or the server can perform highlight recognition on the target video in various ways. In one embodiment, the terminal or the server may perform highlight recognition on the target video according to multiple preset highlight recognition conditions to obtain a highlight recognition result, where the preset highlight recognition conditions may be, for example, the largest number of viewers, the largest number of comments, the largest number of gifts received, and so on. In another embodiment, the terminal or the server may input the target video into a pre-trained highlight recognition model, and the highlight recognition model can output the highlight recognition result.
For example, the highlight recognition result may include time-period information of each sub-highlight segment of the first highlight segment in the target video. For instance, if the target video is 60 minutes long, the time periods of the sub-highlight segments in the target video may be 42:01-42:57, 45:03-45:56, 50:04-50:57, and so on. The terminal can extract each sub-highlight segment from the target video according to the time-period information. Optionally, the highlight recognition result may also include the highlight recognition condition met by each sub-highlight segment. If the terminal or server performs recognition according to the multiple preset highlight recognition conditions, the condition met by each sub-highlight segment may be the preset condition under which it was recognized. If the terminal or server performs recognition through the highlight recognition model, the model may output, in addition to the time-period information of each sub-highlight segment in the target video, the highlight recognition condition met by that sub-highlight segment.
For example, if the sub-highlight segment corresponding to the time period 42:01-42:57 in the target video is recognized because the number of viewers during the live stream was the largest, the highlight recognition condition met by that sub-highlight segment may be the largest number of viewers. If the sub-highlight segment corresponding to 45:03-45:56 is recognized because the number of comments was the largest, its condition may be the largest number of comments. If the sub-highlight segment corresponding to 50:04-50:57 is recognized because the number of gifts received was the largest, its condition may be the largest number of gifts.
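A minimal sketch of the condition-based variant is shown below, assuming per-minute viewer, comment, and gift counts are available for the live session; in the model-based variant, a pre-trained highlight recognition model would replace this logic. The function and field names are illustrative only.

    def recognize_by_conditions(stats, clip_seconds=56):
        """Pick one sub-highlight time range per preset condition.

        stats: one dict per minute of the live stream, e.g.
            {"minute": 42, "viewers": 1800, "comments": 240, "gifts": 35}
        Returns (start_seconds, end_seconds, condition) tuples, one for the
        minute with the most viewers, the most comments, and the most gifts.
        """
        conditions = {
            "most viewers": "viewers",
            "most comments": "comments",
            "most gifts": "gifts",
        }
        result = []
        for label, key in conditions.items():
            best = max(stats, key=lambda row: row[key])
            start = best["minute"] * 60 + 1      # e.g. minute 42 -> roughly 42:01
            result.append((start, start + clip_seconds, label))
        return result

    # Example: if minute 42 has the most viewers, one sub-highlight segment
    # covers roughly 42:01-42:57, matching the time-period example above.
    demo = [{"minute": m, "viewers": 100 + m, "comments": 5, "gifts": 0} for m in range(60)]
    print(recognize_by_conditions(demo))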
In the present disclosure, while acquiring the first highlight segment, the terminal may display prompt information used to prompt the user that highlight recognition is being performed on the target video. In one implementation, the prompt information may be displayed in the form of a prompt box at a preset position on the page (for example, at the top of the page), and the prompt content may be, for example, "Highlights are being generated". In another implementation, the prompt information may be displayed on a video playback page, where the video playback page can provide highlight recognition results for different live sessions. The video playback page may be as shown in Fig. 2a: for the replay video of today's 20:00 live session, while the terminal is acquiring the first highlight segment, the prompt "Highlights are being generated" may be displayed for the corresponding live session.
In addition, for a live replay video with little content, a short duration, or low viewer activity, no exciting part may be recognized, so the first highlight segment cannot be acquired. In this case, the terminal can prompt the user, for example, by displaying prompt information indicating that no highlight segment has been recognized from the target video. As shown in Fig. 2a, for the replay video of yesterday's 20:00 live session, the first highlight segment may not have been acquired, possibly because there was little viewer interaction during that live stream, and prompt information indicating that no highlight segment was recognized may be displayed for that live session.
After S101 above, that is, after the terminal acquires the first highlight segment obtained by performing highlight recognition on the target video, the video processing method provided in the present disclosure may further include:
displaying a video playback page, where the video playback page carries tag information of the first highlight segment, and the tag information is used to characterize feature information of the first highlight segment.
The video playback page may be as shown in Fig. 2b. For the replay video of today's 20:00 live session, the terminal can acquire the first highlight segment, which may include multiple sub-highlight segments, for example sub-highlight segment 201, sub-highlight segment 202, sub-highlight segment 203, and other sub-highlight segments. Due to the limitation of the terminal screen size, not all sub-highlight segments are shown; the user can swipe left or right to have the terminal display different sub-highlight segments. It should be noted that the present disclosure does not specifically limit the number of sub-highlight segments in the first highlight segment acquired by the terminal.
The video playback page shown in Fig. 2b may carry the tag information of the first highlight segment, which is used to characterize the feature information of the first highlight segment. For example, the tag information may include one or more of the following: a cover image, the highlight recognition condition met, time information in the target video, and publication information indicating whether the segment has been published. The tag information of the first highlight segment may include the tag information corresponding to each of the multiple sub-highlight segments of the first highlight segment, and the tag information corresponding to each sub-highlight segment may be used to characterize the feature information of that sub-highlight segment.
The cover image may be any image frame of the sub-highlight segment, which is not specifically limited in the present disclosure. For example, the first frame of the sub-highlight segment may be used as the cover image by default.
The highlight recognition condition met may be the reason why each sub-highlight segment was recognized from the target video, for example, the largest number of viewers, the largest number of gifts received, or the largest number of comments in the target video. For example, as shown in Fig. 2b, the highlight recognition condition of sub-highlight segment 201 may be the largest number of viewers, and its tag information may include "Most viewers". The condition of sub-highlight segment 202 may be the largest number of comments, and its tag information may include "Liveliest". The condition of sub-highlight segment 203 may be the largest number of gifts received, and its tag information may include "Non-stop gifts".
The time information in the target video may be, for example, the minute of the target video at which the sub-highlight segment appears. For example, the time information of sub-highlight segment 201 in the target video may be the 42nd minute. Of course, the present disclosure does not specifically limit the form of the time information or the duration of each sub-highlight segment; the time information may also be, for example, the time-period information of the sub-highlight segment in the target video.
The publication information indicating whether the segment has been published may include published and unpublished. If the user has published a sub-highlight segment, the publication information may indicate that the sub-highlight segment has been published; if the user has not yet published it, the publication information may indicate that it has not been published. For example, the publication information of sub-highlight segments 201 to 203 shown in Fig. 2b is all unpublished.
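For illustration only, the tag information enumerated above maps onto a small per-sub-segment record such as the sketch below; the class name, fields, and label texts are assumptions drawn from the figure description, not a prescribed format.

    from dataclasses import dataclass

    @dataclass
    class SegmentTag:
        cover_image: str        # path of the frame used as the cover (first frame by default)
        condition: str          # highlight recognition condition that was met
        minute_in_video: int    # time information in the target video
        published: bool         # publication information

        def label(self):
            # Text rendered on the video playback page for this sub-segment.
            status = "Published" if self.published else "Unpublished"
            return f"{self.condition} | minute {self.minute_in_video} | {status}"

    tag_201 = SegmentTag("frames/201_first.jpg", "Most viewers", 42, False)
    print(tag_201.label())   # Most viewers | minute 42 | Unpublished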
In the present disclosure, the first target highlight segment may include multiple sub-highlight segments of the first highlight segment. Before displaying, in S102, the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, the video processing method provided in the present disclosure may further include:
merging the multiple sub-highlight segments according to the user's selection operation to obtain the first target highlight segment.
The user can choose, according to their own needs, which sub-highlight segments of the first highlight segment are to be used as the first target highlight segment. For example, if the user selects sub-highlight segment 201 and sub-highlight segment 202, the terminal can merge the multiple sub-highlight segments selected by the user according to the user's selection operation to obtain the first target highlight segment. When merging, the present disclosure does not specifically limit the order of the sub-highlight segments in the first target highlight segment; for example, in the merged first target highlight segment, sub-highlight segment 201 may come before or after sub-highlight segment 202.
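As an illustrative sketch of this step (not part of the disclosure), the selected sub-highlight segments can first be cut out of the live replay by their time-period information and then concatenated in the user-chosen order; the example assumes the ffmpeg command-line tool is installed, and stream copying cuts on keyframes, so the boundaries are approximate unless the clips are re-encoded.

    import subprocess

    def cut_sub_segments(replay_path, selected, out_pattern="part_{}.mp4"):
        """Cut each user-selected sub-highlight segment out of the live replay.

        selected: list of (start, end) positions in the target video, in the
        order chosen by the user, e.g.
        [("00:42:01", "00:42:57"), ("00:45:03", "00:45:56")].
        """
        parts = []
        for i, (start, end) in enumerate(selected):
            out = out_pattern.format(i)
            subprocess.run(
                ["ffmpeg", "-y", "-i", replay_path, "-ss", start, "-to", end,
                 "-c", "copy", out],
                check=True)
            parts.append(out)
        return parts  # concatenated afterwards into the first target highlight segment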
In the present disclosure, before editing the first target highlight segment, the user may first preview it. Therefore, before S102, the video processing method provided in the present disclosure may further include:
in response to a preview instruction for the first target highlight segment, displaying a preview page corresponding to the first target highlight segment to play the first target highlight segment.
For example, the user may tap a preview button for the first target highlight segment, or tap the cover image of the first target highlight segment, to generate a preview instruction for the first target highlight segment. After receiving the preview instruction, the terminal can display the preview page corresponding to the first target highlight segment to play the first target highlight segment.
The first target highlight segment may also be any sub-highlight segment of the first highlight segment. For example, taking sub-highlight segment 201 in Fig. 2b as the first target highlight segment, the user can tap the cover image of sub-highlight segment 201, and after receiving the preview instruction for the first target highlight segment, the terminal can display the preview page shown in Fig. 2c. The preview page can be used to play the first target highlight segment, where area 210 can be used to play the highlight segment. The user can drag the progress bar at the bottom of the page to control the playback of the first target highlight segment. The preview page may also include the tag information of the first target highlight segment. In addition, the user can tap a back button to return to the previous page, i.e., the video playback page shown in Fig. 2b, or tap a delete button to delete the first target highlight segment.
In addition, the preview page may provide an editing entry for editing the first target highlight segment. For example, the editing entry may be presented in the form of an edit button, such as edit button 204 in Fig. 2c. When the user operates the editing entry, for example, taps edit button 204, an editing instruction for the first target highlight segment is generated. The user's editing operations on the highlight segment may include segment cropping and/or effect processing; therefore, the editing page corresponding to the first target highlight segment may include a segment cropping page and an effect processing page. For example, after receiving the editing instruction, the terminal may first display the segment cropping page shown in Fig. 2d, on which the user can crop the first target highlight segment, for example by dragging slider 205 in Fig. 2d.
After the cropping is completed, the user can tap the next-step button in Fig. 2d to enter another editing page; for example, the terminal can display the effect processing page shown in Fig. 2e. On the effect processing page, buttons 206 to 209 can represent different processing effects. For example, button 206 can be used to add music to the first target highlight segment, button 207 to add special effects, button 208 to add text content, and button 209 to add stickers. It should be noted that the effect processing represented by these four buttons is only an exemplary explanation, and the effect processing of highlight segments in the present disclosure is not limited to these four effects.
The terminal can process the first target highlight segment according to the user's editing operations, such as segment cropping and effect processing, to obtain the second highlight segment edited by the user.
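A minimal sketch of such processing is given below, again assuming the ffmpeg command-line tool (built with drawtext support) and using only a trim, a music track, and a burned-in text overlay as stand-ins for the broader cropping and effect processing described above; the function signature and file names are illustrative only.

    import subprocess

    def apply_edits(src, dst, start, end, music=None, caption=None):
        """Crop [start, end] out of src and apply simple effect processing."""
        cmd = ["ffmpeg", "-y", "-i", src]
        if music:
            cmd += ["-i", music]
        cmd += ["-ss", str(start), "-to", str(end)]
        if caption:
            # Burn a simple text overlay into the cropped segment.
            cmd += ["-vf",
                    f"drawtext=text='{caption}':x=20:y=20:fontsize=48:fontcolor=white"]
        if music:
            # Replace the original audio track with the added music.
            cmd += ["-map", "0:v:0", "-map", "1:a:0", "-shortest"]
        cmd += [dst]
        subprocess.run(cmd, check=True)

    # apply_edits("part_0.mp4", "second_highlight.mp4", 4, 49,
    #             music="bgm.mp3", caption="Most viewers")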
After the editing is completed, the user can tap the next-step button in Fig. 2e, and the terminal can display a to-be-published page for publishing the second highlight segment, which may be as shown in Fig. 2f. On the to-be-published page, the user can add content such as a title and topics for the second highlight segment to be published. The to-be-published page may also provide a publishing entry for publishing the second highlight segment; for example, the publishing entry may be presented in the form of a publish button, such as the publish button in Fig. 2f. When the user operates the publishing entry, for example, taps the publish button, the terminal receives the publishing instruction for the second highlight segment input by the user, and can then publish the second highlight segment edited by the user.
After the second highlight segment is published, when the user returns to the video playback page, sub-highlight segment 201 is updated to the second highlight segment edited by the user, and the publication information indicating whether it has been published is updated to published. Afterwards, the user can take sub-highlight segment 202, sub-highlight segment 203, other sub-highlight segments, or multiple sub-highlight segments as the first target highlight segment for editing and publishing.
In the present disclosure, when the first target highlight segment includes multiple sub-highlight segments of the first highlight segment, the step in S103 of processing the first target highlight segment according to the user's editing operations to obtain the second highlight segment may include:
processing the multiple sub-highlight segments according to the user's editing operations, and merging the processed sub-highlight segments into the second highlight segment.
The user can choose, according to their own needs, which sub-highlight segments of the first highlight segment are to be edited. The terminal can process these sub-highlight segments according to the user's editing operations and merge the processed sub-highlight segments into the second highlight segment, which can serve as a compilation of the highlight segments edited by the user. When merging, the present disclosure does not specifically limit the order of the processed sub-highlight segments in the merged second highlight segment.
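For illustration, the processed sub-highlight segments can be merged into one compilation with ffmpeg's concat demuxer, as in the sketch below; it assumes ffmpeg is installed and that the parts share the same codec and parameters so they can be stream-copied.

    import os
    import subprocess
    import tempfile

    def merge_into_compilation(processed_parts, out_path="second_highlight.mp4"):
        """Merge the processed sub-highlight segments into one compilation."""
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            for part in processed_parts:
                f.write(f"file '{os.path.abspath(part)}'\n")
            list_file = f.name
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", list_file,
             "-c", "copy", out_path],
            check=True)
        os.remove(list_file)
        return out_path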
Fig. 3a is a schematic diagram of a video playback page according to another exemplary embodiment. As shown in Fig. 3a, published highlight segment 301 may correspond to sub-highlight segment 201, published highlight segment 302 may correspond to sub-highlight segment 202, and published highlight segment 303 may correspond to sub-highlight segment 203.
In the present disclosure, a highlight segment that has already been published can still be edited again and republished. The video processing method may further include:
in response to an editing instruction for a second target highlight segment, displaying an editing page corresponding to the second target highlight segment; processing the second target highlight segment according to the user's editing operations to obtain a third highlight segment; and publishing the third highlight segment in response to receiving a publishing instruction for the third highlight segment.
The second target highlight segment may be any published highlight segment. For example, take published highlight segment 301 as the second target highlight segment. When the user taps the cover image of highlight segment 301, a preview instruction for the second target highlight segment is generated. After receiving the preview instruction, the terminal can display the preview page shown in Fig. 3b, which can provide a re-edit entry, for example re-edit button 304 on the preview page, where re-editing refers to editing sub-highlight segment 201 again. The user can operate the re-edit entry, for example tap re-edit button 304, to generate an editing instruction for the second target highlight segment. After receiving the editing instruction, the terminal can display the editing page corresponding to the second target highlight segment, which can be used by the user to perform editing operations on the second target highlight segment. The second target highlight segment is then processed according to the user's editing operations to obtain a third highlight segment. The editing page may also include a segment cropping page and an effect processing page, as shown in Figs. 2d and 2e, which will not be repeated here. Afterwards, after receiving the publishing instruction for the third highlight segment input by the user, the terminal can publish the third highlight segment.
With the above technical solutions, the user can still edit and publish a highlight segment that has already been published, which can further meet user needs and improve the user experience.
It should be noted that the presentation forms of the pages in the drawings provided in the present disclosure are only exemplary explanations and do not constitute a limitation on the present disclosure; in practical applications, the presentation forms of the pages are not limited to these.
Based on the same inventive concept, the present disclosure further provides a video processing apparatus. Fig. 4 is a block diagram of a video processing apparatus according to an exemplary embodiment. As shown in Fig. 4, the video processing apparatus 400 may include:
an acquisition module 401 configured to acquire a first highlight segment obtained by performing highlight recognition on a target video;
a first display module 402 configured to display, in response to an editing instruction for a first target highlight segment, an editing page corresponding to the first target highlight segment, where the editing page is used by a user to perform editing operations on the first target highlight segment;
a first processing module 403 configured to process the first target highlight segment according to the user's editing operations to obtain a second highlight segment; and
a first publishing module 404 configured to publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
With the above technical solutions, a first highlight segment obtained by performing highlight recognition on a target video is first acquired; the first highlight segment may be an exciting part of the target video. The terminal can provide an editing page for the user to perform editing operations on the first target highlight segment. In this way, the user can edit and publish highlight segments that meet their own needs, which fits user requirements and improves the user experience. In addition, after a highlight segment is published, viewers can watch it directly, i.e., the exciting part of the target video, which makes it easier for viewers to share the video and improves the viewing experience.
Optionally, the apparatus 400 may further include a second display module configured to display, after the acquisition module 401 acquires the first highlight segment obtained by performing highlight recognition on the target video, a video playback page, where the video playback page carries tag information of the first highlight segment, and the tag information is used to characterize feature information of the first highlight segment.
Optionally, the first target highlight segment includes multiple sub-highlight segments of the first highlight segment, and the apparatus 400 may further include a merging module configured to merge, before the first display module 402 displays the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, the multiple sub-highlight segments according to the user's selection operation to obtain the first target highlight segment.
Optionally, the first target highlight segment includes multiple sub-highlight segments of the first highlight segment, and the first processing module 403 may be configured to process the multiple sub-highlight segments according to the user's editing operations and merge the processed sub-highlight segments into the second highlight segment.
Optionally, the apparatus 400 may further include: a third display module configured to display, in response to an editing instruction for a second target highlight segment, an editing page corresponding to the second target highlight segment, where the second target highlight segment is any published highlight segment and the editing page is used by the user to perform editing operations on the second target highlight segment; a second processing module configured to process the second target highlight segment according to the user's editing operations to obtain a third highlight segment; and a second publishing module configured to publish the third highlight segment in response to receiving a publishing instruction for the third highlight segment.
Optionally, the acquisition module 401 is configured to acquire the target video and perform highlight recognition on the target video to obtain the first highlight segment; or to send the target video to a server so that the server performs highlight recognition on the target video, and acquire the first highlight segment according to the server's highlight recognition result.
Optionally, the apparatus 400 may further include a prompt module configured to display, while the acquisition module 401 acquires the first highlight segment, prompt information used to prompt the user that highlight recognition is being performed on the target video.
Optionally, the apparatus 400 may further include a fourth display module configured to display, before the first display module 402 displays the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment and in response to a preview instruction for the first target highlight segment, a preview page corresponding to the first target highlight segment to play the first target highlight segment.
Referring now to Fig. 5, it shows a schematic structural diagram of an electronic device 500 suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 5, the electronic device 500 may include a processing device (for example, a central processing unit, a graphics processor, etc.) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices can be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 508 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 509. The communication device 509 can allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 5 shows the electronic device 500 with various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the steps in the methods of the embodiments of the present disclosure are performed to realize the above functions of the embodiments described in the present disclosure.
According to some embodiments of the present disclosure, the present disclosure further provides a computer program stored in a readable storage medium, where at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the electronic device implements the solution provided in any of the above embodiments.
It should be noted that the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable signal medium can send, propagate, or transmit a program used by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), and the like, or any suitable combination of the above.
In some implementations, the client may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquire a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for a first target highlight segment, display an editing page corresponding to the first target highlight segment, where the editing page is used by a user to perform editing operations on the first target highlight segment; process the first target highlight segment according to the user's editing operations to obtain a second highlight segment; and publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functions, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not constitute a limitation on the module itself in some cases; for example, the acquisition module may also be described as a "first highlight segment acquisition module".
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, Example 1 provides a video processing method, including: acquiring a first highlight segment obtained by performing highlight recognition on a target video; in response to an editing instruction for a first target highlight segment, displaying an editing page corresponding to the first target highlight segment, where the editing page is used by a user to perform editing operations on the first target highlight segment; processing the first target highlight segment according to the user's editing operations to obtain a second highlight segment; and publishing the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, where after the step of acquiring a first highlight segment obtained by performing highlight recognition on a target video, the method further includes: displaying a video playback page, where the video playback page carries tag information of the first highlight segment, and the tag information is used to characterize feature information of the first highlight segment.
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 2, where the tag information includes one or more of the following: a cover image, the highlight recognition condition met, time information in the target video, and publication information indicating whether the segment has been published.
According to one or more embodiments of the present disclosure, Example 4 provides the method of Example 1, where the first target highlight segment includes multiple sub-highlight segments of the first highlight segment; before displaying the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, the method further includes: merging the multiple sub-highlight segments according to the user's selection operation to obtain the first target highlight segment.
According to one or more embodiments of the present disclosure, Example 5 provides the method of Example 1, where the first target highlight segment includes multiple sub-highlight segments of the first highlight segment; the processing the first target highlight segment according to the user's editing operations to obtain a second highlight segment includes: processing the multiple sub-highlight segments according to the user's editing operations, and merging the processed sub-highlight segments into the second highlight segment.
According to one or more embodiments of the present disclosure, Example 6 provides the method of Example 1, further including: in response to an editing instruction for a second target highlight segment, displaying an editing page corresponding to the second target highlight segment, where the second target highlight segment is any published highlight segment, and the editing page is used by the user to perform editing operations on the second target highlight segment; processing the second target highlight segment according to the user's editing operations to obtain a third highlight segment; and publishing the third highlight segment in response to receiving a publishing instruction for the third highlight segment.
According to one or more embodiments of the present disclosure, Example 7 provides the method of Example 1, where the acquiring a first highlight segment obtained by performing highlight recognition on a target video includes: acquiring the target video and performing highlight recognition on the target video to obtain the first highlight segment; or sending the target video to a server so that the server performs highlight recognition on the target video, and acquiring the first highlight segment according to the server's highlight recognition result.
According to one or more embodiments of the present disclosure, Example 8 provides the method of Example 1, further including: displaying, in the process of acquiring the first highlight segment, prompt information used to prompt the user that highlight recognition is being performed on the target video.
According to one or more embodiments of the present disclosure, Example 9 provides the method of Example 1, where before displaying the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, the method further includes: in response to a preview instruction for the first target highlight segment, displaying a preview page corresponding to the first target highlight segment to play the first target highlight segment.
According to one or more embodiments of the present disclosure, Example 10 provides the method of any one of Examples 1 to 9, where the target video is a live-streaming replay video.
According to one or more embodiments of the present disclosure, Example 11 provides a video processing apparatus, including: an acquisition module configured to acquire a first highlight segment obtained by performing highlight recognition on a target video; a first display module configured to display, in response to an editing instruction for a first target highlight segment, an editing page corresponding to the first target highlight segment, where the editing page is used by a user to perform editing operations on the first target highlight segment; a first processing module configured to process the first target highlight segment according to the user's editing operations to obtain a second highlight segment; and a first publishing module configured to publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
According to one or more embodiments of the present disclosure, Example 12 provides a computer-readable medium having a computer program stored thereon, where the program, when executed by a processing device, implements the steps of the methods described in Examples 1 to 10.
According to one or more embodiments of the present disclosure, Example 13 provides an electronic device, including: a storage device on which a computer program is stored; and a processing device configured to execute the computer program in the storage device to implement the steps of the methods described in Examples 1 to 10.
According to one or more embodiments of the present disclosure, Example 14 provides a computer program product, including computer program instructions that cause a computer to execute the steps of the method described in any one of Examples 1 to 10.
According to one or more embodiments of the present disclosure, Example 15 provides a computer program that causes a computer to execute the steps of the method described in any one of Examples 1 to 10.
The above description is only an explanation of the preferred embodiments of the present disclosure and the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring these operations to be performed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely exemplary forms of implementing the claims. Regarding the apparatus in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments related to the method, and will not be elaborated here.

Claims (15)

  1. A video processing method, characterized in that the method comprises:
    acquiring a first highlight segment obtained by performing highlight recognition on a target video;
    in response to an editing instruction for a first target highlight segment, displaying an editing page corresponding to the first target highlight segment, wherein the editing page is used by a user to perform editing operations on the first target highlight segment;
    processing the first target highlight segment according to the user's editing operations to obtain a second highlight segment; and
    publishing the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
  2. The method according to claim 1, characterized in that after the step of acquiring a first highlight segment obtained by performing highlight recognition on a target video, the method further comprises:
    displaying a video playback page, wherein the video playback page carries tag information of the first highlight segment, and the tag information is used to characterize feature information of the first highlight segment.
  3. The method according to claim 2, characterized in that the tag information comprises one or more of the following: a cover image, the highlight recognition condition met, time information in the target video, and publication information indicating whether the segment has been published.
  4. The method according to any one of claims 1-3, characterized in that the first target highlight segment comprises multiple sub-highlight segments of the first highlight segment;
    before displaying the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, the method further comprises:
    merging the multiple sub-highlight segments according to the user's selection operation to obtain the first target highlight segment.
  5. The method according to any one of claims 1-4, characterized in that the first target highlight segment comprises multiple sub-highlight segments of the first highlight segment;
    the processing the first target highlight segment according to the user's editing operations to obtain a second highlight segment comprises:
    processing the multiple sub-highlight segments according to the user's editing operations, and merging the processed sub-highlight segments into the second highlight segment.
  6. The method according to any one of claims 1-5, characterized in that the method further comprises:
    in response to an editing instruction for a second target highlight segment, displaying an editing page corresponding to the second target highlight segment, wherein the second target highlight segment is any published highlight segment, and the editing page is used by the user to perform editing operations on the second target highlight segment;
    processing the second target highlight segment according to the user's editing operations to obtain a third highlight segment; and
    publishing the third highlight segment in response to receiving a publishing instruction for the third highlight segment.
  7. The method according to any one of claims 1-6, characterized in that the acquiring a first highlight segment obtained by performing highlight recognition on a target video comprises:
    acquiring the target video; and
    performing highlight recognition on the target video to obtain the first highlight segment; or sending the target video to a server so that the server performs highlight recognition on the target video, and acquiring the first highlight segment according to the server's highlight recognition result.
  8. The method according to any one of claims 1-7, characterized in that the method further comprises:
    displaying, in the process of acquiring the first highlight segment, prompt information used to prompt the user that highlight recognition is being performed on the target video.
  9. The method according to any one of claims 1-8, characterized in that before displaying the editing page corresponding to the first target highlight segment in response to the editing instruction for the first target highlight segment, the method further comprises:
    in response to a preview instruction for the first target highlight segment, displaying a preview page corresponding to the first target highlight segment to play the first target highlight segment.
  10. The method according to any one of claims 1-9, characterized in that the target video is a live-streaming replay video.
  11. A video processing apparatus, characterized in that the apparatus comprises:
    an acquisition module configured to acquire a first highlight segment obtained by performing highlight recognition on a target video;
    a first display module configured to display, in response to an editing instruction for a first target highlight segment, an editing page corresponding to the first target highlight segment, wherein the editing page is used by a user to perform editing operations on the first target highlight segment;
    a first processing module configured to process the first target highlight segment according to the user's editing operations to obtain a second highlight segment; and
    a first publishing module configured to publish the second highlight segment in response to receiving a publishing instruction for the second highlight segment.
  12. A computer-readable medium having a computer program stored thereon, characterized in that the program, when executed by a processing device, implements the steps of the method according to any one of claims 1-10.
  13. An electronic device, characterized by comprising:
    a storage device on which a computer program is stored; and
    a processing device configured to execute the computer program in the storage device to implement the steps of the method according to any one of claims 1-10.
  14. A computer program product, characterized by comprising computer program instructions that, when executed by a computing device, cause the computing device to implement the method according to any one of claims 1-10.
  15. A computer program, characterized in that, when executed by a computing device, the computer program causes the computing device to implement the method according to any one of claims 1-10.
PCT/CN2021/076415 2020-04-02 2021-02-09 视频处理方法、装置、可读介质及电子设备 WO2021196903A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21778706.8A EP4131981A4 (en) 2020-04-02 2021-02-09 VIDEO PROCESSING METHOD AND APPARATUS, READABLE MEDIA AND ELECTRONIC DEVICE
US17/885,459 US20220385997A1 (en) 2020-04-02 2022-08-10 Video processing method and apparatus, readable medium and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010255900.6 2020-04-02
CN202010255900.6A CN111447489A (zh) 2020-04-02 2020-04-02 视频处理方法、装置、可读介质及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/885,459 Continuation US20220385997A1 (en) 2020-04-02 2022-08-10 Video processing method and apparatus, readable medium and electronic device

Publications (1)

Publication Number Publication Date
WO2021196903A1 true WO2021196903A1 (zh) 2021-10-07

Family

ID=71652673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076415 WO2021196903A1 (zh) 2020-04-02 2021-02-09 视频处理方法、装置、可读介质及电子设备

Country Status (4)

Country Link
US (1) US20220385997A1 (zh)
EP (1) EP4131981A4 (zh)
CN (1) CN111447489A (zh)
WO (1) WO2021196903A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222150A (zh) * 2021-11-19 2022-03-22 北京达佳互联信息技术有限公司 数据处理方法、装置、电子设备及存储介质

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447489A (zh) * 2020-04-02 2020-07-24 北京字节跳动网络技术有限公司 视频处理方法、装置、可读介质及电子设备
CN112084369A (zh) * 2020-08-03 2020-12-15 广州数说故事信息科技有限公司 一种基于视频直播高光时刻挖掘方法和模型
CN112069360A (zh) * 2020-09-15 2020-12-11 北京字跳网络技术有限公司 音乐海报生成方法、装置、电子设备及介质
CN114363641A (zh) * 2020-10-13 2022-04-15 阿里巴巴集团控股有限公司 目标视频生成方法及装置
CN112380929A (zh) * 2020-10-30 2021-02-19 北京字节跳动网络技术有限公司 一种高光片段的获取方法、装置、电子设备和存储介质
CN112533008A (zh) * 2020-11-16 2021-03-19 北京达佳互联信息技术有限公司 视频回放方法、装置、电子设备及存储介质
CN112714340B (zh) * 2020-12-22 2022-12-06 北京百度网讯科技有限公司 视频处理方法、装置、设备、存储介质和计算机程序产品
CN113014948B (zh) * 2021-03-08 2023-11-03 广州市网星信息技术有限公司 一种视频录制及合成方法、装置、设备及存储介质
CN113225571B (zh) * 2021-03-25 2022-10-04 海南车智易通信息技术有限公司 一种直播封面的处理系统、方法及计算设备
CN113722040B (zh) * 2021-09-07 2023-09-05 北京达佳互联信息技术有限公司 作品处理方法、装置、计算机设备及介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811787A (zh) * 2014-10-27 2015-07-29 深圳市腾讯计算机系统有限公司 游戏视频录制方法及装置
CN106303723A (zh) * 2016-08-11 2017-01-04 网易(杭州)网络有限公司 视频处理方法和装置
US20180025078A1 (en) * 2016-07-21 2018-01-25 Twitter, Inc. Live video streaming services with machine-learning based highlight replays
CN108062409A (zh) * 2017-12-29 2018-05-22 北京奇艺世纪科技有限公司 直播视频摘要的生成方法、装置及电子设备
CN108833969A (zh) * 2018-06-28 2018-11-16 腾讯科技(深圳)有限公司 一种直播流的剪辑方法、装置以及设备
CN109120987A (zh) * 2018-09-20 2019-01-01 珠海市君天电子科技有限公司 一种视频录制方法、装置、终端及计算机可读存储介质
CN111447489A (zh) * 2020-04-02 2020-07-24 北京字节跳动网络技术有限公司 视频处理方法、装置、可读介质及电子设备

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120087637A1 (en) * 2002-01-29 2012-04-12 Logan James D Methods and apparatus for recording and replaying video broadcasts
EP1958420A2 (en) * 2005-12-04 2008-08-20 Turner Broadcast System, Inc (TBS, Inc.) System and method for delivering video and audio content over a network
US9953034B1 (en) * 2012-04-17 2018-04-24 Google Llc System and method for sharing trimmed versions of digital media items
US10068614B2 (en) * 2013-04-26 2018-09-04 Microsoft Technology Licensing, Llc Video service with automated video timeline curation
US20140376887A1 (en) * 2013-06-24 2014-12-25 Adobe Systems Incorporated Mobile device video selection and edit
US9999836B2 (en) * 2013-11-20 2018-06-19 Microsoft Technology Licensing, Llc User-defined channel
CN110383848B (zh) * 2017-03-07 2022-05-06 交互数字麦迪逊专利控股公司 用于多设备呈现的定制视频流式传输
US10664524B2 (en) * 2017-09-13 2020-05-26 Facebook, Inc. Highlighting portions of a live video broadcast
CN108924576A (zh) * 2018-07-10 2018-11-30 武汉斗鱼网络科技有限公司 一种视频标注方法、装置、设备及介质
CN109637561A (zh) * 2018-11-13 2019-04-16 成都依能科技股份有限公司 一种多通道音视频自动智能编辑方法
CN109547841B (zh) * 2018-12-20 2020-02-07 北京微播视界科技有限公司 短视频数据的处理方法、装置及电子设备
CN109640173B (zh) * 2019-01-11 2020-09-15 腾讯科技(深圳)有限公司 一种视频播放方法、装置、设备及介质
CN109640112B (zh) * 2019-01-15 2021-11-23 广州虎牙信息科技有限公司 视频处理方法、装置、设备及存储介质
CN110798716A (zh) * 2019-11-19 2020-02-14 深圳市迅雷网络技术有限公司 视频精彩内容播放方法以及相关装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811787A (zh) * 2014-10-27 2015-07-29 深圳市腾讯计算机系统有限公司 游戏视频录制方法及装置
US20180025078A1 (en) * 2016-07-21 2018-01-25 Twitter, Inc. Live video streaming services with machine-learning based highlight replays
CN106303723A (zh) * 2016-08-11 2017-01-04 网易(杭州)网络有限公司 视频处理方法和装置
CN108062409A (zh) * 2017-12-29 2018-05-22 北京奇艺世纪科技有限公司 直播视频摘要的生成方法、装置及电子设备
CN108833969A (zh) * 2018-06-28 2018-11-16 腾讯科技(深圳)有限公司 一种直播流的剪辑方法、装置以及设备
CN109120987A (zh) * 2018-09-20 2019-01-01 珠海市君天电子科技有限公司 一种视频录制方法、装置、终端及计算机可读存储介质
CN111447489A (zh) * 2020-04-02 2020-07-24 北京字节跳动网络技术有限公司 视频处理方法、装置、可读介质及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4131981A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222150A (zh) * 2021-11-19 2022-03-22 北京达佳互联信息技术有限公司 数据处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN111447489A (zh) 2020-07-24
US20220385997A1 (en) 2022-12-01
EP4131981A1 (en) 2023-02-08
EP4131981A4 (en) 2023-07-19

Similar Documents

Publication Publication Date Title
WO2021196903A1 (zh) 视频处理方法、装置、可读介质及电子设备
WO2021004221A1 (zh) 特效的展示处理方法、装置及电子设备
WO2021008223A1 (zh) 信息的确定方法、装置及电子设备
WO2021249168A1 (zh) 视频处理方法、装置、电子设备及计算机可读存储介质
WO2021218518A1 (zh) 视频的处理方法、装置、设备及介质
WO2023072296A1 (zh) 多媒体信息处理方法、装置、电子设备和存储介质
WO2022042389A1 (zh) 搜索结果的展示方法、装置、可读介质和电子设备
WO2023165515A1 (zh) 拍摄方法、装置、电子设备和存储介质
WO2024008184A1 (zh) 一种信息展示方法、装置、电子设备、计算机可读介质
WO2021057740A1 (zh) 视频生成方法、装置、电子设备和计算机可读介质
WO2023005831A1 (zh) 一种资源播放方法、装置、电子设备和存储介质
WO2021197024A1 (zh) 视频特效配置文件生成方法、视频渲染方法及装置
WO2022262645A1 (zh) 视频的处理方法、装置、电子设备和存储介质
CN111818383B (zh) 视频数据的生成方法、系统、装置、电子设备及存储介质
WO2023169356A1 (zh) 图像处理方法、装置、设备及存储介质
WO2023179424A1 (zh) 弹幕添加方法、装置、电子设备和存储介质
WO2023088006A1 (zh) 云游戏交互方法、装置、可读介质和电子设备
WO2021227953A1 (zh) 图像特效配置方法、图像识别方法、装置及电子设备
WO2021218981A1 (zh) 互动记录的生成方法、装置、设备及介质
WO2024037491A1 (zh) 媒体内容处理方法、装置、设备及存储介质
WO2023134617A1 (zh) 一种模板选择方法、装置、电子设备及存储介质
WO2023279951A1 (zh) 录屏视频的处理方法、装置、可读介质和电子设备
WO2022218109A1 (zh) 交互方法, 装置, 电子设备及计算机可读存储介质
WO2021031909A1 (zh) 数据内容的输出方法、装置、电子设备及计算机可读介质
CN112153439A (zh) 互动视频处理方法、装置、设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21778706

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021778706

Country of ref document: EP

Effective date: 20221102