CN114630141A - Video processing method and related equipment - Google Patents


Info

Publication number
CN114630141A
CN114630141A (Application No. CN202210271174.6A)
Authority
CN
China
Prior art keywords
sliding window
previous
current
data
interaction
Prior art date
Legal status
Pending
Application number
CN202210271174.6A
Other languages
Chinese (zh)
Inventor
向君
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210271174.6A priority Critical patent/CN114630141A/en
Publication of CN114630141A publication Critical patent/CN114630141A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The embodiments of the present disclosure provide a video processing method and related equipment. The method includes the following steps: acquiring a video to be processed; respectively obtaining current operation feedback data of a current sliding window of the video to be processed, previous operation feedback data of a previous sliding window, and next operation feedback data of a next sliding window, wherein the current sliding window at least partially overlaps the previous sliding window and at least partially overlaps the next sliding window; if the current operation feedback data is greater than both the previous operation feedback data and the next operation feedback data, determining a target time period according to the previous sliding window and the next sliding window; and marking a target video segment corresponding to the target time period in the video to be processed. The method can automatically mark the target video segment corresponding to the target time period in the video to be processed, improving the marking accuracy of the target video segment and saving marking time.

Description

Video processing method and related equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video processing method, a video processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of internet technology and smart devices, applications for watching videos and live broadcasts have become increasingly popular, and watching videos and live streams has become a part of daily life.
In the related art, highlight points in a live recorded video are marked manually for video editing. However, manual marking takes a great deal of time and yields poor marking accuracy.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The embodiment of the disclosure provides a video processing method, a video processing device, an electronic device, a computer-readable storage medium and a computer program product, which can automatically mark a target video segment corresponding to a target time period in a video to be processed, improve the marking accuracy of the target video segment and save the marking time of the target video segment.
The embodiments of the present disclosure provide a video processing method, which includes the following steps: acquiring a video to be processed; respectively obtaining current operation feedback data of a current sliding window of the video to be processed, previous operation feedback data of a previous sliding window, and next operation feedback data of a next sliding window, wherein the current sliding window at least partially overlaps the previous sliding window and at least partially overlaps the next sliding window; if the current operation feedback data is greater than both the previous operation feedback data and the next operation feedback data, determining a target time period according to the previous sliding window and the next sliding window; and marking a target video segment corresponding to the target time period in the video to be processed.
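The claimed steps amount to a local-peak scan over overlapping sliding windows: a window whose feedback exceeds both neighbors marks a highlight, and the target period spans from the start of the previous window to the end of the next. A rough sketch in Python (not the patented implementation; `Window` and its field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Window:
    start: float     # seconds into the video
    end: float
    feedback: float  # aggregate operation feedback data for this window

def find_target_periods(windows):
    """Scan consecutive overlapping windows; a local peak in feedback
    yields a target period [previous window start, next window end]."""
    periods = []
    for prev, cur, nxt in zip(windows, windows[1:], windows[2:]):
        if cur.feedback > prev.feedback and cur.feedback > nxt.feedback:
            periods.append((prev.start, nxt.end))
    return periods
```

For example, three windows with feedback 1, 5, 2 produce one target period covering the first window's start through the third window's end.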
In some exemplary embodiments of the present disclosure, the current operation feedback data includes current resource data and current interaction data of the current sliding window, the previous operation feedback data includes previous resource data and previous interaction data of the previous sliding window, and the next operation feedback data includes next resource data and next interaction data of the next sliding window; the target time period includes a first target time period. If the current operation feedback data is greater than both the previous operation feedback data and the next operation feedback data, determining a target time period according to the previous sliding window and the next sliding window includes: determining a first target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, or if the current interaction data is greater than both the previous interaction data and the next interaction data.
In some exemplary embodiments of the present disclosure, the current interaction data includes a current interaction number and a current interaction keyword number, the previous interaction data includes a previous interaction number and a previous interaction keyword number, and the next interaction data includes a next interaction number and a next interaction keyword number. Determining a first target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, or if the current interaction data is greater than both the previous interaction data and the next interaction data, includes: determining the first target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, or the current interaction number is greater than both the previous interaction number and the next interaction number, or the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number.
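Under this reading, a window qualifies for the first target time period when any one of the three signals — resource data, interaction count, or interaction keyword count — is a local peak. A hedged sketch (the `Signals` container and its field names are illustrative assumptions, not the patent's data model):

```python
from dataclasses import dataclass

@dataclass
class Signals:
    resource: int      # per-window resource data (e.g. gift count; assumed metric)
    interactions: int  # per-window interaction number
    keywords: int      # per-window interaction keyword number

def is_first_target(prev, cur, nxt):
    """First target time period: at least one of the three signals
    peaks in the current window relative to both neighbors."""
    return (
        (cur.resource > prev.resource and cur.resource > nxt.resource)
        or (cur.interactions > prev.interactions and cur.interactions > nxt.interactions)
        or (cur.keywords > prev.keywords and cur.keywords > nxt.keywords)
    )
```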
In some exemplary embodiments of the present disclosure, the target time period further includes a second target time period. If the current operation feedback data is greater than both the previous operation feedback data and the next operation feedback data, determining a target time period according to the previous sliding window and the next sliding window further includes: determining a second target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, and the current interaction data is greater than both the previous interaction data and the next interaction data.
In some exemplary embodiments of the present disclosure, marking a target video segment corresponding to the target time period in the to-be-processed video includes: marking a first target progress bar corresponding to the first target time period in the progress bar of the video to be processed in a first mode; and marking a second target progress bar corresponding to the second target time period in the progress bar of the video to be processed by using a second mode.
In some exemplary embodiments of the present disclosure, the current interaction data includes a current interaction number and a current interaction keyword number, the previous interaction data includes a previous interaction number and a previous interaction keyword number, and the next interaction data includes a next interaction number and a next interaction keyword number; the second target time period includes a first target sub-time period. Determining a second target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, and the current interaction data is greater than both the previous interaction data and the next interaction data, includes: determining the first target sub-time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, the current interaction number is greater than both the previous interaction number and the next interaction number, and the current interaction keyword number is less than or equal to the previous interaction keyword number or the next interaction keyword number; determining the first target sub-time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number, and the current interaction number is less than or equal to the previous interaction number or the next interaction number; and determining the first target sub-time period according to the previous sliding window and the next sliding window if the current interaction number is greater than both the previous interaction number and the next interaction number, the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number, and the current resource data is less than or equal to the previous resource data or the next resource data.
In some exemplary embodiments of the present disclosure, the second target time period further includes a second target sub-time period. Determining a second target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, and the current interaction data is greater than both the previous interaction data and the next interaction data, further includes: determining the second target sub-time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, the current interaction number is greater than both the previous interaction number and the next interaction number, and the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number.
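Reading the preceding paragraphs together, the tiering can be summarized by how many of the three signals peak simultaneously in the current window: one peaking signal yields the first target time period, exactly two the first target sub-time period, and all three the second target sub-time period. One possible sketch of that interpretation (not the patent's literal claim language; triples stand in for the window signals):

```python
def classify(prev, cur, nxt):
    """prev/cur/nxt are (resource, interaction number, keyword number) triples.
    Returns the tier of the current window, or None if no signal peaks."""
    peaks = sum(cur[i] > prev[i] and cur[i] > nxt[i] for i in range(3))
    return {3: "second sub-period",
            2: "first sub-period",
            1: "first target period"}.get(peaks)
```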
In some exemplary embodiments of the present disclosure, marking a target video segment corresponding to the target time period in the to-be-processed video includes: marking a first target progress bar corresponding to the first target time period in the progress bar of the video to be processed by using a first mode; marking a second target progress bar corresponding to the first target sub-time period in the progress bar of the video to be processed by using a second mode; and marking a third target progress bar corresponding to the second target sub-time period in the progress bar of the video to be processed by using a third mode.
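The three marking "modes" could be, for example, three colors on the progress bar. A toy text rendering of the idea (glyphs and tier names are illustrative assumptions, not the patent's UI):

```python
def mark_progress_bar(duration_s, periods):
    """Render a one-character-per-second progress bar: '.' unmarked,
    '-' first target period, '=' first sub-period, '#' second sub-period."""
    bar = ["."] * int(duration_s)
    glyph = {"first target period": "-",
             "first sub-period": "=",
             "second sub-period": "#"}
    for start, end, tier in periods:
        for i in range(int(start), min(int(end), len(bar))):
            bar[i] = glyph[tier]
    return "".join(bar)
```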
In some exemplary embodiments of the present disclosure, determining a target time period according to the previous sliding window and the next sliding window includes: determining the starting time of the previous sliding window as the starting time of the target time period, and determining the ending time of the next sliding window as the ending time of the target time period; and determining the target time period according to its starting time and ending time.
In some exemplary embodiments of the present disclosure, the current sliding window, the previous sliding window, and the next sliding window have the same window length; the starting time of the current sliding window is the middle time of the previous sliding window, and the ending time of the current sliding window is the middle time of the next sliding window.
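This geometry means each window starts at the midpoint of its predecessor, i.e., consecutive windows overlap by half a window length. A sketch of window generation under that assumption:

```python
def make_windows(duration, length):
    """Generate (start, end) windows of equal length over [0, duration];
    each window starts at the midpoint of the previous one (50% overlap)."""
    step = length / 2
    windows = []
    t = 0.0
    while t + length <= duration:
        windows.append((t, t + length))
        t += step
    return windows
```

For a 10-second video and 4-second windows this yields (0, 4), (2, 6), (4, 8), (6, 10).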
An embodiment of the present disclosure provides a video processing apparatus, including: an acquisition module configured to acquire a video to be processed; an obtaining module configured to respectively obtain current operation feedback data of a current sliding window of the video to be processed, previous operation feedback data of a previous sliding window, and next operation feedback data of a next sliding window, where the current sliding window at least partially overlaps the previous sliding window and at least partially overlaps the next sliding window; a determining module configured to determine a target time period according to the previous sliding window and the next sliding window if the current operation feedback data is greater than both the previous operation feedback data and the next operation feedback data; and a marking module configured to mark a target video segment corresponding to the target time period in the video to be processed.
In some exemplary embodiments of the present disclosure, the current operation feedback data includes current resource data and current interaction data of the current sliding window, the previous operation feedback data includes previous resource data and previous interaction data of the previous sliding window, and the next operation feedback data includes next resource data and next interaction data of the next sliding window; the target time period includes a first target time period. The determining module is further configured to determine a first target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, or if the current interaction data is greater than both the previous interaction data and the next interaction data.
In some exemplary embodiments of the present disclosure, the current interaction data includes a current interaction number and a current interaction keyword number, the previous interaction data includes a previous interaction number and a previous interaction keyword number, and the next interaction data includes a next interaction number and a next interaction keyword number. The determining module is further configured to determine the first target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, or the current interaction number is greater than both the previous interaction number and the next interaction number, or the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number.
In some exemplary embodiments of the present disclosure, the target time period further comprises a second target time period; the determining module is further configured to determine a second target time period according to the previous sliding window and the next sliding window if the current resource data is greater than the previous resource data and the next resource data, and the current interaction data is greater than the previous interaction data and greater than the next interaction data.
In some exemplary embodiments of the present disclosure, the marking module is further configured to perform marking, in the progress bar of the video to be processed, a first target progress bar corresponding to the first target time period in a first manner; the marking module is further configured to mark a second target progress bar corresponding to the second target time period in a progress bar of the video to be processed by using a second mode.
In some exemplary embodiments of the present disclosure, the current interaction data includes a current interaction number and a current interaction keyword number, the previous interaction data includes a previous interaction number and a previous interaction keyword number, and the next interaction data includes a next interaction number and a next interaction keyword number; the second target time period includes a first target sub-time period. The determining module is further configured to determine the first target sub-time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, the current interaction number is greater than both the previous interaction number and the next interaction number, and the current interaction keyword number is less than or equal to the previous interaction keyword number or the next interaction keyword number. The determining module is further configured to determine the first target sub-time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number, and the current interaction number is less than or equal to the previous interaction number or the next interaction number. The determining module is further configured to determine the first target sub-time period according to the previous sliding window and the next sliding window if the current interaction number is greater than both the previous interaction number and the next interaction number, the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number, and the current resource data is less than or equal to the previous resource data or the next resource data.
In some exemplary embodiments of the present disclosure, the second target time period further includes a second target sub-time period. The determining module is further configured to determine the second target sub-time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, the current interaction number is greater than both the previous interaction number and the next interaction number, and the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number.
In some exemplary embodiments of the present disclosure, the marking module is further configured to perform marking a first target progress bar corresponding to the first target time period in a progress bar of the video to be processed in a first manner; the marking module is further configured to mark a second target progress bar corresponding to the first target sub-time period in a progress bar of the video to be processed in a second mode; the marking module is further configured to mark a third target progress bar corresponding to the second target sub-period in the progress bar of the video to be processed in a third mode.
In some exemplary embodiments of the present disclosure, the determining module is further configured to determine the starting time of the previous sliding window as the starting time of the target time period, determine the ending time of the next sliding window as the ending time of the target time period, and determine the target time period according to the starting time and the ending time of the target time period.
In some exemplary embodiments of the present disclosure, the current sliding window, the previous sliding window, and the next sliding window have the same window length; the starting time of the current sliding window is the middle time of the previous sliding window, and the ending time of the current sliding window is the middle time of the next sliding window.
An embodiment of the present disclosure provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the executable instructions to implement the video processing method as any one of the above.
The disclosed embodiments provide a computer-readable storage medium, whose instructions, when executed by a processor of an electronic device, enable the electronic device to perform a video processing method as any one of the above.
The disclosed embodiments provide a computer program product comprising a computer program that when executed by a processor implements the video processing method of any of the above.
According to the video processing method provided by some embodiments of the present disclosure, on the one hand, current operation feedback data of a current sliding window of a video to be processed, previous operation feedback data of a previous sliding window, and next operation feedback data of a next sliding window are obtained respectively; on the other hand, when the current operation feedback data is greater than both the previous operation feedback data and the next operation feedback data, a target time period is determined according to the previous sliding window and the next sliding window. A target video segment corresponding to the target time period can thus be marked automatically in the video to be processed, which improves marking accuracy and saves marking time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the video processing method of the embodiments of the present disclosure may be applied.
Fig. 2 is a flow diagram illustrating a video processing method according to an example embodiment.
FIG. 3 is a schematic diagram of a sliding window shown according to an example.
FIG. 4 is a schematic diagram illustrating a sliding window sliding update, according to an example.
Fig. 5 is a diagram illustrating operation feedback data for sliding windows of a to-be-processed video, according to an example.
Fig. 6 is a block diagram illustrating a video processing apparatus according to an example embodiment.
FIG. 7 is a block diagram illustrating an electronic device suitable for use in implementing exemplary embodiments of the present disclosure, according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in at least one hardware module or integrated circuit, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In this specification, the terms "a", "an", "the", "said" and "at least one" are used to indicate the presence of at least one element/component/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and are not limiting on the number of their objects.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the video processing method of the embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture may include a server 101, a network 102, a terminal device 103, a terminal device 104, and a terminal device 105. Network 102 is the medium used to provide communication links between terminal device 103, terminal device 104, or terminal device 105, and server 101. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The server 101 may be a server that provides various services, such as a background management server that supports devices operated by users of the terminal device 103, the terminal device 104, or the terminal device 105. The background management server may analyze and otherwise process received data, such as requests, and feed the processing results back to the terminal device 103, the terminal device 104, or the terminal device 105.
Terminal device 103, terminal device 104, and terminal device 105 may be, but are not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a wearable smart device, a virtual reality device, an augmented reality device, and the like.
In the embodiment of the present disclosure, the terminal device 103, the terminal device 104, and the terminal device 105 may record a to-be-processed video, and send the to-be-processed video to the server 101 for processing.
In the embodiment of the present disclosure, the server 101 may: acquiring a video to be processed; respectively obtaining current operation feedback data of a current sliding window of a video to be processed, previous operation feedback data of a previous sliding window and next operation feedback data of a next sliding window, wherein the current sliding window is at least partially overlapped with the previous sliding window, and the current sliding window is at least partially overlapped with the next sliding window; if the current operation feedback data are larger than the last operation feedback data and the next operation feedback data, determining a target time period according to the last sliding window and the next sliding window; and marking a target video clip corresponding to the target time period in the video to be processed.
In the embodiment of the present disclosure, the server 101 may send the marked video to the terminal device 103, the terminal device 104, and the terminal device 105 for playing.
It should be understood that the numbers of the terminal device 103, the terminal device 104, the terminal device 105, the network 102, and the server 101 in fig. 1 are merely illustrative. The server 101 may be a physical server, a server cluster composed of a plurality of servers, or a cloud server, and any number of terminal devices, networks, and servers may be provided according to actual needs.
Hereinafter, the steps of the video processing method in the exemplary embodiment of the present disclosure will be described in more detail with reference to the drawings and the embodiment. The method provided by the embodiment of the present disclosure may be executed by any electronic device, for example, the server 101 and/or the terminal device 103 in fig. 1, but the present disclosure is not limited thereto.
Fig. 2 is a flow diagram illustrating a video processing method according to an exemplary embodiment.
As shown in fig. 2, a method provided by the embodiment of the present disclosure may include the following steps.
In step S210, a video to be processed is acquired.
In the embodiment of the present disclosure, the video to be processed may be various types of videos, for example, a video recorded in a live broadcast, a video played on a video playing platform, a video recorded through a terminal device, and the like, which is not limited in the present disclosure.
For example, the to-be-processed video uploaded by a terminal device may be received directly, or the to-be-processed video may be obtained directly from a network or a database. Alternatively, when the number of to-be-processed videos is large or the videos occupy a large amount of storage, a video processing request sent by a terminal device may be received, where the request carries a storage address of the to-be-processed video, and the to-be-processed video is obtained from the memory or cache of the terminal device or from a third-party database according to the storage address.
In step S220, current operation feedback data of a current sliding window of the video to be processed, previous operation feedback data of a previous sliding window, and next operation feedback data of a next sliding window are obtained, where the current sliding window and the previous sliding window at least partially overlap each other, and the current sliding window and the next sliding window at least partially overlap each other.
In the embodiment of the present disclosure, a sliding window may be used to obtain the operation feedback data of the video to be processed. Starting from a certain time of the video to be processed, the window slides along the video, and the operation feedback data within the window is obtained each time the window reaches a new position, so that a plurality of sliding windows and their operation feedback data are obtained.
In this disclosure, the current sliding window may be any one of the obtained sliding windows, where the last sliding window refers to a last sliding window of the current sliding window, the next sliding window refers to a next sliding window of the current sliding window, the current sliding window and the last sliding window at least partially overlap, and the current sliding window and the next sliding window at least partially overlap.
In the embodiments of the present disclosure, the window lengths of the current sliding window, the previous sliding window, and the next sliding window may be the same or different. In the following description, the case where the three window lengths are all the same is taken as an example, but the present disclosure is not limited thereto.
In the embodiment of the present disclosure, the window lengths of the current sliding window, the previous sliding window, and the next sliding window may be set according to actual requirements, for example, may be set to 10 minutes, 20 minutes, 1 hour, and the like, and in the following description, the window length is 10 minutes as an example, but the present disclosure is not limited thereto.
In the embodiment of the present disclosure, the start time of the current sliding window may be the middle time of the previous sliding window, and the end time of the current sliding window may be the middle time of the next sliding window. That is, the sliding window may slide with a step of 1/2 of the window length, so that the current sliding window overlaps the previous sliding window by half and overlaps the next sliding window by half. The following exemplary description takes a sliding step of 1/2 of the window length as an example, but those skilled in the art may set the sliding step according to the actual situation, and the present disclosure does not limit the sliding step.
For example, when the window length is 10 minutes, taking the example of starting processing from the start time (e.g., 0:00) of the video to be processed, the first sliding window is 0:00 to 0:10, the second sliding window is 0:05 to 0:15, the third sliding window is 0:10 to 0:20, the fourth sliding window is 0:15 to 0:25, and so on until the end time (or the designated time) of the video to be processed.
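The window layout above can be sketched in a few lines. This is a hypothetical illustration, not part of the disclosure; the function name and the use of seconds for the window length and step are assumptions for the example:

```python
# Hypothetical sketch: enumerate sliding-window boundaries for a video.
# A 10-minute (600 s) window sliding by half a window length (300 s),
# matching the example in the description.

def sliding_windows(video_length_s, window_s=600, step_s=300):
    """Yield (start, end) pairs in seconds, stepping by half the window."""
    start = 0
    while start + window_s <= video_length_s:
        yield (start, start + window_s)
        start += step_s

windows = list(sliding_windows(1500))  # a 25-minute video
# windows == [(0, 600), (300, 900), (600, 1200), (900, 1500)]
# i.e. 0:00-0:10, 0:05-0:15, 0:10-0:20, 0:15-0:25, as in the text.
```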
FIG. 3 is a schematic diagram of a sliding window shown according to an example.
Referring to FIG. 3, for example, t_k denotes the current sliding window (e.g., 0:30 to 0:40); t_{k-1} denotes the previous sliding window (e.g., 0:25 to 0:35); t_{k+1} denotes the next sliding window (e.g., 0:35 to 0:45); k is a positive integer greater than 1.
In the embodiment of the present disclosure, the operation feedback data refers to data generated by operations performed on the to-be-processed video by a user while watching it. The operations may include, but are not limited to, commenting, sending a bullet screen, gifting a virtual item, and the like; the operation feedback data is, for example, the data generated by bullet-screen operations while the user watches a live broadcast or a movie.
It should be noted that, the personal information data referred in the embodiments of the present disclosure are all authorized by the user, and the acquisition, storage, processing, transmission, and the like of the personal information all meet the requirements of relevant laws and regulations.
In the embodiment of the present disclosure, the current operation feedback data refers to operation feedback data in a current sliding window, the previous operation feedback data refers to operation feedback data in a previous sliding window, and the next operation feedback data refers to operation feedback data in a next sliding window.
With continued reference to FIG. 3, the current operation feedback data of the current sliding window t_k, the previous operation feedback data of the previous sliding window t_{k-1}, and the next operation feedback data of the next sliding window t_{k+1} may be obtained statistically in real time.
FIG. 4 is a schematic diagram illustrating a sliding window sliding update, according to an example.
Referring to fig. 4, taking the case where the window length of the current sliding window t_k is L as an example, when the current sliding window t_k is updated to the next sliding window t_{k+1}, the information in the front L/2 of the window is moved out of the window, the information in the rear L/2 of the window moves forward by L/2, and new information of length L/2 enters the window, forming the next sliding window t_{k+1}.
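The half-window update of fig. 4 can be sketched as follows. This is a hypothetical illustration, not part of the disclosure: each window is stored as two L/2-length buckets of event counts, so advancing the window only requires counting the newly arrived half rather than re-scanning the whole window.

```python
from collections import deque

# Hypothetical sketch of the half-step window update: the window holds two
# L/2-length buckets of counts; advancing drops the front half and appends
# the newly arrived half, so only new data needs to be counted.

class HalfStepWindow:
    def __init__(self):
        self.halves = deque(maxlen=2)  # two L/2-length buckets of counts

    def advance(self, new_half_count):
        self.halves.append(new_half_count)  # old front half falls out

    @property
    def total(self):
        return sum(self.halves)

w = HalfStepWindow()
w.advance(4)   # first half-window: 4 events
w.advance(7)   # window t_k now covers counts 4 + 7
w.advance(9)   # t_{k+1}: the 4 is shifted out; window covers 7 + 9
```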
In step S230, if the current operation feedback data is greater than the previous operation feedback data and the next operation feedback data, the target time period is determined according to the previous sliding window and the next sliding window.
In the embodiment of the present disclosure, if the current operation feedback data is greater than the previous operation feedback data and also greater than the next operation feedback data, that is, the current operation feedback data is a local maximum (a maximum within a local time period), the target time period may be determined according to the previous sliding window and the next sliding window.
In the embodiment of the present disclosure, the time period covered by the window in which the local maximum occurs (i.e., the current sliding window) together with its two adjacent windows before and after (i.e., the previous sliding window and the next sliding window) may be determined as the target time period.
In an exemplary embodiment, the start time of the last sliding window may be determined as the start time of the target time period, and the end time of the next sliding window may be determined as the end time of the target time period; and determining the target time period according to the starting time of the target time period and the ending time of the target time period.
With continued reference to FIG. 3, for example, if the operation feedback data of the current sliding window t_k is greater than that of the previous sliding window t_{k-1} and also greater than that of the next sliding window t_{k+1}, the start time 0:25 of the previous sliding window t_{k-1} may be taken as the start time of the target time period, and the end time 0:45 of the next sliding window t_{k+1} may be taken as the end time of the target time period; that is, the target time period is 0:25 to 0:45.
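A minimal sketch of step S230 under these assumptions (window boundaries in seconds; the function name and data layout are illustrative, not from the disclosure):

```python
# Hypothetical sketch of step S230: when a window's feedback count is a local
# maximum, the target period spans from the previous window's start to the
# next window's end.

def target_periods(windows, counts):
    """windows: list of (start, end) in seconds; counts: feedback per window."""
    periods = []
    for k in range(1, len(windows) - 1):
        if counts[k] > counts[k - 1] and counts[k] > counts[k + 1]:
            periods.append((windows[k - 1][0], windows[k + 1][1]))
    return periods

# t_{k-1} = 0:25-0:35, t_k = 0:30-0:40, t_{k+1} = 0:35-0:45, in seconds
ws = [(1500, 2100), (1800, 2400), (2100, 2700)]
print(target_periods(ws, [12, 30, 8]))  # [(1500, 2700)], i.e. 0:25-0:45
```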
In step S240, a target video segment corresponding to the target time period is marked in the video to be processed.
In the embodiment of the present disclosure, the target video segment may be a highlight segment of the video to be processed, such as a highlight moment of a game, a funny moment of a live broadcast, or a highlight moment of a movie or television drama; there may be one or more target video segments in one video to be processed, which is not limited by this disclosure.
In the embodiment of the disclosure, the target video segment corresponding to the target time period may be marked in the video to be processed, for example, if the duration of the video to be processed is 5 hours, and the target time period is 0:25 to 0:45, 0:25 to 0:45 may be marked, so as to facilitate subsequent editing of the video to be processed.
In the embodiment of the present disclosure, a target progress bar of a target video segment corresponding to a target time period may be marked in a progress bar of a video to be processed, for example, the target progress bar may be marked with a color mark or a special symbol.
In the embodiment of the disclosure, after the target video segment corresponding to the target time period is marked in the video to be processed, the clip segment of the video to be processed can be automatically generated according to the target video segment, that is, the highlight moment in the video to be processed can be automatically clipped out for the user to watch.
According to the video processing method provided by the embodiment of the disclosure, the current operation feedback data of the current sliding window of a video to be processed, the previous operation feedback data of the previous sliding window and the next operation feedback data of the next sliding window are respectively obtained, on one hand, the current sliding window and the previous sliding window are at least partially overlapped, and the current sliding window and the next sliding window are at least partially overlapped, so that the reacquisition of the operation feedback data can be reduced, the time complexity is reduced, and the computer resources are saved; on the other hand, when the current operation feedback data is larger than the previous operation feedback data and the next operation feedback data, the target time period is determined according to the previous sliding window and the next sliding window, so that the target video clip corresponding to the target time period can be automatically marked in the video to be processed, the marking accuracy of the target video clip is improved, and the marking time of the target video clip is saved.
In addition, the marked target video segments can be used for subsequent video clips, and the video clipping efficiency is improved.
In the following, the current operation feedback data includes the current resource data and the current interaction data of the current sliding window, the previous operation feedback data includes the previous resource data and the previous interaction data of the previous sliding window, and the next operation feedback data includes the next resource data and the next interaction data of the next sliding window.
In the embodiment of the present disclosure, the interactive data is interactive data between a user watching a live broadcast and a main broadcast or other users watching the live broadcast, or the interactive data is interactive data between a user watching a video and a video publisher or other users watching the video. For example, like data, comment data, barrage data, etc. of a user watching a live or video.
In the embodiment of the present disclosure, the resource data is resource data donated to the anchor by a user watching a live broadcast, or the resource data is resource data donated to a video publisher by a user watching a video. For example, item data, gift data, etc. that a user watching a live or video gifts to a host or video publisher.
In the following description, the video to be processed is recorded in a live broadcast, the resource data is gift data, and the interactive data is bullet screen data, but the disclosure is not limited thereto.
In an exemplary embodiment, the step S230 may include: if the current resource data is larger than the previous resource data and the next resource data; or if the current interactive data is larger than the previous interactive data and the next interactive data, determining a first target time period according to the previous sliding window and the next sliding window.
In the embodiment of the present disclosure, it may be determined whether the current gift data is greater than both the previous gift data and the next gift data, and whether the current bullet screen data is greater than both the previous bullet screen data and the next bullet screen data. If only one of these two conditions is satisfied, the probability that the time period formed by the previous sliding window and the next sliding window is a highlight is considered relatively small, and that time period may be determined as the first target time period.
The gift data may be at least one of the number of gifts and the total value of the gifts.
In an exemplary embodiment, the step S230 may further include: and if the current resource data is larger than the previous resource data and the next resource data, and the current interaction data is larger than the previous interaction data and the next interaction data, determining a second target time period according to the previous sliding window and the next sliding window.
In the embodiment of the present disclosure, it may be determined whether the current gift data is greater than both the previous gift data and the next gift data, and whether the current bullet screen data is greater than both the previous bullet screen data and the next bullet screen data. If both conditions are satisfied, the probability that the time period formed by the previous sliding window and the next sliding window is a highlight is considered relatively high, and that time period may be determined as the second target time period.
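The two-level rule of the preceding paragraphs can be sketched as follows; the function name, triple layout, and returned labels are illustrative, not terminology from the disclosure:

```python
# Hypothetical sketch: if only one of the two conditions (resource peak,
# interaction peak) holds, mark a first target period; if both hold, mark a
# second (higher-confidence) target period.

def classify(res, inter):
    """res / inter: (previous, current, next) triples of counts."""
    res_peak = res[1] > res[0] and res[1] > res[2]
    inter_peak = inter[1] > inter[0] and inter[1] > inter[2]
    if res_peak and inter_peak:
        return "second"   # both peak: likely highlight
    if res_peak or inter_peak:
        return "first"    # one peaks: possible highlight
    return None

print(classify((3, 9, 2), (5, 4, 6)))  # only the gift data peaks
print(classify((3, 9, 2), (5, 8, 6)))  # gift and interaction data both peak
```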
In the embodiment of the present disclosure, the target video segment corresponding to the second target time period has more operation feedback data than the target video segment corresponding to the first target time period; that is, the probability that the target video segment corresponding to the second target time period is a highlight is higher than that of the target video segment corresponding to the first target time period.
In an exemplary embodiment, the step S240 may include: marking, in a first manner, a first target progress bar corresponding to the first target time period in the progress bar of the video to be processed; and marking, in a second manner, a second target progress bar corresponding to the second target time period in the progress bar of the video to be processed.
In the embodiment of the disclosure, different target progress bars corresponding to different target time periods can be marked in different ways.
In the embodiment of the present disclosure, the first manner and the second manner may, for example, use different colors: for example, the first manner may use a yellow mark and the second manner a red mark. Alternatively, the two manners may use different shades of one color; for example, the first manner may use a light red mark and the second manner a dark red mark.
In the embodiment of the disclosure, marking different target progress bars corresponding to different target time periods in different manners allows a user to distinguish target time periods of different degrees more intuitively and to watch the video selectively, improving user experience. In addition, the target video segments marked in different manners can be used for subsequent video editing: a video editor can edit the video according to the target video segments marked in different manners, thereby improving video editing efficiency.
The following description is made with reference to current interaction data including a current interaction number and a current interaction keyword number, previous interaction data including a previous interaction number and a previous interaction keyword number, and next interaction data including a next interaction number and a next interaction keyword number, but the disclosure is not limited thereto.
In the embodiment of the present disclosure, the number of interactions is the number of interactions between a user watching a live broadcast and a main broadcast or other users watching the live broadcast, or the number of interactions is the number of interactions between a user watching a video and a video publisher or other users watching the video. For example, the number of praise, number of comments, number of barracks, etc. for a user watching a live or video.
In the embodiment of the present disclosure, the number of the interactive keywords is the number of the interactive keywords between the user watching the live broadcast and the anchor broadcast or other users watching the live broadcast, or the number of the interactive keywords is the number of the interactive keywords between the user watching the video and the video publisher or other users watching the video. For example, the number of favorable keywords, the number of comment keywords, the number of bullet keywords, etc. of the user viewing the live broadcast or video.
In the embodiment of the present disclosure, the interactive keywords may be set according to actual situations, for example, the interactive keywords may be: "haha", "233", "anchor cow", etc.
In the following description, the number of interactions is the number of barrage, and the number of interaction keywords is the number of barrage keywords, but the disclosure is not limited thereto.
In an exemplary embodiment, if the current resource data is greater than the previous resource data and the next resource data; or, if the current interaction data is greater than the previous interaction data and the next interaction data, determining the first target time period according to the previous sliding window and the next sliding window may include: if the current resource data is larger than the previous resource data and the next resource data; or the current interaction number is larger than the previous interaction number and the next interaction number; or, if the number of the current interactive keywords is greater than the number of the previous interactive keywords and the number of the next interactive keywords, determining a first target time period according to the previous sliding window and the next sliding window.
In the embodiment of the present disclosure, it may be determined respectively whether the current gift data is greater than both the previous gift data and the next gift data, whether the current bullet screen number is greater than both the previous bullet screen number and the next bullet screen number, and whether the current bullet screen keyword number is greater than both the previous bullet screen keyword number and the next bullet screen keyword number. If only one of these three conditions is satisfied, the probability that the time period formed by the previous sliding window and the next sliding window is a highlight is considered relatively small, and that time period may be determined as the first target time period.
In an exemplary embodiment, if the current resource data is greater than the previous resource data and the next resource data, and the current interaction data is greater than the previous interaction data and the next interaction data, determining the second target time period according to the previous sliding window and the next sliding window may include: if the current resource data is greater than the previous resource data and the next resource data, the current interaction number is greater than the previous interaction number and the next interaction number, and the current interaction keyword number is less than or equal to the previous interaction keyword number or the next interaction keyword number, determining a first target sub-time period according to the previous sliding window and the next sliding window; if the current resource data is greater than the previous resource data and the next resource data, the current interaction keyword number is greater than the previous interaction keyword number and the next interaction keyword number, and the current interaction number is less than or equal to the previous interaction number or the next interaction number, determining a first target sub-time period according to the previous sliding window and the next sliding window; and if the current interaction number is greater than the previous interaction number and the next interaction number, the current interaction keyword number is greater than the previous interaction keyword number and the next interaction keyword number, and the current resource data is less than or equal to the previous resource data or the next resource data, determining a first target sub-time period according to the previous sliding window and the next sliding window.
In the embodiment of the present disclosure, it may be determined respectively whether the current gift data is greater than both the previous gift data and the next gift data, whether the current bullet screen number is greater than both the previous bullet screen number and the next bullet screen number, and whether the current bullet screen keyword number is greater than both the previous bullet screen keyword number and the next bullet screen keyword number. If only two of the three conditions are satisfied, the probability that the time period formed by the previous sliding window and the next sliding window is a highlight is considered medium, and that time period may be determined as the first target sub-time period.
In an exemplary embodiment, if the current resource data is greater than the previous resource data and the next resource data, and the current interaction data is greater than the previous interaction data and greater than the next interaction data, determining the second target time period according to the previous sliding window and the next sliding window, which may further include: if the current resource data is larger than the previous resource data and the next resource data; moreover, the current interaction quantity is greater than the previous interaction quantity and the next interaction quantity; and if the number of the current interactive keywords is larger than the number of the last interactive keywords and the number of the next interactive keywords, determining a second target sub-time period according to the last sliding window and the next sliding window.
In the embodiment of the present disclosure, it may be determined respectively whether the current gift data is greater than both the previous gift data and the next gift data, whether the current bullet screen number is greater than both the previous bullet screen number and the next bullet screen number, and whether the current bullet screen keyword number is greater than both the previous bullet screen keyword number and the next bullet screen keyword number. If all three conditions are satisfied, the probability that the time period formed by the previous sliding window and the next sliding window is a highlight is considered relatively large, and that time period may be determined as the second target sub-time period.
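The three-level classification described in the last few paragraphs can be sketched as follows; each signal is given as a (previous, current, next) triple, and the function name and returned labels are illustrative assumptions, not terminology from the disclosure:

```python
# Hypothetical sketch: count how many of the three signals (gift data,
# bullet screen number, bullet screen keyword number) peak in the current
# window; 1 peak -> first target period, 2 -> first target sub-period,
# 3 -> second target sub-period.

def level(gifts, barrages, keywords):
    """Each argument is a (previous, current, next) triple of counts."""
    peaks = sum(
        cur > prev and cur > nxt
        for prev, cur, nxt in (gifts, barrages, keywords)
    )
    return {1: "first target period",
            2: "first target sub-period",
            3: "second target sub-period"}.get(peaks)

print(level((1, 5, 2), (9, 4, 7), (3, 2, 6)))  # only gifts peak
print(level((1, 5, 2), (3, 8, 4), (3, 2, 6)))  # gifts and barrages peak
```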
In the embodiment of the present disclosure, the target video segment corresponding to the second target sub-time period has more active user behavior than the target video segment corresponding to the first target sub-time period; that is, the probability that the target video segment corresponding to the second target sub-time period is a highlight is higher than that of the target video segment corresponding to the first target sub-time period. Likewise, the target video segment corresponding to the first target sub-time period has more active user behavior than the target video segment corresponding to the first target time period; that is, the probability that the target video segment corresponding to the first target sub-time period is a highlight is higher than that of the target video segment corresponding to the first target time period.
In an exemplary embodiment, the step S240 may include: marking a first target progress bar corresponding to a first target time period in the progress bar of the video to be processed by using a first mode; marking a second target progress bar corresponding to the first target sub-time period in the progress bar of the video to be processed by using a second mode; and marking a third target progress bar corresponding to the second target sub-time period in the progress bar of the video to be processed by using a third mode.
In the embodiment of the present disclosure, different target progress bars corresponding to different target time periods may be marked in different manners.
In the embodiment of the present disclosure, the first manner, the second manner, and the third manner may, for example, use different colors: for example, the first manner may use a green mark, the second manner a yellow mark, and the third manner a red mark. Alternatively, the three manners may use different shades of one color; for example, the first manner may use light red, the second manner standard red, and the third manner dark red.
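A minimal sketch of the level-to-color mapping described above; the color names, labels, and data layout are illustrative assumptions only:

```python
# Hypothetical sketch: map the three target-period levels to progress-bar
# marking colors, as in the three-manner example above.

LEVEL_COLORS = {
    "first target period": "green",
    "first target sub-period": "yellow",
    "second target sub-period": "red",
}

def mark(progress_bar_segments):
    """progress_bar_segments: list of (start_s, end_s, level) tuples."""
    return [(s, e, LEVEL_COLORS[lvl]) for s, e, lvl in progress_bar_segments]

print(mark([(1500, 2700, "second target sub-period")]))
# [(1500, 2700, 'red')]
```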
Fig. 5 is a diagram illustrating operational feedback data for a sliding window of a pending video according to an example.
Taking a video to be processed recorded in a live broadcast process as an example, referring to fig. 5, line graphs of the number of bullet screens 501, the number of bullet screen keywords 502, and the number of gifts 503 are shown, where the abscissa is live broadcast time, and the ordinate is window content amount.
As can be seen from fig. 5, the bullet screen number 501 and the gift number 503 reach local maxima at the abscissa 200s, so a preset time period around 200s may be considered a first target sub-time period; the bullet screen keyword number 502 reaches a local maximum at 300s, so a preset time period around 300s may be considered a first target time period; the bullet screen number 501 and the bullet screen keyword number 502 reach local maxima at 500s, so a preset time period around 500s may be considered a first target sub-time period; and the bullet screen keyword number 502 reaches a local maximum at 600s, so a preset time period around 600s may be considered a first target time period.
In the embodiment of the disclosure, marking different target progress bars corresponding to different target time periods in different manners allows a user to distinguish target time periods of different degrees more intuitively and to watch the video selectively, improving user experience. In addition, the target video segments marked in different manners can be used for subsequent video editing: a video editor can edit the video according to the target video segments marked in different manners, thereby improving video editing efficiency.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the disclosure and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
Fig. 6 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment. Referring to fig. 6, the apparatus 600 may include an acquisition module 610, an obtaining module 620, a determining module 630, and a marking module 640.
The acquisition module 610 is configured to acquire a video to be processed; the obtaining module 620 is configured to respectively obtain current operation feedback data of a current sliding window of the video to be processed, previous operation feedback data of a previous sliding window, and next operation feedback data of a next sliding window, where the current sliding window at least partially overlaps the previous sliding window, and the current sliding window at least partially overlaps the next sliding window; the determining module 630 is configured to determine a target time period according to the previous sliding window and the next sliding window if the current operation feedback data is greater than both the previous operation feedback data and the next operation feedback data; and the marking module 640 is configured to mark a target video segment corresponding to the target time period in the video to be processed.
In some exemplary embodiments of the present disclosure, the current operation feedback data includes current resource data and current interaction data of the current sliding window, the previous operation feedback data includes previous resource data and previous interaction data of the previous sliding window, and the next operation feedback data includes next resource data and next interaction data of the next sliding window; the target time period includes a first target time period. The determining module 630 is further configured to determine the first target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, or if the current interaction data is greater than both the previous interaction data and the next interaction data.
In some exemplary embodiments of the present disclosure, the current interaction data includes a current interaction number and a current interaction keyword number, the previous interaction data includes a previous interaction number and a previous interaction keyword number, and the next interaction data includes a next interaction number and a next interaction keyword number. The determining module 630 is further configured to determine the first target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, if the current interaction number is greater than both the previous interaction number and the next interaction number, or if the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number.
In some exemplary embodiments of the present disclosure, the target time period further includes a second target time period; the determining module 630 is further configured to determine the second target time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, and the current interaction data is greater than both the previous interaction data and the next interaction data.
In some exemplary embodiments of the present disclosure, the marking module 640 is further configured to mark, in the progress bar of the video to be processed, a first target progress bar corresponding to the first target time period in a first manner, and to mark a second target progress bar corresponding to the second target time period in a second manner.
In some exemplary embodiments of the present disclosure, the current interaction data includes a current interaction number and a current interaction keyword number, the previous interaction data includes a previous interaction number and a previous interaction keyword number, and the next interaction data includes a next interaction number and a next interaction keyword number; the second target time period includes a first target sub-time period. The determining module 630 is further configured to determine the first target sub-time period according to the previous sliding window and the next sliding window in any of the following cases: the current resource data is greater than both the previous resource data and the next resource data, the current interaction number is greater than both the previous interaction number and the next interaction number, and the current interaction keyword number is less than or equal to the previous interaction keyword number or the next interaction keyword number; the current resource data is greater than both the previous resource data and the next resource data, the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number, and the current interaction number is less than or equal to the previous interaction number or the next interaction number; or the current interaction number is greater than both the previous interaction number and the next interaction number, the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number, and the current resource data is less than or equal to the previous resource data or the next resource data.
In some exemplary embodiments of the present disclosure, the second target time period further includes a second target sub-time period; the determining module 630 is further configured to determine the second target sub-time period according to the previous sliding window and the next sliding window if the current resource data is greater than both the previous resource data and the next resource data, the current interaction number is greater than both the previous interaction number and the next interaction number, and the current interaction keyword number is greater than both the previous interaction keyword number and the next interaction keyword number.
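The case analysis above effectively reduces to counting how many of the three signals (resource data, interaction number, interaction keyword number) are strict local maxima in the current window: one peaking signal yields the first target time period, exactly two the first target sub-time period, and all three the second target sub-time period. A minimal sketch under that reading (function and variable names are illustrative, not from the disclosure):

```python
def classify_window(cur, prev, nxt):
    """cur/prev/nxt: (resource, interaction_number, keyword_number) tuples
    for the current, previous, and next sliding windows.  Returns a label
    for the kind of target time period the current window triggers,
    or None if no signal peaks."""
    peaks = sum(c > p and c > n for c, p, n in zip(cur, prev, nxt))
    return {1: "first target time period",
            2: "first target sub-time period",
            3: "second target sub-time period"}.get(peaks)

# Example: resource data and interaction number peak, keyword number does not.
print(classify_window(cur=(10, 50, 4), prev=(6, 30, 5), nxt=(7, 20, 9)))
# → first target sub-time period
```

More peaking signals indicate a hotter moment, which is why the disclosure marks the corresponding progress-bar segments in increasingly distinct manners.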
In some exemplary embodiments of the present disclosure, the marking module 640 is further configured to mark, in the progress bar of the video to be processed, a first target progress bar corresponding to the first target time period in a first manner; to mark a second target progress bar corresponding to the first target sub-time period in a second manner; and to mark a third target progress bar corresponding to the second target sub-time period in a third manner.
In some exemplary embodiments of the present disclosure, the determining module 630 is further configured to determine the start time of the previous sliding window as the start time of the target time period, determine the end time of the next sliding window as the end time of the target time period, and determine the target time period from that start time and end time.
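Under this rule, the target time period simply spans from the start of the previous window to the end of the next one. A sketch with windows represented as (start, end) pairs in seconds (the sample values are illustrative):

```python
def target_time_period(prev_window, next_window):
    """prev_window/next_window: (start_s, end_s) of the previous and next
    sliding windows; the target period covers both, so the peak in the
    current window sits inside it."""
    start, _ = prev_window
    _, end = next_window
    return (start, end)

# Previous window 180-240 s, next window 210-270 s: target is 180-270 s.
print(target_time_period((180, 240), (210, 270)))  # → (180, 270)
```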
In some exemplary embodiments of the present disclosure, the current sliding window, the previous sliding window, and the next sliding window have the same window length; the start time of the current sliding window is the middle time of the previous sliding window, and the end time of the current sliding window is the middle time of the next sliding window.
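With equal window lengths and each window starting at the midpoint of the previous one, consecutive windows overlap by half their length. A sketch generating such windows over a video of a given duration (parameter names and values are illustrative):

```python
def sliding_windows(duration_s, window_s):
    """Yield (start, end) pairs with 50% overlap: each window starts at the
    midpoint of the previous one, so its end is the midpoint of the next."""
    step = window_s // 2  # half-window stride, integer seconds for simplicity
    start = 0
    while start + window_s <= duration_s:
        yield (start, start + window_s)
        start += step

print(list(sliding_windows(duration_s=180, window_s=60)))
# → [(0, 60), (30, 90), (60, 120), (90, 150), (120, 180)]
```

The overlap ensures that an activity peak falling near a window boundary is still fully contained in at least one window, so it cannot be split across two windows and missed by the comparison.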
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An electronic device 700 according to such an embodiment of the present disclosure is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 is embodied in the form of a general-purpose computing device. The components of the electronic device 700 may include, but are not limited to: at least one processing unit 710, at least one storage unit 720, a bus 730 connecting different system components (including the storage unit 720 and the processing unit 710), and a display unit 740.
The storage unit stores program code, which may be executed by the processing unit 710 so that the processing unit 710 performs the steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section of this specification. For example, the processing unit 710 may perform the various steps shown in fig. 2.
The storage unit 720 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 721 and/or a cache memory unit 722, and may further include a read-only memory unit (ROM) 723.
The storage unit 720 may also include a program/utility 724 having a set (at least one) of program modules 725, such program modules 725 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
Bus 730 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 770 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 700, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 700 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 750. Also, the electronic device 700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 760. As shown, the network adapter 760 communicates with the other modules of the electronic device 700 via the bus 730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of an apparatus to perform the above-described method is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the video processing method in the above-described embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video processing method, comprising:
acquiring a video to be processed;
respectively obtaining current operation feedback data of a current sliding window of the video to be processed, previous operation feedback data of a previous sliding window and next operation feedback data of a next sliding window, wherein the current sliding window is at least partially overlapped with the previous sliding window, and the current sliding window is at least partially overlapped with the next sliding window;
if the current operation feedback data is greater than the previous operation feedback data and the next operation feedback data, determining a target time period according to the previous sliding window and the next sliding window;
and marking a target video clip corresponding to the target time period in the video to be processed.
2. The video processing method of claim 1, wherein the current operation feedback data comprises current resource data and current interaction data of the current sliding window, the previous operation feedback data comprises previous resource data and previous interaction data of the previous sliding window, and the next operation feedback data comprises next resource data and next interaction data of the next sliding window; the target time period comprises a first target time period;
wherein, if the current operation feedback data is greater than the previous operation feedback data and the next operation feedback data, determining a target time period according to the previous sliding window and the next sliding window comprises:
if the current resource data is greater than the previous resource data and the next resource data, or the current interaction data is greater than the previous interaction data and the next interaction data, determining the first target time period according to the previous sliding window and the next sliding window.
3. The video processing method according to claim 2, wherein the current interaction data comprises a current interaction number and a current interaction keyword number, the previous interaction data comprises a previous interaction number and a previous interaction keyword number, and the next interaction data comprises a next interaction number and a next interaction keyword number;
if the current resource data is larger than the previous resource data and the next resource data; or, if the current interaction data is greater than the previous interaction data and the next interaction data, determining a first target time period according to the previous sliding window and the next sliding window, including:
if the current resource data is greater than the previous resource data and the next resource data, the current interaction number is greater than the previous interaction number and the next interaction number, or the number of the current interaction keywords is greater than the number of the previous interaction keywords and the number of the next interaction keywords, determining the first target time period according to the previous sliding window and the next sliding window.
4. The video processing method of claim 2, wherein the target time period further comprises a second target time period;
wherein, if the current operation feedback data is greater than the previous operation feedback data and the next operation feedback data, determining a target time period according to the previous sliding window and the next sliding window, further comprising:
and if the current resource data is larger than the previous resource data and the next resource data, and the current interaction data is larger than the previous interaction data and the next interaction data, determining a second target time period according to the previous sliding window and the next sliding window.
5. The video processing method according to claim 4, wherein the current interaction data comprises a current interaction number and a current interaction keyword number, the previous interaction data comprises a previous interaction number and a previous interaction keyword number, and the next interaction data comprises a next interaction number and a next interaction keyword number; the second target time period comprises a first target sub-time period;
wherein if the current resource data is greater than the previous resource data and the next resource data, and the current interaction data is greater than the previous interaction data and greater than the next interaction data, determining a second target time period according to the previous sliding window and the next sliding window, including:
if the current resource data is larger than the previous resource data and the next resource data, the current interaction number is larger than the previous interaction number and the next interaction number, and the current interaction keyword number is smaller than or equal to the previous interaction keyword number or the next interaction keyword number, determining the first target sub-time period according to the previous sliding window and the next sliding window;
if the current resource data is greater than the previous resource data and the next resource data, the number of the current interactive keywords is greater than the number of the previous interactive keywords and the number of the next interactive keywords, and the current interaction number is less than or equal to the previous interaction number or the next interaction number, determining the first target sub-time period according to the previous sliding window and the next sliding window;
and if the current interaction number is greater than the previous interaction number and the next interaction number, the number of the current interactive keywords is greater than the number of the previous interactive keywords and the number of the next interactive keywords, and the current resource data is less than or equal to the previous resource data or the next resource data, determining the first target sub-time period according to the previous sliding window and the next sliding window.
6. The video processing method according to claim 5, wherein the second target time period further comprises a second target sub-time period;
wherein, if the current resource data is greater than the previous resource data and the next resource data, and the current interaction data is greater than the previous interaction data and greater than the next interaction data, determining a second target time period according to the previous sliding window and the next sliding window, further comprising:
if the current resource data is greater than the previous resource data and the next resource data, the current interaction number is greater than the previous interaction number and the next interaction number, and the number of the current interactive keywords is greater than the number of the previous interactive keywords and the number of the next interactive keywords, determining the second target sub-time period according to the previous sliding window and the next sliding window.
7. A video processing apparatus, comprising:
an acquisition module configured to perform acquiring a video to be processed;
an obtaining module configured to perform obtaining current operation feedback data of a current sliding window of the video to be processed, previous operation feedback data of a previous sliding window, and next operation feedback data of a next sliding window, respectively, where the current sliding window and the previous sliding window are at least partially overlapped, and the current sliding window and the next sliding window are at least partially overlapped;
a determining module configured to determine a target time period according to the previous sliding window and the next sliding window if the current operation feedback data is greater than the previous operation feedback data and the next operation feedback data;
and the marking module is configured to mark the target video segment corresponding to the target time period in the video to be processed.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the executable instructions to implement the video processing method of any of claims 1 to 6.
9. A computer-readable storage medium whose instructions, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the video processing method of any of claims 1 to 6 when executed by a processor.
CN202210271174.6A 2022-03-18 2022-03-18 Video processing method and related equipment Pending CN114630141A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210271174.6A CN114630141A (en) 2022-03-18 2022-03-18 Video processing method and related equipment


Publications (1)

Publication Number Publication Date
Publication Number Publication Date
CN114630141A (en) 2022-06-14

Family

ID=81901774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210271174.6A Pending CN114630141A (en) 2022-03-18 2022-03-18 Video processing method and related equipment

Country Status (1)

Country Link
CN (1) CN114630141A (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104994425A (en) * 2015-06-30 2015-10-21 北京奇艺世纪科技有限公司 Video labeling method and device
CN107071587A (en) * 2017-04-25 2017-08-18 腾讯科技(深圳)有限公司 The acquisition methods and device of video segment
CN108307230A (en) * 2018-02-07 2018-07-20 北京奇艺世纪科技有限公司 A kind of extracting method and device of video highlight segment
CN108924576A (en) * 2018-07-10 2018-11-30 武汉斗鱼网络科技有限公司 A kind of video labeling method, device, equipment and medium
CN109547859A (en) * 2017-09-21 2019-03-29 腾讯科技(深圳)有限公司 The determination method and apparatus of video clip
CN109729435A (en) * 2017-10-27 2019-05-07 优酷网络技术(北京)有限公司 The extracting method and device of video clip
CN110019421A (en) * 2018-07-27 2019-07-16 山东大学 A kind of time series data classification method based on data characteristics segment
CN110234037A (en) * 2019-05-16 2019-09-13 北京百度网讯科技有限公司 Generation method and device, the computer equipment and readable medium of video clip
CN110248258A (en) * 2019-07-18 2019-09-17 腾讯科技(深圳)有限公司 Recommended method, device, storage medium and the computer equipment of video clip
CN111050205A (en) * 2019-12-13 2020-04-21 广州酷狗计算机科技有限公司 Video clip acquisition method, device, apparatus, storage medium, and program product
CN111385606A (en) * 2018-12-28 2020-07-07 Tcl集团股份有限公司 Video preview method and device and intelligent terminal
CN111479168A (en) * 2020-04-14 2020-07-31 腾讯科技(深圳)有限公司 Method, device, server and medium for marking multimedia content hot spot
CN111711839A (en) * 2020-05-27 2020-09-25 杭州云端文化创意有限公司 Film selection display method based on user interaction numerical value
CN112861750A (en) * 2021-02-22 2021-05-28 平安科技(深圳)有限公司 Video extraction method, device, equipment and medium based on inflection point detection
CN113747241A (en) * 2021-09-13 2021-12-03 深圳市易平方网络科技有限公司 Video clip intelligent editing method, device and terminal based on bullet screen statistics
CN113766282A (en) * 2021-10-20 2021-12-07 上海哔哩哔哩科技有限公司 Live video processing method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination