CN113784152A - Video processing method and storage medium - Google Patents

Video processing method and storage medium

Info

Publication number
CN113784152A
Authority
CN
China
Prior art keywords
video
target
candidate
video frame
evaluation index
Prior art date
Legal status
Pending
Application number
CN202110821055.9A
Other languages
Chinese (zh)
Inventor
杨涛
任沛然
谢宣松
Current Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202110821055.9A
Publication of CN113784152A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a video processing method and a storage medium. The method includes: acquiring a target video to be processed and extracting a plurality of candidate video frames from it; acquiring a target evaluation index of each candidate video frame, where the target evaluation index represents the quality of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and generating a video cover for the target video based on the at least one target video frame. The invention solves the technical problem of low video cover quality in the related art.

Description

Video processing method and storage medium
Technical Field
The present invention relates to the field of video processing, and in particular, to a video processing method and a storage medium.
Background
Nowadays, video plays an increasingly important role in people's lives. Faced with a massive volume of videos, platforms usually need a video cover to present and recommend the content of each video and attract users to click and watch. At present, video frames are commonly captured at random from a video and used as its cover image. However, because the frames are captured at random, their quality cannot be guaranteed, so the quality of the generated cover image is unstable and the quality of the video cover is difficult to ensure.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a video processing method and a storage medium, which at least solve the technical problem of low quality of video covers in the related art.
According to an aspect of an embodiment of the present invention, there is provided a video processing method, including: acquiring a target video to be processed, and extracting a plurality of candidate video frames from the target video; acquiring a target evaluation index of each candidate video frame, where the target evaluation index represents the quality of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and generating a video cover of the target video based on the at least one target video frame.
Optionally, determining a quality evaluation index of each candidate video frame based on the at least one piece of sub-video information includes: when there are multiple pieces of sub-video information, performing a weighted summation of the value of each piece of sub-video information with its corresponding weight to obtain the quality evaluation index of each candidate video frame, where each weight expresses the degree of influence of the corresponding piece of sub-video information on the quality evaluation index.
Optionally, the sub-video information includes at least one of: the exposure level of each candidate video frame; the degree of blur of each candidate video frame; the degree to which the content of each candidate video frame matches the target content; and parameters of the target object in each candidate video frame.
Optionally, selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame includes: ranking the plurality of candidate video frames based on the quality evaluation index of each candidate video frame, and selecting at least one target video frame from the ranked candidate video frames.
Optionally, editing the at least one target video frame includes at least one of: cropping each target video frame; erasing target display information in each target video frame; beautifying each target video frame; and converting each target video frame into a dynamic image.
Optionally, before extracting the plurality of candidate video frames from the target video, the method further includes preprocessing the target video; extracting the plurality of candidate video frames from the target video then means extracting them from the preprocessed target video.
Optionally, preprocessing the target video includes preprocessing it through at least one preprocessing module to obtain at least one preprocessing result, where the at least one preprocessing module corresponds one-to-one with the at least one preprocessing result, and each preprocessing module is established based on third requirement data expressing the requirements for preprocessing the target video.
Optionally, the method further includes: adjusting, in response to a third adjustment instruction, at least one target preprocessing module among the at least one preprocessing module.
Optionally, preprocessing the target video includes at least one of: capturing screenshots of the target video; detecting a target object in the target video; scoring images of the target video; segmenting the target video; and clustering the target video, as sketched below.
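A minimal sketch of such a pluggable preprocessing stage, in Python; the PreprocessModule class, the module names, and the placeholder functions are hypothetical illustrations of the one-module-per-result structure, not part of the claimed method:

```python
from typing import Callable, Dict, List

class PreprocessModule:
    """One preprocessing module, built here from a name and a function standing in
    for the 'third requirement data' that would define a real module."""
    def __init__(self, name: str, fn: Callable):
        self.name, self.fn = name, fn

    def run(self, video):
        return self.fn(video)

def preprocess(video, modules: List[PreprocessModule]) -> Dict[str, object]:
    # One result per module, corresponding one-to-one as the method describes.
    return {m.name: m.run(video) for m in modules}

# Illustrative registrations; the lambda bodies are placeholders.
modules = [
    PreprocessModule("screenshot", lambda v: f"frames of {v}"),
    PreprocessModule("object_detection", lambda v: f"objects in {v}"),
]
```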
According to another aspect of the embodiments of the present invention, there is also provided another video processing method, including: acquiring a target video to be processed in response to an image input instruction acting on an operation interface; and displaying, in response to a cover generation instruction acting on the operation interface, a video cover of the target video on the operation interface, where the video cover is generated from at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames based on the target evaluation index of each candidate video frame, the candidate video frames are extracted from the target video, the target evaluation index represents the quality of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames.
According to another aspect of the embodiments of the present invention, there is also provided another video processing method, including: acquiring a surveillance video of a road section and extracting a plurality of candidate video frames from the surveillance video, where the candidate video frames include information of vehicles traveling through the road section; acquiring a target evaluation index of each candidate video frame, where the target evaluation index represents the quality of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and generating a video cover of the surveillance video based on the at least one target video frame, where the video cover includes information of a target vehicle traveling through the road section.
According to another aspect of the embodiments of the present invention, there is also provided another video processing method, including: acquiring a teaching video and extracting a plurality of candidate video frames from the teaching video, where the candidate video frames include object information of different teaching objects, and the teaching objects include teachers and different types of teaching content; acquiring a target evaluation index of each candidate video frame, where the target evaluation index represents the quality of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and generating a video cover of the teaching video based on the at least one target video frame, where the video cover includes object information of the target teaching object.
According to another aspect of the embodiments of the present invention, there is also provided another video processing method, including: acquiring a live video from a live platform and extracting a plurality of candidate video frames from the live video; acquiring a target evaluation index of each candidate video frame, where the target evaluation index represents the quality of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; generating a video cover of the live video based on the at least one target video frame; and publishing the video cover of the live video to the live platform for display.
According to another aspect of the embodiments of the present invention, there is also provided another video processing method, including: a client acquires a target video to be processed; the client uploads the target video to a server; and the client receives a video cover of the target video returned by the server, where the video cover is generated by the server based on at least one target video frame, the at least one target video frame is selected by the server from a plurality of candidate video frames based on the target evaluation index of each candidate video frame, the candidate video frames are extracted from the target video by the server, the target evaluation index represents the quality of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames.
According to another aspect of the embodiments of the present invention, there is also provided another video processing method, including: acquiring a target video to be processed by calling a first interface, and extracting a plurality of candidate video frames from the target video, wherein the first interface comprises a first parameter, and the parameter value of the first parameter is the target video; acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality degree of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; generating a video cover of the target video based on the at least one target video frame; and outputting the video cover of the target video by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the video cover of the target video.
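A minimal sketch of this two-interface pattern, in Python; the function names, the `pipeline` parameter (standing in for the extract/score/select/generate steps described above), and the output path are illustrative assumptions:

```python
import cv2

def first_interface(target_video: str, pipeline):
    """First interface: the first parameter's value is the target video itself;
    `pipeline` is a hypothetical callable running the cover-generation steps."""
    return pipeline(target_video)

def second_interface(video_cover, out_path: str = "cover.jpg") -> None:
    """Second interface: the second parameter's value is the generated video cover."""
    cv2.imwrite(out_path, video_cover)
```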
According to another aspect of the embodiments of the present application, there is also provided a video processing apparatus, including: a first acquiring unit, configured to acquire a target video to be processed and extract a plurality of candidate video frames from the target video; a second acquiring unit, configured to acquire a target evaluation index of each candidate video frame, where the target evaluation index expresses the quality of the corresponding candidate video frame; a first selecting unit, configured to select at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and a first generating unit, configured to generate a video cover of the target video based on the at least one target video frame.
According to another aspect of the embodiments of the present invention, there is also provided another video processing apparatus, including: a third acquiring unit, configured to acquire a target video to be processed in response to an image input instruction acting on an operation interface; and a first display unit, configured to display, in response to a cover generation instruction acting on the operation interface, a video cover of the target video on the operation interface, where the video cover is generated from at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames based on the target evaluation index of each candidate video frame, the candidate video frames are extracted from the target video, the target evaluation index represents the quality of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames.
According to another aspect of the embodiments of the present invention, there is also provided another video processing apparatus, including: a fourth acquiring unit, configured to acquire a surveillance video of a road section and extract a plurality of candidate video frames from the surveillance video, where the candidate video frames include information of vehicles traveling through the road section; a fifth acquiring unit, configured to acquire a target evaluation index of each candidate video frame, where the target evaluation index indicates the quality of the corresponding candidate video frame; a second selecting unit, configured to select at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and a second generating unit, configured to generate a video cover of the surveillance video based on the at least one target video frame, where the video cover includes information of a target vehicle traveling through the road section.
According to another aspect of the embodiments of the present invention, there is also provided another video processing apparatus including: a sixth acquiring unit, configured to acquire a teaching video and extract a plurality of candidate video frames from the teaching video, where the plurality of candidate video frames include object information of different teaching objects, and the teaching objects include teachers and different types of teaching content; a seventh obtaining unit, configured to obtain a target evaluation index of each candidate video frame, where the target evaluation index is used to indicate a quality degree of the corresponding candidate video frame; the third selecting unit is used for selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a third generating unit configured to generate a video cover of the teaching video based on the at least one target video frame, wherein the video cover includes object information of the target teaching object.
According to another aspect of the embodiments of the present invention, there is also provided another video processing apparatus, including: an eighth acquiring unit, configured to acquire a live video from a live platform and extract a plurality of candidate video frames from the live video; a ninth acquiring unit, configured to acquire a target evaluation index of each candidate video frame, where the target evaluation index indicates the quality of the corresponding candidate video frame; a fourth selecting unit, configured to select at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and a fourth generating unit, configured to generate a video cover of the live video based on the at least one target video frame and publish it to the live platform for display.
According to another aspect of the embodiments of the present invention, there is also provided another video processing apparatus including: a tenth acquiring unit, configured to acquire a target video to be processed; the first uploading unit is used for uploading the target video to the server; the first receiving unit is used for receiving a video cover of a target video returned by the server, wherein the video cover is generated by the server based on at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames by the server based on a target evaluation index of each candidate video frame in the plurality of candidate video frames, the plurality of candidate video frames are extracted from the target video by the server, the target evaluation index is used for representing the quality degree of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the plurality of candidate video frames.
According to another aspect of the embodiments of the present invention, there is also provided another video processing apparatus including: an eleventh acquiring unit, configured to acquire a target video to be processed by calling a first interface, and extract multiple candidate video frames from the target video, where the first interface includes a first parameter, and a parameter value of the first parameter is the target video; a twelfth acquiring unit, configured to acquire a target evaluation index of each candidate video frame, where the target evaluation index is used to indicate a quality degree of the corresponding candidate video frame; the fifth selecting unit is used for selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a fifth generating unit for generating a video cover of the target video based on the at least one target video frame; and the first output unit is used for outputting the video cover of the target video by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the video cover of the target video.
In the embodiment of the invention, a target video to be processed can first be acquired and a plurality of candidate video frames extracted from the target video; a target evaluation index of each candidate video frame is acquired, where the target evaluation index represents the quality of the corresponding candidate video frame; at least one target video frame is selected from the candidate video frames based on the target evaluation index of each candidate video frame, where the target evaluation index of the at least one target video frame exceeds that of the remaining candidate video frames; and a video cover of the target video is generated based on the at least one target video frame, thereby improving the quality of the video cover.
It is easy to note that, for a target video to be processed, the quality of the extracted candidate video frames can be evaluated to obtain a target evaluation index for each candidate video frame; based on these indexes, high-quality target video frames can be obtained, and a video cover of the target video generated from them, so that the quality of the video cover can be effectively ensured.
Therefore, the technical problem that the quality of the video cover is low in the related art is solved through the scheme provided by the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a computer terminal (or mobile device) for implementing a video processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of a video processing method according to an embodiment of the present application;
FIG. 3a is a flow chart of a video processing method according to an embodiment of the present application;
FIG. 3b is an input video image according to an embodiment of the present application;
FIG. 3c is an output sub-video cover according to an embodiment of the present application;
FIG. 4 is a flow diagram of another video processing method according to an embodiment of the present application;
FIG. 5 is a flow diagram of another video processing method according to an embodiment of the present application;
FIG. 6 is a flow diagram of another video processing method according to an embodiment of the present application;
FIG. 7 is a flow diagram of another video processing method according to an embodiment of the present application;
FIG. 8 is a flow diagram of another video processing method according to an embodiment of the present application;
FIG. 9 is a flow diagram of another video processing method according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a video processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another video processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of another video processing apparatus according to an embodiment of the present application;
FIG. 13 is a schematic diagram of another video processing apparatus according to an embodiment of the present application;
FIG. 14 is a schematic diagram of another video processing apparatus according to an embodiment of the present application;
FIG. 15 is a schematic diagram of another video processing apparatus according to an embodiment of the present application;
FIG. 16 is a schematic diagram of another video processing apparatus according to an embodiment of the present application;
fig. 17 is a block diagram of a computer terminal according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms appearing in the description of the embodiments of the present application are explained as follows:
a video cover is a static or dynamic image displayed to the user when a video is not being played, used to indicate the video content and attract the user to click;
personalized distribution means generating different recommended content for different users according to each user's portrait (profile) by means of recommendation algorithms.
Example 1
In accordance with an embodiment of the present invention, there is provided a video processing method embodiment. It should be noted that the steps illustrated in the flowcharts of the accompanying drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in each flowchart, in some cases the steps illustrated or described may be performed in a different order.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware structure block diagram of a computer terminal (or mobile device) for implementing the video processing method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage devices corresponding to the video processing method in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implementing the above-mentioned video processing method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that, in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. It should be noted that fig. 1 is only one example, intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
Under the above operating environment, the present application provides a video processing method as shown in fig. 2. Fig. 2 is a flowchart of a video processing method according to embodiment 1 of the present application. As shown in fig. 2, the method may include the steps of:
step S202, a target video to be processed is obtained, and a plurality of candidate video frames are extracted from the target video.
The target video may be a video to be processed that was uploaded by a video producer, or a video to be recommended to users for viewing.
In an alternative embodiment, after a video producer uploads the target video, the target video may be processed to obtain a video cover for distribution. Different video covers may be displayed to different users when they browse to the target video. The target video can be acquired and processed before a user scrolls to it, so that when the user does reach the target video, its cover can be displayed to attract a click on the target video. Video frames may be captured directly from the target video and used as the plurality of candidate video frames.
In another alternative embodiment, the number of candidate video frames to extract and their positions in the target video may be set in advance. For example, the number may be set to 10 in advance, with the frames taken from the middle portion of the target video, as in the sketch below.
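A minimal sketch of this extraction step, assuming OpenCV is available; the frame count and the middle-portion sampling window are illustrative presets, not values fixed by the method:

```python
import cv2

def extract_candidate_frames(video_path: str, num_frames: int = 10):
    """Uniformly sample candidate frames from the middle portion of a video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    start, end = total // 4, 3 * total // 4     # illustrative middle window
    step = max((end - start) // num_frames, 1)
    frames = []
    for idx in range(start, end, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```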
In yet another alternative embodiment, the target video may be preprocessed, and a plurality of candidate video frames may be extracted from the preprocessed target video.
Step S204, acquiring a target evaluation index of each candidate video frame.
The target evaluation index is used for representing the quality degree of the corresponding candidate video frame.
The target evaluation index can be determined by processing the candidate video frame through a plurality of modules, such as exposure detection, blur detection, attractiveness scoring, quality scoring, face angle, face count, face size, and face position.
In an alternative embodiment, the target evaluation index may be expressed as a score, where a higher score indicates a better candidate video frame and a lower score a worse one. Alternatively, the scores of the individual modules may be added to obtain a total score for the target evaluation index, and the quality of the candidate video frame evaluated based on that total. It should be noted that the total score may be computed from the scores of any one or more of the modules, selected according to the user's requirements, which is not limited herein.
In another alternative embodiment, exposure detection may be performed on a candidate video frame: if the exposure is too high or too low, the frame's exposure score is low; if the exposure is in the normal range, the score is high. Blur detection may likewise be performed: the blurrier the candidate video frame, the lower its blur-detection score, and the sharper the frame, the higher the score. Face position may also be detected: if the face in a candidate video frame is occluded, the face-position score is low; if the face is fully visible in the picture, the score is high.
In another alternative embodiment, the target evaluation index may instead be expressed as a grade such as excellent, good, or poor; the specific representation of the target evaluation index is not limited herein. The target evaluation index can also be presented as the individual scores of the modules such as exposure detection, blur detection, attractiveness scoring, quality scoring, face angle, face count, face size, and face position. With the target evaluation index, the candidate video frames can be scored, sorted, and filtered to obtain the target video frames, ensuring the quality of the video cover produced from them.
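A minimal sketch of two such sub-scores, assuming OpenCV; the normalization constants are illustrative choices, not values given by the method:

```python
import cv2

def exposure_score(frame) -> float:
    """Highest when mean brightness is mid-range; low for over- or under-exposure."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean = gray.mean() / 255.0            # 0 = black, 1 = white
    return 1.0 - 2.0 * abs(mean - 0.5)    # 1 at mid-gray, 0 at the extremes

def blur_score(frame) -> float:
    """Sharpness via variance of the Laplacian: blurrier frames score lower."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    variance = cv2.Laplacian(gray, cv2.CV_64F).var()
    return min(variance / 1000.0, 1.0)    # illustrative normalization cap
```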
Step S206, based on the target evaluation index of each candidate video frame, at least one target video frame is selected from the candidate video frames.
Wherein the target evaluation index of at least one target video frame exceeds the target evaluation index of the candidate video frames except for the at least one target video frame in the plurality of candidate video frames.
In an alternative embodiment, the target evaluation index may be represented by a specific value, and the convention may be that a higher index value (evaluation value) indicates a better candidate video frame and a lower value a worse one; in that case, the target evaluation index of the at least one target video frame exceeding that of the remaining candidate video frames means that the index value of the at least one target video frame is higher than the index values of the other candidate video frames. The opposite convention is also possible, in which a higher index value indicates a worse candidate video frame and a lower value a better one; in that case, exceeding means that the index value of the at least one target video frame is lower than the index values of the other candidate video frames.
In another alternative embodiment, the most attractive of the plurality of candidate video frames may be determined according to the target evaluation index of each candidate video frame and taken as the target video frame. Target video frames may also be determined for different users according to the target evaluation index of each candidate video frame. For example, if user A prefers videos in which the people shown score highly on attractiveness, the candidate video frame with the highest attractiveness score in the target evaluation index may be determined and taken as the target video frame, and the video cover of the target video determined from that frame, so as to match user A's preference and attract user A to click the cover.
In another alternative embodiment, after a video producer uploads a target video, the target video may be processed to obtain a plurality of candidate video frames; before a user scrolls to the target video, the user's portrait may be determined, and at least one target video frame selected from the candidate video frames according to their target evaluation indexes and the user portrait, so that the video cover generated from the at least one target video frame can attract that user to click. The user portrait can be a tagged user model abstracted from information such as occupation, name, user preferences, living habits, and user behavior. Determining the user portrait is in effect tagging the user, which is a highly refined signature derived from analyzing the user's attribute information. By tagging, the user can be described with highly generalized, easily understood features that are convenient for computer processing.
In step S208, a video cover of the target video is generated based on the at least one target video frame.
The video cover may be a dynamic cover formed by combining a plurality of target video frames, where the frames in the dynamic cover are displayed at a preset time interval that can be set by the user. The video cover may also be selected at random from the at least one target video frame, or selected from the at least one target video frame based on a user's portrait.
In an alternative embodiment, after a video producer uploads a target video, the target video may be processed, a plurality of candidate video frames extracted, and at least one target video frame selected from them. When a user is about to scroll to the target video, the target video frame that best matches the user's portrait may be determined from the selected target video frames, and the video cover of the target video generated from it. Furthermore, a preset number of the selected target video frames ranking highest in matching degree with the user portrait may be determined, and a dynamic video cover of the target video generated from them, with the frames displayed at a preset time interval.
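A minimal sketch of assembling such a dynamic cover, assuming Pillow and OpenCV-style BGR frames; the output path and display interval are illustrative:

```python
import numpy as np
from PIL import Image

def make_dynamic_cover(frames, out_path="cover.gif", interval_ms=500):
    """Combine several target frames into an animated cover shown at a preset interval."""
    images = [Image.fromarray(np.ascontiguousarray(f[:, :, ::-1]))  # BGR -> RGB
              for f in frames]
    images[0].save(out_path, save_all=True, append_images=images[1:],
                   duration=interval_ms, loop=0)
```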
Through the above steps, a target video to be processed can first be acquired and a plurality of candidate video frames extracted from it; a target evaluation index of each candidate video frame is acquired, where the index represents the quality of the corresponding frame; at least one target video frame whose target evaluation index exceeds that of the remaining candidate video frames is selected; and a video cover of the target video is generated from the at least one target video frame, thereby improving the quality of the video cover. It is easy to note that, for a target video to be processed, the extracted candidate video frames can be evaluated to obtain a target evaluation index for each one; based on these indexes, high-quality target video frames can be obtained and the video cover generated from them, so that the quality of the video cover can be effectively ensured. The scheme provided by the present application therefore solves the technical problem of low video cover quality in the related art.
In the above embodiments of the present application, obtaining the quality evaluation index of each candidate video frame includes: detecting the video information of each candidate video frame to obtain its quality evaluation index.
The video information of a candidate video frame may be parameters describing the frame's exposure, its degree of blur, the degree to which it matches the video content, and the target object in the frame.
In an alternative embodiment, the quality evaluation index of each candidate video frame may be obtained by detecting the video information of each candidate video frame and performing a weighted summation over that information.
In the above embodiments of the present application, detecting video information of each candidate video frame to obtain a quality evaluation index of each candidate video frame includes: detecting sub-video information in the video information through at least one detection module respectively to obtain at least one piece of sub-video information, wherein the at least one detection module corresponds to the at least one piece of sub-video information one to one, each detection module is established based on first requirement data, and the first requirement data is used for expressing the requirement for detecting each piece of sub-video information; a quality assessment indicator for each candidate video frame is determined based on the at least one sub-video information.
In the above embodiments of the present application, the sub-video information includes at least one of: the exposure level of each candidate video frame; the degree of blur of each candidate video frame; the degree to which the content of each candidate video frame matches the target content; and parameters of the target object in each candidate video frame.
The at least one detection module may include an exposure detection module for detecting the exposure of each candidate video frame; a blur detection module for detecting the degree of blur of each candidate video frame; and a target object detection module for detecting the target object in each candidate video frame and obtaining its parameters. The target object may be a face appearing in the candidate video frame, and its parameters may be the face angle, the number of faces, the face size, and the face position. The first requirement data may be derived from a detection requirement: for example, if the blur of candidate video frames needs to be detected, first requirement data for blur detection can be generated and a blur detection module established from it; if exposure needs to be detected, first requirement data for exposure detection can be generated and an exposure detection module established from it; and if the parameters of the target object need to be detected, first requirement data for those parameters can be generated and a target object detection module established from it. Each module is constructed, added, or deleted according to actual requirements, which strengthens the controllability of the cover algorithm and ensures the efficiency and quality of its output, as in the sketch below.
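A minimal sketch of one such detection module, assuming OpenCV; the Haar cascade is an illustrative stand-in for whatever face model a real system would use, and the size heuristic is hypothetical:

```python
import cv2

class FaceDetector:
    """Detection module for target-object (face) parameters: count and relative size."""
    name = "face"

    def __init__(self):
        self._cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def score(self, frame) -> float:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = self._cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return 0.0
        # Reward a reasonably large dominant face (illustrative heuristic).
        largest = max(w * h for (x, y, w, h) in faces)
        return min(largest / float(gray.shape[0] * gray.shape[1]) * 4.0, 1.0)
```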
In another alternative embodiment, the quality evaluation index of each candidate video frame may be determined based on one or more pieces of sub-video information, selected according to the user's requirements; by establishing detection modules from the first requirement data, the modules can be customized and optimized to meet the detection requirements.
In the above embodiment of the present application, the method further includes: adjusting, in response to a first adjustment instruction, at least one target detection module among the at least one detection module.
The first adjustment instruction may be used to add a target detection module, delete a target detection module, or adjust the order of the target detection modules; that is, the first adjustment instruction includes at least one of an adding instruction, a deleting instruction, and an order-adjusting instruction.
In response to the adding instruction, at least one target detection module may be added to the at least one detection module; in response to the deleting instruction, at least one target detection module may be deleted from the at least one detection module; and in response to the order-adjusting instruction, the order of at least one target detection module among the at least one detection module may be adjusted. By adjusting the target detection modules in response to the first adjustment instruction, the module set can be constructed in a personalized manner, ensuring that the algorithm's output meets the user's real requirements, as in the sketch below.
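A minimal sketch of such an adjustable module pipeline, in Python; the class and method names are hypothetical, and a module only needs a `name` attribute:

```python
class DetectionPipeline:
    """Ordered, adjustable collection of detection modules."""
    def __init__(self, modules=None):
        self.modules = list(modules or [])

    def add(self, module):                          # adding instruction
        self.modules.append(module)

    def delete(self, name):                         # deleting instruction
        self.modules = [m for m in self.modules if m.name != name]

    def reorder(self, names):                       # order-adjusting instruction
        by_name = {m.name: m for m in self.modules}
        self.modules = [by_name[n] for n in names]
```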
In the above embodiments of the present application, determining the quality evaluation index of each candidate video frame based on the at least one piece of sub-video information includes: when there are multiple pieces of sub-video information, performing a weighted summation of the value of each piece of sub-video information with its corresponding weight to obtain the quality evaluation index of each candidate video frame, where each weight expresses the degree of influence of the corresponding piece of sub-video information on the quality evaluation index.
The value of each piece of sub-video information may be obtained from the detection module corresponding to that information, and the weight of each piece may be set in advance. The higher the weight of a piece of sub-video information, the greater its influence on the quality evaluation index of the candidate video frame; the lower the weight, the smaller its influence.
In an optional embodiment, the weight of each piece of sub-video information may be set according to the user's requirements, with important sub-video information weighted higher and unimportant information weighted lower; the value of each piece of sub-video information is then weighted and summed with its corresponding weight, so that the resulting quality evaluation index can select target video frames that meet the user's requirements, as in the sketch below.
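A minimal sketch of this weighted summation, in Python; the weight values and signal names are illustrative, not prescribed by the method:

```python
def quality_index(sub_scores: dict, weights: dict) -> float:
    """Weighted sum of sub-video information values; each weight reflects how much
    that signal influences the frame's quality evaluation index."""
    return sum(weights.get(name, 0.0) * value for name, value in sub_scores.items())

# Illustrative, user-tunable weights: blur matters most here, exposure least.
weights = {"blur": 0.5, "face": 0.3, "exposure": 0.2}
print(quality_index({"blur": 0.9, "face": 0.4, "exposure": 0.8}, weights))  # ≈ 0.73
```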
In the above embodiments of the present application, selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame includes: ranking the plurality of candidate video frames based on the quality evaluation index of each candidate video frame, and selecting at least one target video frame from the ranked candidate video frames.
In an alternative embodiment, the plurality of candidate video frames may be ranked in descending order of their quality evaluation indexes, and one or more of the top-ranked frames selected as target video frames, so as to ensure the quality of the generated video cover.
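A minimal sketch of this ranking-and-selection step, in Python; the function name and the default of three frames are illustrative:

```python
def select_target_frames(frames, scores, k=3):
    """Sort candidates by quality evaluation index (descending) and keep the top k."""
    order = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)
    return [frames[i] for i in order[:k]]
```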
In the above embodiments of the present application, generating a video cover of the target video based on the at least one target video frame includes: editing each of the at least one target video frame to obtain at least one sub-video cover of the video cover, wherein the at least one target video frame corresponds one to one with the at least one sub-video cover.
In an optional embodiment, after the at least one target video frame is obtained, it may be edited, for example by beautifying the target video frame, erasing subtitles in it, cropping it to a given aspect ratio, trimming its black borders, erasing station logos in it, or dynamically compositing at least one target video frame, thereby obtaining at least one sub-video cover of the video cover.
In the above embodiments of the present application, editing each of the at least one target video frame to obtain at least one sub-video cover of the video cover includes: editing each target video frame through at least one editing module to obtain the at least one sub-video cover, wherein the at least one editing module corresponds one to one with the at least one sub-video cover, and each editing module is established based on second requirement data expressing the requirement for editing each target video frame. Since each module is constructed, added, and deleted according to actual requirements, the controllability of the cover algorithm is enhanced, and the output efficiency and effect of the algorithm are ensured.
In the above embodiments of the present application, editing the at least one target video frame includes at least one of: cropping each target video frame; erasing target display information in each target video frame; beautifying each target video frame; and generating a dynamic image from the at least one target video frame.
The editing module may include a cropping module for cropping each video frame to obtain at least one sub-video cover; cropping may adjust the aspect ratio of the video frame or trim its black borders. The editing module may further include an erasing module for erasing the target display information of each video frame, where the target display information may be station logos, subtitles, and the like; a beautification module for beautifying each video frame; and a dynamic module for dynamically processing each video frame to obtain at least one dynamic sub-video cover.
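As one hedged example of what a cropping module might do, the following sketch center-crops a frame to a target aspect ratio; the function name and the default 16:9 ratio are assumptions, and the erasing, beautification, and dynamic modules would be separate callables of the same shape:

```python
import numpy as np

def crop_to_ratio(frame: np.ndarray, ratio: float = 16 / 9) -> np.ndarray:
    """Center-crop an (H, W, C) frame to the given width/height ratio."""
    h, w = frame.shape[:2]
    if w / h > ratio:               # frame too wide: trim equal amounts from both sides
        new_w = int(h * ratio)
        x0 = (w - new_w) // 2
        return frame[:, x0:x0 + new_w]
    new_h = int(w / ratio)          # frame too tall: trim top and bottom
    y0 = (h - new_h) // 2
    return frame[y0:y0 + new_h]

# Usage: a 1080x1440 frame cropped to 16:9 keeps all columns and 810 rows.
frame = np.zeros((1080, 1440, 3), dtype=np.uint8)
print(crop_to_ratio(frame).shape)  # (810, 1440, 3)
```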
In the above embodiment of the present application, the method further includes: and responding to the second adjusting instruction, and adjusting at least one target editing module in the at least one editing module.
The second requirement data may be derived from display requirements. For example, if the cropped sub-video cover needs to be displayed, second requirement data for displaying the cropped sub-video cover may be generated and the cropping module established based on it; if the sub-video cover with target display information erased needs to be displayed, corresponding second requirement data may be generated and the erasing module established based on it; if the beautified sub-video cover needs to be displayed, corresponding second requirement data may be generated and the beautification module established based on it; and if the dynamic sub-video cover needs to be displayed, corresponding second requirement data may be generated and the dynamic module established based on it.
In an alternative embodiment, the editing module established by the second requirement data can be customized and optimized according to requirements, so that the editing requirements are met.
The second adjustment instruction may be used to add a target editing module, delete a target editing module, or adjust the order of the target editing modules. The second adjustment instruction includes at least one of the following: an addition instruction, a deletion instruction, and an order adjustment instruction.
In response to the addition instruction, at least one target editing module may be added to the at least one editing module; in response to the deletion instruction, at least one target editing module may be deleted from the at least one editing module; and in response to the order adjustment instruction, the order of at least one target editing module among the at least one editing module may be adjusted.
In an optional embodiment, by adjusting at least one target editing module in response to the second adjustment instruction, the modules can be assembled in a personalized manner, ensuring that the algorithm output meets the real requirements of the user.
In the above embodiments of the present application, after editing the at least one target video frame to obtain at least one sub-video cover of the video cover, the method further includes: acquiring user portrait data; determining tag information that matches the user portrait data; and distributing the sub-video cover identified by the tag information to a client corresponding to the user portrait data.
The user portrait data may be user preference data and the like; for example, the user preference data may show that the user's favorite pets are cats and dogs. The tag information is used to describe the video content of the target video; for example, if the video content of the target video involves cats or dogs, the corresponding tag information may be a pet tag. If the user portrait data shows a preference for pets such as cats and dogs, and the tag information of a sub-video cover contains a pet tag, the user portrait data is said to match the tag information of that sub-video cover.
In an optional embodiment, tag information matching the user portrait data may be determined, and the sub-video cover identified by that tag information distributed to the client corresponding to the user portrait data, so that the sub-video cover is displayed when the user browses the target video, attracting the user to click. The related art faces the problems that the number of covers for a single video is limited, so users easily suffer aesthetic fatigue, and that videos lacking tags and portraits are difficult to distribute in a personalized way; by distributing video covers for the at least one target video based on user portraits, covers can be distributed according to each user's preferences, thereby attracting clicks.
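A toy sketch of this tag-matching distribution step; the data layout (cover ids mapped to tag sets) is an assumption of the sketch, not of the application:

```python
from typing import Dict, List, Set

def match_covers(user_tags: Set[str], covers: Dict[str, Set[str]]) -> List[str]:
    """Return ids of sub-video covers whose tag information intersects the user's tags."""
    return [cover_id for cover_id, tags in covers.items() if tags & user_tags]

# Hypothetical data: the user portrait shows a preference for pets.
covers = {"cover_cat": {"pet", "cat"}, "cover_food": {"food"}}
print(match_covers({"pet"}, covers))  # ['cover_cat']
```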
In the above embodiments of the present application, before extracting the plurality of candidate video frames from the target video, the method further includes: preprocessing the target video. Extracting the plurality of candidate video frames from the target video then includes: extracting the plurality of candidate video frames from the preprocessed target video.
The preprocessing includes video screenshot, face detection, image scoring, shot segmentation, image clustering, and the like.
In an optional embodiment, taking screenshots of the target video yields a plurality of video frames; face detection then retains the frames containing faces; image scoring selects a number of higher-scoring frames; shot segmentation splits those frames by shot; finally, image clustering groups frames of the same category together, and one video frame is selected from each group as a candidate video frame. This avoids similar covers among the candidates, greatly reduces the number of candidate video frames, and improves the efficiency of selecting target video frames.
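The following sketch illustrates the first two stages of this chain (frame sampling and face detection) with OpenCV; the sampling step of 30 frames is an assumed value, and the scoring, shot segmentation, and clustering stages are only indicated in comments:

```python
import cv2

def extract_candidates(video_path: str, step: int = 30):
    """Sample frames from the video and keep only those containing at least one face.

    Image scoring, shot segmentation, and clustering would follow the same
    per-frame pattern and are omitted here for brevity.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    candidates, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:                                  # video screenshot
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if len(detector.detectMultiScale(gray)) > 0:       # face detection
                candidates.append(frame)
        index += 1
    capture.release()
    return candidates
```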
In the above embodiments of the present application, preprocessing the target video includes: preprocessing the target video through at least one preprocessing module to obtain at least one preprocessing result, wherein the at least one preprocessing module corresponds one to one with the at least one preprocessing result, and each preprocessing module is established based on third requirement data expressing the requirement for preprocessing the target video.
The preprocessing module may include a screenshot module for taking video screenshots of the target video to obtain a plurality of video frames; a face detection module for detecting faces in the captured video frames to obtain the frames containing faces; a scoring module for scoring the video frames to obtain the higher-scoring frames; a shot segmentation module for segmenting the shots in the video frames to obtain the segmented frames; and an image clustering module for clustering the images in the video frames to obtain the clustered frames.
The third requirement data may be derived from the preprocessing requirements. For example, if the target video needs video-screenshot preprocessing, third requirement data for video screenshots may be generated and the screenshot module established based on it; if face detection is needed, third requirement data for face detection may be generated and the face detection module established based on it; and likewise for the scoring module, the shot segmentation module, and the image clustering module. Since each module is constructed, added, and deleted according to actual requirements, the controllability of the cover algorithm is enhanced, and the output efficiency and effect of the algorithm are ensured.
In an alternative embodiment, the pre-processing module established by the third requirement data can be customized and optimized according to requirements, so that the pre-processing requirements are met.
In the above embodiment of the present application, the method further includes: and responding to the third adjusting instruction, and adjusting at least one target preprocessing module in the at least one preprocessing module.
The third adjustment instruction may be used to add a target preprocessing module, delete a target preprocessing module, or adjust the order of the target preprocessing modules. The third adjustment instruction includes at least one of the following: an addition instruction, a deletion instruction, and an order adjustment instruction.
In response to the addition instruction, at least one target preprocessing module may be added to the at least one preprocessing module; in response to the deletion instruction, at least one target preprocessing module may be deleted from the at least one preprocessing module; and in response to the order adjustment instruction, the order of at least one target preprocessing module among the at least one preprocessing module may be adjusted.
In an optional embodiment, by adjusting at least one target preprocessing module in response to the third adjustment instruction, the modules can be assembled in a personalized manner, ensuring that the algorithm output meets the real requirements of the user.
In the above embodiments of the present application, preprocessing the target video includes at least one of: performing screenshot processing on the target video; detecting a target object in the target video; scoring images of the target video; performing segmentation processing on the target video; and clustering the target video.

In an optional embodiment, screenshot processing of the target video yields a plurality of video frames; detecting the target object in the video frames yields the frames containing the target object; scoring the images of the target video yields the higher-scoring frames; segmenting the target video yields frames containing high-quality pictures; and clustering the target video allows one video frame to be selected from each clustered category as a candidate video frame, reducing the repetition rate among the candidate video frames and saving computing resources.
A preferred embodiment of the present application will now be described in detail with reference to fig. 3a to 3c. The method may be executed by a mobile terminal or a server; in this embodiment of the present application, execution by the server is taken as an example.
Fig. 3a shows a flow chart of a video processing method, which comprises the following steps:
step S301, inputting a video;
fig. 3b shows an input video image.
Step S302, preprocessing an input video to obtain a plurality of candidate video frames;
the preprocessing comprises image clustering, shot segmentation, image scoring, face detection, video screenshot and the like.
This step prepares data for the subsequent steps; clustering avoids similar covers, so the number of candidate cover frames is greatly reduced without losing key shot information.
Step S303, performing quality evaluation on a plurality of candidate video frames;
The quality evaluation may cover exposure detection, blur detection, attractiveness scoring, quality scoring, and the angle, number, size, and position of faces, so as to obtain the target video frames; the quality evaluation further screens and ranks the candidate frames so that preferred frames can be selected.
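As one plausible realization of the blur-detection aspect (not necessarily the one used by this application), the variance of the Laplacian is a common sharpness proxy:

```python
import cv2
import numpy as np

def blur_score(frame: np.ndarray) -> float:
    """Variance of the Laplacian: higher means sharper, lower means blurrier."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

# A uniform image has no edges at all, so its score is exactly zero.
print(blur_score(np.zeros((64, 64, 3), dtype=np.uint8)))  # 0.0
```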
Step S304, post-processing the target video frame to obtain at least one sub-video cover;
the post-processing may include dynamic image generation, comprehensive scoring, expression recognition, image beautification, station caption erasure, subtitle erasure, scale clipping, black edge clipping, and the like.
Step S305, at least one sub-video cover is distributed to the client of the user in a personalized mode according to the user portrait.
Fig. 3c shows the output sub-video cover.
Through the above steps, the sub-video cover can be displayed when the user browses the target video, attracting the user to click.
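Assuming the sketch functions introduced earlier (`extract_candidates`, `quality_evaluation_index`, `select_target_frames`, `crop_to_ratio`), steps S301 to S304 could be composed as below; `evaluate` and `WEIGHTS` are placeholders of this sketch, and step S305 corresponds to the `match_covers` sketch shown earlier:

```python
WEIGHTS = {"exposure": 0.2, "blur": 0.3, "face": 0.5}  # hypothetical weights

def evaluate(frame) -> dict:
    # Hypothetical per-frame scorer; a real system would run the detection modules here.
    return {"exposure": 1.0, "blur": 1.0, "face": 1.0}

def generate_covers(video_path: str, k: int = 3):
    """Compose the earlier sketches into the S301-S304 flow (illustrative only)."""
    frames = extract_candidates(video_path)                                 # S302 preprocessing
    indexes = [quality_evaluation_index(evaluate(f), WEIGHTS) for f in frames]  # S303 evaluation
    targets = select_target_frames(frames, indexes, k=k)                    # S303 selection
    return [crop_to_ratio(f) for f in targets]                              # S304 post-processing
```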
Example 2
There is also provided, in accordance with an embodiment of the present application, a video processing method embodiment, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Fig. 4 is a flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 4, the method may include the steps of:
and step S402, responding to an image input instruction acting on the operation interface, and acquiring a target video to be processed.
The operation interface may be an operation interface of a terminal device such as a computer.
In an optional embodiment, the user may click a preset control in the operation interface so that the control generates an image input instruction, and the user may then input the target video via the generated image input instruction.
And S404, responding to a cover generation instruction acted on the operation interface, and displaying a video cover of the target video on the operation interface.
The video cover is generated by at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames based on a target evaluation index of each candidate video frame in the candidate video frames, the candidate video frames are extracted from the target video, the target evaluation index is used for representing the quality degree of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 3
There is also provided, in accordance with an embodiment of the present application, a video processing method embodiment, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Fig. 5 is a flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 5, the method may include the steps of:
step S502, acquiring a monitoring video of a road section, and extracting a plurality of candidate video frames from the monitoring video.
Wherein the plurality of candidate video frames include information of vehicles traveling through the road segment.
The monitoring videos of the road sections may be obtained from a preset monitoring video database, where the preset monitoring video database may include monitoring videos of a plurality of road sections.
Step S504, the target evaluation index of each candidate video frame is obtained.
The target evaluation index is used for representing the quality degree of the corresponding candidate video frame.
Step S506, selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame.
Wherein the target evaluation index of at least one target video frame exceeds the target evaluation index of the candidate video frames except for the at least one target video frame in the plurality of candidate video frames.
Step S508, generating a video cover of the surveillance video based on the at least one target video frame.
Wherein the video cover includes information of the target vehicle traveling through the road segment.
It should be noted that the preferred embodiments mentioned in the above examples of the present application are the same as the schemes and implementation procedures provided in example 1, but are not limited to the schemes and implementation procedures provided in example 1.
Example 4
There is also provided, in accordance with an embodiment of the present application, a video processing method embodiment, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Fig. 6 is a flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 6, the method may include the steps of:
step S602, a teaching video is obtained, and a plurality of candidate video frames are extracted from the teaching video.
The plurality of candidate video frames comprise object information of different teaching objects, and the teaching objects comprise teachers and teaching contents of different types.
The teaching video can be obtained from a preset teaching video database, wherein the preset teaching video database can store teaching videos of a plurality of teachers and can also store various teaching videos.
Step S604, a target evaluation index of each candidate video frame is obtained.
The target evaluation index is used for representing the quality degree of the corresponding candidate video frame.
Step S606, selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame.
Wherein the target evaluation index of at least one target video frame exceeds the target evaluation index of the candidate video frames except for the at least one target video frame in the plurality of candidate video frames.
Step S608, a video cover of the teaching video is generated based on the at least one target video frame.
Wherein the video cover includes object information of the target teaching object.
It should be noted that the preferred embodiments mentioned in the above examples of the present application are the same as the schemes and implementation procedures provided in example 1, but are not limited to the schemes and implementation procedures provided in example 1.
Example 5
There is also provided, in accordance with an embodiment of the present application, a video processing method embodiment, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Fig. 7 is a flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 7, the method may include the steps of:
step S702, acquiring a live broadcast video from a live broadcast platform, and extracting a plurality of candidate video frames from the live broadcast video.
The live video may be a video currently being streamed on the live platform, or a previously recorded video stored by the live platform.
Step S704, a target evaluation index of each candidate video frame is obtained.
The target evaluation index is used for representing the quality degree of the corresponding candidate video frame.
Step S706, selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame.
Wherein the target evaluation index of at least one target video frame exceeds the target evaluation index of the candidate video frames except for the at least one target video frame in the plurality of candidate video frames.
Step S708, a video cover of the live video is generated based on the at least one target video frame.
And step S710, issuing a video cover of the live video to a live platform for displaying.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 6
There is also provided, in accordance with an embodiment of the present application, a video processing method embodiment, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Fig. 8 is a flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 8, the method may include the steps of:
step S802, the client acquires a target video to be processed.
Step S804, the client uploads the target video to the server.
The server may be a cloud server.
In an optional embodiment, in order to better process the target video, the acquired target video may be transmitted to a corresponding processing device, for example directly to the user's computer terminal (e.g., a laptop or personal computer), or through the user's computer terminal to a cloud server. It should be noted that, since processing the target video requires a large amount of computing resources, the embodiment of the present application takes a cloud server as the processing device by way of example.
For example, to make it convenient for the user to upload the target video, an interactive interface may be provided: the user acquires the target video to be uploaded by clicking a "select image" control, and then uploads it to the cloud server by clicking an "upload" control. In addition, to let the user confirm that the video uploaded to the cloud server is the required target video, the selected target video may be displayed in an image display area; after confirming that it is correct, the user clicks the "upload" control to upload it.
In step S806, the client receives the video cover of the target video returned by the server.
The video cover is generated by the server based on at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames by the server based on a target evaluation index of each candidate video frame in the candidate video frames, the candidate video frames are extracted from the target video by the server, the target evaluation index is used for representing the quality degree of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 7
There is also provided, in accordance with an embodiment of the present application, a video processing method embodiment, it should be noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Fig. 9 is a flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 9, the method may include the steps of:
step S902, obtaining a target video to be processed by calling a first interface, and extracting a plurality of candidate video frames from the target video.
The first interface comprises a first parameter, and the parameter value of the first parameter is the target video.
The first interface in the above step may be an interface for data interaction between the cloud server and the client. The client may pass the target video and its corresponding first tag into the interface function as two parameters, thereby uploading the target video to the cloud server.
Step S904, a target evaluation index of each candidate video frame is obtained.
The target evaluation index is used for representing the quality degree of the corresponding candidate video frame.
Step S906, selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame.
Wherein the target evaluation index of at least one target video frame exceeds the target evaluation index of the candidate video frames except for the at least one target video frame in the plurality of candidate video frames.
Step S908 generates a video cover of the target video based on the at least one target video frame.
In step S910, a video cover of the target video is output by calling the second interface.
The second interface comprises a second parameter, and the parameter value of the second parameter is the video cover of the target video.
The second interface in the above step may likewise be an interface for data interaction between the cloud server and the client. The cloud server may pass the generated result into the interface function as a parameter, thereby delivering the generated video cover to the client.
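A toy stand-in for the two interfaces, showing only the parameter shapes described above; the class name, method bodies, and data values are assumptions of the sketch:

```python
from typing import Any, Dict

class CoverService:
    """Illustrative stand-in for the server side of the two data-interaction interfaces."""

    def first_interface(self, target_video: Any, first_tag: str) -> None:
        # The first parameter's value is the target video itself, as described above.
        self.target_video, self.first_tag = target_video, first_tag

    def second_interface(self) -> Dict[str, Any]:
        # The second parameter's value is the generated video cover, returned to the client.
        return {"tag": self.first_tag, "frames": self.target_video[:1]}

service = CoverService()
service.first_interface(["frame0", "frame1"], "demo")
print(service.second_interface())  # {'tag': 'demo', 'frames': ['frame0']}
```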
It should be noted that the preferred embodiments mentioned in the above examples of the present application are the same as the schemes and implementation procedures provided in example 1, but are not limited to the schemes and implementation procedures provided in example 1.
Example 8
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the video processing method, as shown in fig. 10, the apparatus 1000 includes: a first acquisition unit 1002, a second acquisition unit 1004, a first selection unit 1006, and a first generation unit 1008.
The first acquisition unit is used for acquiring a target video to be processed and extracting a plurality of candidate video frames from the target video; the second acquisition unit is used for acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for expressing the quality degree of the corresponding candidate video frame; the first selecting unit is used for selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; the first generating unit is used for generating a video cover of the target video based on at least one target video frame.
It should be noted here that the first acquiring unit 1002, the second acquiring unit 1004, the first selecting unit 1006, and the first generating unit 1008 correspond to steps S202 to S208 in embodiment 1, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
In the above embodiments of the present application, the second obtaining unit includes: a first detection module.
The first detection module is used for detecting the video information of each candidate video frame to obtain the quality evaluation index of each candidate video frame.
In the above embodiments of the present application, the first detecting module includes: the device comprises a first obtaining submodule and a first determining submodule.
The first obtaining submodule is used for respectively detecting sub-video information in the video information through at least one detection module to obtain at least one piece of sub-video information, wherein the at least one detection module corresponds to the at least one piece of sub-video information one to one, each detection module is established based on first requirement data, and the first requirement data is used for representing the requirement for detecting each piece of sub-video information; the first determining sub-module is for determining a quality assessment indicator for each candidate video frame based on the at least one sub-video information.
In the above embodiments of the present application, the apparatus further includes: a first response unit.
The first response unit is used for responding to the first adjusting instruction and adjusting at least one target detection module in at least one detection module.
In the above embodiments of the present application, the first determining sub-module includes: a first weighting submodule.
The first weighting submodule is used for weighting and summing the numerical value of each piece of sub-video information and the corresponding weight under the condition that the number of at least one piece of sub-video information is multiple to obtain the quality evaluation index of each candidate video frame, wherein the weight is used for expressing the influence degree of each piece of sub-video information on the quality evaluation index of each candidate video frame.
In the above embodiments of the present application, the sub-video information includes at least one of: exposure level of each candidate video frame; a degree of ambiguity for each candidate video frame; the degree to which the content of each candidate video frame matches the target content; parameters of the target object in each candidate video frame.
In the above embodiments of the present application, the first selecting unit includes: the device comprises a first sequencing module and a first selecting module.
The first ordering module is used for ordering a plurality of candidate video frames based on the quality evaluation index of each candidate video frame; the first selection module is used for selecting at least one target video frame from the sorted candidate video frames.
In the above embodiment of the present application, the first generating unit includes: a first editing module.
The first editing module is used for editing at least one target video frame respectively to obtain at least one sub-video cover of the video cover, wherein the at least one target video frame corresponds to the at least one sub-video cover one to one.
In the above embodiment of the present application, the first editing module includes: and a first editing sub-module.
The first editing submodule is used for editing each target video frame through at least one editing module respectively to obtain at least one sub-video cover, wherein the at least one editing module corresponds to the at least one sub-video cover one to one, each editing module is established based on second requirement data, and the second requirement data is used for representing the requirement for editing each target video frame.
In the above embodiment of the present application, the apparatus further includes: a second response unit.
The second response unit is used for responding to the second adjusting instruction and adjusting at least one target editing module in at least one editing module.
In the above embodiments of the present application, the first editing module includes at least one of: cutting the sub-modules, erasing the sub-modules, beautifying the sub-modules and generating the sub-modules.
The cutting submodule is used for respectively cutting at least one target video frame; the erasing submodule is used for respectively erasing the target display information in at least one target video frame; the beautification submodule is used for respectively beautifying at least one target video frame; the generation submodule is used for respectively generating a dynamic image from at least one target video frame.
In the above embodiment of the present application, the apparatus further includes: the system comprises a thirteenth acquisition unit, a first determination unit and a first distribution unit.
The thirteenth acquisition unit is used for acquiring user portrait data; the first determining unit is used for determining tag information matched with the user portrait data; and the first distribution unit is configured to distribute the sub-video cover identified by the tag information to a client corresponding to the user portrait data.
In the above embodiment of the present application, the apparatus further includes: a preprocessing unit and an extraction unit.
The preprocessing unit is used for preprocessing a target video; the extraction unit is used for extracting a plurality of candidate video frames from the target video, and comprises the following steps: and extracting a plurality of candidate video frames from the preprocessed target video.
In the above embodiments of the present application, the preprocessing unit includes: and a processing module.
The processing module is used for preprocessing the target video through at least one preprocessing module respectively to obtain at least one preprocessing result, wherein the at least one preprocessing module corresponds to the at least one preprocessing result one to one, each preprocessing module is established based on third requirement data, and the third requirement data is used for representing the requirement for preprocessing the target video.
In the above embodiment of the present application, the apparatus further includes: a third response unit.

The third response unit is used for responding to the third adjustment instruction and adjusting at least one target preprocessing module in the at least one preprocessing module.
In the above embodiments of the present application, the preprocessing unit includes at least one of: the system comprises a screenshot processing module, a second detection module, a grading module, a segmentation processing module and a clustering processing module.
The screenshot processing module is used for performing screenshot processing on the target video; the second detection module is used for detecting a target object in the target video; the scoring module is used for scoring the image of the target video; the segmentation processing module is used for carrying out segmentation processing on the target video; the clustering processing module is used for clustering the target video.
Example 9
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the video processing method, as shown in fig. 11, the apparatus 1100 includes: a third acquisition unit 1102, a first display unit 1104.
The third acquisition unit is used for acquiring a target video to be processed in response to an image input instruction acting on the operation interface; the first display unit is used for displaying a video cover of the target video on the operation interface in response to a cover generation instruction acting on the operation interface, wherein the video cover is generated from at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames based on a target evaluation index of each candidate video frame, the candidate video frames are extracted from the target video, the target evaluation index is used for representing the quality degree of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames other than the at least one target video frame.
It should be noted here that the third acquiring unit 1102 and the first displaying unit 1104 correspond to steps S402 to S404 in embodiment 2, and the two units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 10
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the above-described video processing method, as shown in fig. 12, the apparatus 1200 includes: a fourth obtaining unit 1202, a fifth obtaining unit 1204, a second selecting unit 1206, and a second generating unit 1208.
The fourth acquisition unit is used for acquiring a monitoring video of the road section and extracting a plurality of candidate video frames from the monitoring video, wherein the candidate video frames comprise information of vehicles driving through the road section; a fifth obtaining unit, configured to obtain a target evaluation index of each candidate video frame, where the target evaluation index is used to indicate a quality degree of the corresponding candidate video frame; the second selecting unit is used for selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a second generating unit for generating a video cover of the surveillance video based on the at least one target video frame, wherein the video cover includes information of target vehicles traveling through the road section.
It should be noted here that the fourth obtaining unit 1202, the fifth obtaining unit 1204, the second selecting unit 1206, and the second generating unit 1208 correspond to steps S502 to S508 in embodiment 3, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 11
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the above-mentioned video processing method, as shown in fig. 13, the apparatus 1300 includes: a sixth obtaining unit 1302, a seventh obtaining unit 1304, a third selecting unit 1306, and a third generating unit 1308.
The sixth acquiring unit is used for acquiring a teaching video and extracting a plurality of candidate video frames from the teaching video, wherein the plurality of candidate video frames comprise object information of different teaching objects, and the teaching objects comprise teachers and teaching contents of different types; a seventh obtaining unit, configured to obtain a target evaluation index of each candidate video frame, where the target evaluation index is used to indicate a quality degree of the corresponding candidate video frame; the third selecting unit is used for selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a third generating unit configured to generate a video cover of the teaching video based on the at least one target video frame, wherein the video cover includes object information of the target teaching object.
It should be noted here that the sixth obtaining unit 1302, the seventh obtaining unit 1304, the third selecting unit 1306, and the third generating unit 1308 correspond to steps S602 to S608 in embodiment 4, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 12
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the video processing method, as shown in fig. 14, the apparatus 1400 includes: an eighth obtaining unit 1402, a ninth obtaining unit 1404, a fourth selecting unit 1406, and a fourth generating unit 1408.
The eighth acquiring unit is used for acquiring a live broadcast video from the live broadcast platform and extracting a plurality of candidate video frames from the live broadcast video; a ninth obtaining unit, configured to obtain a target evaluation index of each candidate video frame, where the target evaluation index is used to indicate a quality degree of the corresponding candidate video frame; the fourth selecting unit is used for selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a fourth generation unit configured to generate a video cover of the live video based on the at least one target video frame; and issuing the video cover of the live video to a live platform for displaying.
It should be noted here that the eighth obtaining unit 1402, the ninth obtaining unit 1404, the fourth selecting unit 1406, and the fourth generating unit 1408 correspond to steps S702 to S708 in embodiment 5, and the four units are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 13
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the video processing method, as shown in fig. 15, the apparatus 1500 includes: tenth acquisition unit 1502, first upload unit 1504, and first receive unit 1506.
The tenth acquiring unit is used for acquiring a target video to be processed; the first uploading unit is used for uploading the target video to the server; the first receiving unit is used for receiving a video cover of a target video returned by the server, wherein the video cover is generated by the server based on at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames by the server based on a target evaluation index of each candidate video frame in the plurality of candidate video frames, the plurality of candidate video frames are extracted from the target video by the server, the target evaluation index is used for representing the quality degree of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the plurality of candidate video frames.
It should be noted here that the tenth acquiring unit 1502, the first uploading unit 1504, and the first receiving unit 1506 correspond to steps S802 to S806 in embodiment 6, and the three units are the same as the corresponding steps in the implementation example and application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 14
According to an embodiment of the present application, there is also provided a video processing apparatus for implementing the video processing method, as shown in fig. 16, the apparatus 1600 includes: an eleventh obtaining unit 1602, a twelfth obtaining unit 1604, a fifth selecting unit 1606, a fifth generating unit 1608, and a first output unit 1610.
The eleventh acquiring unit is configured to acquire a target video to be processed by calling a first interface, and extract a plurality of candidate video frames from the target video, where the first interface includes a first parameter, and a parameter value of the first parameter is the target video; a twelfth acquiring unit, configured to acquire a target evaluation index of each candidate video frame, where the target evaluation index is used to indicate a quality degree of the corresponding candidate video frame; the fifth selecting unit is used for selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a fifth generating unit for generating a video cover of the target video based on the at least one target video frame; and the first output unit is used for outputting the video cover of the target video by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the video cover of the target video.
It should be noted here that the eleventh obtaining unit 1602, the twelfth obtaining unit 1604, the fifth selecting unit 1606, the fifth generating unit 1608, and the first output unit 1610 correspond to steps S902 to S910 in embodiment 7, and the five units are the same as the examples and application scenarios realized by the corresponding steps, but are not limited to the contents disclosed in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
It should be noted that the preferred embodiments described in the above examples of the present application are the same as the schemes, application scenarios, and implementation procedures provided in example 1, but are not limited to the schemes provided in example 1.
Example 15
According to an embodiment of the present application, there is also provided a video processing system including:
a processor;
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: acquiring a target video to be processed, and extracting a plurality of candidate video frames from the target video; acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality degree of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a video cover for the target video is generated based on the at least one target video frame.
Example 16
The embodiment of the application can provide a computer terminal, and the computer terminal can be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the video processing method: acquiring a target video to be processed, and extracting a plurality of candidate video frames from the target video; acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality degree of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a video cover for the target video is generated based on the at least one target video frame.
Alternatively, fig. 17 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 17, the computer terminal a may include: one or more processors 1702 (only one of which is shown), and a memory 1704.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the video processing method and apparatus in the embodiments of the present application, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, so as to implement the video processing method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the computer terminal a via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor can call the information and application program stored in the memory through the transmission device to execute the following steps: acquiring a target video to be processed, and extracting a plurality of candidate video frames from the target video; acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality degree of the corresponding candidate video frame; selecting at least one target video frame from the candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation index of the candidate video frames except the at least one target video frame in the candidate video frames; a video cover for the target video is generated based on the at least one target video frame.
Optionally, the processor may further execute program code for the following step: obtaining the quality evaluation index of each candidate video frame by detecting the video information of each candidate video frame.
Optionally, the processor may further execute program code for the following steps: detecting sub-video information in the video information through at least one detection module to obtain at least one piece of sub-video information, wherein the at least one detection module corresponds one-to-one with the at least one piece of sub-video information, each detection module is established based on first requirement data, and the first requirement data represents the requirement for detecting each piece of sub-video information; and determining the quality evaluation index of each candidate video frame based on the at least one piece of sub-video information.
Optionally, the processor may further execute program code for the following step: adjusting at least one target detection module among the at least one detection module in response to a first adjusting instruction.
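One possible reading of the "detection module" and "first adjusting instruction" language is a run-time registry of per-requirement detectors; the sketch below is an assumed structure, not the patented design, and all names are hypothetical.

from typing import Callable, Dict, Optional
import numpy as np

# A detection module maps one frame to one piece of sub-video information.
DetectionModule = Callable[[np.ndarray], float]

class DetectorRegistry:
    def __init__(self) -> None:
        self._modules: Dict[str, DetectionModule] = {}

    def register(self, name: str, module: DetectionModule) -> None:
        # Establish a module for one detection requirement ("first requirement data").
        self._modules[name] = module

    def adjust(self, name: str, module: Optional[DetectionModule] = None) -> None:
        # Respond to an adjusting instruction: replace or remove a target module.
        if module is None:
            self._modules.pop(name, None)
        else:
            self._modules[name] = module

    def detect(self, frame: np.ndarray) -> Dict[str, float]:
        # Run every registered module; each yields one sub-video information value.
        return {name: module(frame) for name, module in self._modules.items()}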
Optionally, the processor may further execute program code for the following step: when there are multiple pieces of sub-video information, performing a weighted summation of the value of each piece of sub-video information with its corresponding weight to obtain the quality evaluation index of each candidate video frame, wherein each weight represents the degree to which the corresponding piece of sub-video information influences the quality evaluation index of each candidate video frame.
Optionally, the sub-video information includes at least one of: an exposure degree of each candidate video frame; a degree of blur of each candidate video frame; the degree to which the content of each candidate video frame matches target content; and parameters of a target object in each candidate video frame.
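As a concrete, hedged illustration of the two preceding paragraphs, the sketch below computes an exposure value (distance of mean luminance from mid-grey) and a blur value (variance of the Laplacian), then combines them by weighted summation; the measures, the normalising constant, and the weights are all assumptions.

import cv2

def exposure_degree(frame):
    # Sub-video information: how close mean luminance is to mid-grey (0..1).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return 1.0 - abs(float(gray.mean()) - 128.0) / 128.0

def sharpness_degree(frame):
    # Sub-video information: inverse blur degree, squashed into 0..1.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    v = cv2.Laplacian(gray, cv2.CV_64F).var()
    return v / (v + 100.0)  # 100.0 is an arbitrary normalising constant

def quality_evaluation_index(frame, weights=None):
    # Weighted summation of each sub-video information value with its weight.
    weights = weights or {"exposure": 0.4, "sharpness": 0.6}
    values = {"exposure": exposure_degree(frame),
              "sharpness": sharpness_degree(frame)}
    return sum(values[name] * w for name, w in weights.items())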
Optionally, the processor may further execute program code for the following steps: ranking the plurality of candidate video frames based on the quality evaluation index of each candidate video frame; and selecting the at least one target video frame from the ranked candidate video frames.
Optionally, the processor may further execute program code for the following step: editing the at least one target video frame respectively to obtain at least one sub-video cover of the video cover, wherein the at least one target video frame corresponds one-to-one with the at least one sub-video cover.
Optionally, the processor may further execute program code for the following step: editing each target video frame through at least one editing module to obtain the at least one sub-video cover, wherein the at least one editing module corresponds one-to-one with the at least one sub-video cover, each editing module is established based on second requirement data, and the second requirement data represents the requirement for editing each target video frame.
Optionally, the processor may further execute program code for the following step: adjusting at least one target editing module among the at least one editing module in response to a second adjusting instruction.
Optionally, the processor may further execute program code for the following steps: cropping the at least one target video frame respectively; erasing target display information in the at least one target video frame respectively; beautifying the at least one target video frame respectively; and generating a dynamic image from the at least one target video frame respectively.
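The four editing operations listed above might look like the following sketch; the inpainting-based erasure, the bilateral-filter "beautification", and the GIF writer are assumptions about one plausible realisation, not the patented editing modules.

import cv2
import numpy as np
import imageio

def crop(frame, x, y, w, h):
    # Cut a region of interest out of a target video frame.
    return frame[y:y + h, x:x + w]

def erase_region(frame, x, y, w, h):
    # Erase target display information (e.g., an on-screen caption) by inpainting.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    return cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)

def beautify(frame):
    # Mild denoising plus a small contrast/brightness lift.
    smoothed = cv2.bilateralFilter(frame, 9, 75, 75)
    return cv2.convertScaleAbs(smoothed, alpha=1.1, beta=5)

def to_dynamic_image(frames, path="cover.gif"):
    # Generate a dynamic image (GIF) from several target video frames;
    # OpenCV frames are BGR, so convert to RGB first. `duration` is seconds
    # per frame in the imageio v2 API.
    rgb = [cv2.cvtColor(f, cv2.COLOR_BGR2RGB) for f in frames]
    imageio.mimsave(path, rgb, duration=0.2)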
Optionally, the processor may further execute program code for the following steps: acquiring user portrait data; determining tag information that matches the user portrait data; and distributing the sub-video cover identified by the tag information to a client corresponding to the user portrait data.
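The distribution step can be read as simple tag matching between a user portrait and labelled sub-covers; the dictionary-based sketch below is one assumed realisation, with hypothetical field names.

def pick_sub_cover(user_portrait, sub_covers):
    # user_portrait: e.g. {"interests": {"sports", "travel"}}
    # sub_covers: tag -> path of the sub-video cover identified by that tag.
    for tag in user_portrait.get("interests", set()):
        if tag in sub_covers:
            return sub_covers[tag]    # deliver the sub-cover matching the portrait
    return sub_covers.get("default")  # otherwise fall back to a default cover

# Example: pick_sub_cover({"interests": {"sports"}},
#                         {"sports": "cover_sports.png", "default": "cover_0.png"})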
Optionally, the processor may further execute program code for the following steps: preprocessing the target video; and extracting the plurality of candidate video frames from the preprocessed target video.
Optionally, the processor may further execute program code for the following step: preprocessing the target video through at least one preprocessing module respectively to obtain at least one preprocessing result, wherein the at least one preprocessing module corresponds one-to-one with the at least one preprocessing result, each preprocessing module is established based on third requirement data, and the third requirement data represents the requirement for preprocessing the target video.
Optionally, the processor may further execute program code for the following step: adjusting at least one target preprocessing module among the at least one preprocessing module in response to a third adjusting instruction.
Optionally, preprocessing the target video includes at least one of: performing screenshot processing on the target video; detecting a target object in the target video; scoring images of the target video; segmenting the target video; and clustering the target video.
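As one hedged example of the segmentation preprocessing named above, the sketch below finds candidate shot boundaries by comparing colour histograms of consecutive frames; the Bhattacharyya distance, the bin counts, and the threshold are assumptions rather than the patented preprocessing modules.

import cv2

def shot_boundaries(video_path, threshold=0.5):
    # Segment a target video by flagging frames whose colour histogram
    # differs sharply from the previous frame's.
    cap = cv2.VideoCapture(video_path)
    prev_hist, boundaries, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:  # a large distance suggests a shot change
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries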
It can be understood by those skilled in the art that the structure shown in fig. 17 is only illustrative; the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 17 does not limit the structure of the electronic device; for example, the computer terminal A may include more or fewer components (e.g., network interfaces or display devices) than shown in fig. 17, or have a different configuration from that shown in fig. 17.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Example 17
Embodiments of the present application also provide a computer-readable storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for executing the video processing method provided in the foregoing embodiments.
Optionally, in this embodiment, the storage medium may be located in any computer terminal in a computer terminal group in a computer network, or in any mobile terminal in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring a target video to be processed, and extracting a plurality of candidate video frames from the target video; acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality of the corresponding candidate video frame; selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames; and generating a video cover of the target video based on the at least one target video frame.
Optionally, the storage medium is further configured to store program code for performing the following step: obtaining the quality evaluation index of each candidate video frame by detecting the video information of each candidate video frame.
Optionally, the storage medium is further configured to store program code for performing the following steps: detecting sub-video information in the video information through at least one detection module to obtain at least one piece of sub-video information, wherein the at least one detection module corresponds one-to-one with the at least one piece of sub-video information, each detection module is established based on first requirement data, and the first requirement data represents the requirement for detecting each piece of sub-video information; and determining the quality evaluation index of each candidate video frame based on the at least one piece of sub-video information.
Optionally, the storage medium is further configured to store program code for performing the following step: adjusting at least one target detection module among the at least one detection module in response to a first adjusting instruction.
Optionally, the storage medium is further configured to store program code for performing the following step: when there are multiple pieces of sub-video information, performing a weighted summation of the value of each piece of sub-video information with its corresponding weight to obtain the quality evaluation index of each candidate video frame, wherein each weight represents the degree to which the corresponding piece of sub-video information influences the quality evaluation index of each candidate video frame.
Optionally, the sub-video information includes at least one of: an exposure degree of each candidate video frame; a degree of blur of each candidate video frame; the degree to which the content of each candidate video frame matches target content; and parameters of a target object in each candidate video frame.
Optionally, the storage medium is further configured to store program code for performing the following steps: ranking the plurality of candidate video frames based on the quality evaluation index of each candidate video frame; and selecting the at least one target video frame from the ranked candidate video frames.
Optionally, the storage medium is further configured to store program code for performing the following step: editing the at least one target video frame respectively to obtain at least one sub-video cover of the video cover, wherein the at least one target video frame corresponds one-to-one with the at least one sub-video cover.
Optionally, the storage medium is further configured to store program code for performing the following step: editing each target video frame through at least one editing module to obtain the at least one sub-video cover, wherein the at least one editing module corresponds one-to-one with the at least one sub-video cover, each editing module is established based on second requirement data, and the second requirement data represents the requirement for editing each target video frame.
Optionally, the storage medium is further configured to store program code for performing the following step: adjusting at least one target editing module among the at least one editing module in response to a second adjusting instruction.
Optionally, the storage medium is further configured to store program code for performing the following steps: cropping the at least one target video frame respectively; erasing target display information in the at least one target video frame respectively; beautifying the at least one target video frame respectively; and generating a dynamic image from the at least one target video frame respectively.
Optionally, the storage medium is further configured to store program code for performing the following steps: acquiring user portrait data; determining tag information that matches the user portrait data; and distributing the sub-video cover identified by the tag information to a client corresponding to the user portrait data.
Optionally, the storage medium is further configured to store program code for performing the following steps: preprocessing the target video; and extracting the plurality of candidate video frames from the preprocessed target video.
Optionally, the storage medium is further configured to store program code for performing the following step: preprocessing the target video through at least one preprocessing module respectively to obtain at least one preprocessing result, wherein the at least one preprocessing module corresponds one-to-one with the at least one preprocessing result, each preprocessing module is established based on third requirement data, and the third requirement data represents the requirement for preprocessing the target video.
Optionally, the storage medium is further configured to store program code for performing the following step: adjusting at least one target preprocessing module among the at least one preprocessing module in response to a third adjusting instruction.
Optionally, preprocessing the target video includes at least one of: performing screenshot processing on the target video; detecting a target object in the target video; scoring images of the target video; segmenting the target video; and clustering the target video.
The serial numbers of the above embodiments of the present application are for description only and do not imply any ranking of the embodiments.
In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of units is merely a logical function division, and an actual implementation may use another division: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as falling within the protection scope of the present application.

Claims (15)

1. A video processing method, comprising:
acquiring a target video to be processed, and extracting a plurality of candidate video frames from the target video;
acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality of the corresponding candidate video frame;
selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames in the plurality of candidate video frames;
generating a video cover of the target video based on the at least one target video frame.
2. The method of claim 1, wherein acquiring the target evaluation index of each candidate video frame comprises:
detecting video information of each candidate video frame to obtain a quality evaluation index of each candidate video frame.
3. The method of claim 2, wherein detecting video information of each of the candidate video frames to obtain a quality assessment indicator of each of the candidate video frames comprises:
detecting sub-video information in the video information through at least one detection module to obtain at least one piece of sub-video information, wherein the at least one detection module is in one-to-one correspondence with the at least one piece of sub-video information, each detection module is established based on first requirement data, and the first requirement data represents the requirement for detecting each piece of sub-video information;
determining the quality evaluation index of each candidate video frame based on the at least one piece of sub-video information.
4. The method of claim 3, further comprising:
adjusting at least one target detection module among the at least one detection module in response to a first adjusting instruction.
5. The method of claim 1, wherein generating a video cover for the target video based on the at least one target video frame comprises:
editing the at least one target video frame respectively to obtain at least one sub-video cover of the video cover, wherein the at least one target video frame is in one-to-one correspondence with the at least one sub-video cover.
6. The method of claim 5, wherein editing the at least one target video frame separately to obtain at least one sub-video cover of the video cover comprises:
editing each target video frame through at least one editing module to obtain the at least one sub-video cover, wherein the at least one editing module is in one-to-one correspondence with the at least one sub-video cover, each editing module is established based on second requirement data, and the second requirement data represents the requirement for editing each target video frame.
7. The method of claim 6, further comprising:
adjusting at least one target editing module among the at least one editing module in response to a second adjusting instruction.
8. The method of claim 5, wherein after the at least one target video frame is edited to obtain at least one sub-video cover of the video cover, the method further comprises:
acquiring user portrait data;
determining tag information that matches the user portrait data;
distributing the sub-video cover identified by the tag information to a client corresponding to the user portrait data.
9. A video processing method, comprising:
acquiring a target video to be processed in response to an image input instruction acting on an operation interface;
displaying a video cover of the target video on the operation interface in response to a cover generation instruction acting on the operation interface, wherein the video cover is generated from at least one target video frame, the at least one target video frame is selected from a plurality of candidate video frames based on a target evaluation index of each candidate video frame, the plurality of candidate video frames are extracted from the target video, the target evaluation index is used for representing the quality of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames.
10. A video processing method, comprising:
acquiring a monitoring video of a road section, and extracting a plurality of candidate video frames from the monitoring video, wherein the candidate video frames comprise information of vehicles running through the road section;
acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality of the corresponding candidate video frame;
selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames in the plurality of candidate video frames;
generating a video cover of the monitoring video based on the at least one target video frame, wherein the video cover includes information of a target vehicle traveling through the road section.
11. A video processing method, comprising:
acquiring a teaching video, and extracting a plurality of candidate video frames from the teaching video, wherein the candidate video frames include object information of different teaching objects, and the teaching objects include teachers and different types of teaching content;
acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality of the corresponding candidate video frame;
selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames in the plurality of candidate video frames;
generating a video cover of the teaching video based on the at least one target video frame, wherein the video cover includes object information of a target teaching object.
12. A video processing method, comprising:
acquiring a live broadcast video from a live broadcast platform, and extracting a plurality of candidate video frames from the live broadcast video;
acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality of the corresponding candidate video frame;
selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames in the plurality of candidate video frames;
generating a video cover of the live video based on the at least one target video frame;
and issuing the video cover of the live video to the live platform for display.
13. A video processing method, comprising:
a client acquires a target video to be processed;
the client uploads the target video to a server;
the client receives a video cover of the target video returned by the server, wherein the video cover is generated by the server based on at least one target video frame, the at least one target video frame is selected by the server from a plurality of candidate video frames based on a target evaluation index of each candidate video frame, the plurality of candidate video frames are extracted from the target video by the server, the target evaluation index is used for representing the quality of the corresponding candidate video frame, and the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames in the plurality of candidate video frames.
14. A video processing method, comprising:
acquiring a target video to be processed by calling a first interface, and extracting a plurality of candidate video frames from the target video, wherein the first interface comprises a first parameter, and a parameter value of the first parameter is the target video;
acquiring a target evaluation index of each candidate video frame, wherein the target evaluation index is used for representing the quality of the corresponding candidate video frame;
selecting at least one target video frame from the plurality of candidate video frames based on the target evaluation index of each candidate video frame, wherein the target evaluation index of the at least one target video frame exceeds the target evaluation indexes of the remaining candidate video frames in the plurality of candidate video frames;
generating a video cover of the target video based on the at least one target video frame;
and outputting the video cover of the target video by calling a second interface, wherein the second interface comprises a second parameter, and the parameter value of the second parameter is the video cover of the target video.
15. A computer-readable storage medium, comprising a stored program, wherein the program, when executed by a processor, controls an apparatus in which the computer-readable storage medium is located to perform the method of any of claims 1-14.
CN202110821055.9A 2021-07-20 2021-07-20 Video processing method and storage medium Pending CN113784152A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110821055.9A CN113784152A (en) 2021-07-20 2021-07-20 Video processing method and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110821055.9A CN113784152A (en) 2021-07-20 2021-07-20 Video processing method and storage medium

Publications (1)

Publication Number Publication Date
CN113784152A true CN113784152A (en) 2021-12-10

Family

ID=78836147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110821055.9A Pending CN113784152A (en) 2021-07-20 2021-07-20 Video processing method and storage medium

Country Status (1)

Country Link
CN (1) CN113784152A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832725A (en) * 2017-11-17 2018-03-23 北京奇虎科技有限公司 Video front cover extracting method and device based on evaluation index
CN108650524A (en) * 2018-05-23 2018-10-12 腾讯科技(深圳)有限公司 Video cover generation method, device, computer equipment and storage medium
CN108833942A (en) * 2018-06-28 2018-11-16 北京达佳互联信息技术有限公司 Video cover choosing method, device, computer equipment and storage medium
CN108965922A (en) * 2018-08-22 2018-12-07 广州酷狗计算机科技有限公司 Video cover generation method, device and storage medium
CN109996091A (en) * 2019-03-28 2019-07-09 苏州八叉树智能科技有限公司 Generate method, apparatus, electronic equipment and the computer readable storage medium of video cover
CN110879851A (en) * 2019-10-15 2020-03-13 北京三快在线科技有限公司 Video dynamic cover generation method and device, electronic equipment and readable storage medium
CN110909205A (en) * 2019-11-22 2020-03-24 北京金山云网络技术有限公司 Video cover determination method and device, electronic equipment and readable storage medium
CN111935505A (en) * 2020-07-29 2020-11-13 广州华多网络科技有限公司 Video cover generation method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110166827B (en) Video clip determination method and device, storage medium and electronic device
US11914639B2 (en) Multimedia resource matching method and apparatus, storage medium, and electronic apparatus
CN105373938A (en) Method for identifying commodity in video image and displaying information, device and system
CN108236784B (en) Model training method and device, storage medium and electronic device
US20220172476A1 (en) Video similarity detection method, apparatus, and device
CN104837059A (en) Video processing method, device and system
CN113840049A (en) Image processing method, video flow scene switching method, device, equipment and medium
CN113382301A (en) Video processing method, storage medium and processor
CN111739027A (en) Image processing method, device and equipment and readable storage medium
KR102241486B1 (en) Method that provides and creates mosaic image based on image tag-word
CN113627402B (en) Image identification method and related device
CN111432206A (en) Video definition processing method and device based on artificial intelligence and electronic equipment
CN112102157A (en) Video face changing method, electronic device and computer readable storage medium
CN114845149B (en) Video clip method, video recommendation method, device, equipment and medium
CN114372172A (en) Method and device for generating video cover image, computer equipment and storage medium
CN113269781A (en) Data generation method and device and electronic equipment
CN113784152A (en) Video processing method and storage medium
Ejaz et al. Video summarization by employing visual saliency in a sufficient content change method
CN114500879A (en) Video data processing method, device, equipment and storage medium
CN114237800A (en) File processing method, file processing device, electronic device and medium
CN112449249A (en) Video stream processing method and device, electronic equipment and storage medium
CN112312207A (en) Method, device and equipment for getting through traffic between smart television terminal and mobile terminal
CN112925972B (en) Information pushing method, device, electronic equipment and storage medium
CN115858854B (en) Video data sorting method and device, electronic equipment and storage medium
CN113542866B (en) Video processing method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination