CN114220048A - Method and apparatus for video content processing, electronic device, and storage medium - Google Patents

Method and apparatus for video content processing, electronic device, and storage medium

Info

Publication number
CN114220048A
Authority
CN
China
Prior art keywords
video frame
frame sequence
video
sequence
time point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111452420.XA
Other languages
Chinese (zh)
Inventor
王赛赛
晋瑞锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202111452420.XA priority Critical patent/CN114220048A/en
Publication of CN114220048A publication Critical patent/CN114220048A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Abstract

The application provides a method and apparatus for video content processing, an electronic device, and a storage medium. The method includes: acquiring a transition time point of a video to be processed; acquiring a first video frame sequence and a second video frame sequence corresponding to the transition time point; obtaining a sequence group consisting of the first and second video frame sequences, calculating the luminance feature distance between every two video frames in the group, and determining the feature extrema; and comparing the feature extrema against a first preset threshold and merging the first and second video frame sequences based on the comparison result. The method and apparatus solve the problems in the related art that the start and end time points of a video are determined inaccurately and the video content is split unreasonably.

Description

Method and apparatus for video content processing, electronic device, and storage medium
Technical Field
The present application relates to the field of video processing, and in particular, to a method and an apparatus for processing video content, an electronic device, and a storage medium.
Background
At present, video platforms often rely on shot-segmentation results, which are further processed before being put to use. For example, all shot segments in a long video are detected and split, and after further tag detection, the shot start and end frames are stored together with the tags in a video material library. When material is produced later, complete shot segments carrying a given tag can be retrieved directly from the library and then mixed, edited, and composited; the resulting video clips are finally distributed and promoted on the platform, or distributed off-platform to draw traffic.
Existing video-splitting schemes consider only pixel-value features of the picture frames, such as a fully black screen. In special cases, however, the content immediately to the left and right of the black screen may be connected by the plot, and splitting that relies solely on the existing algorithm cannot form segments with coherent content, so the start and end time points of the split segments are unreasonable.
Disclosure of Invention
The application provides a method and apparatus for video content processing, an electronic device, and a storage medium, which at least solve the problems in the related art that the start and end time points of a video are determined inaccurately and the video content is split unreasonably.
According to an aspect of an embodiment of the present application, there is provided a method of video content processing, the method including:
acquiring a transition time point of a video to be processed;
acquiring a first video frame sequence and a second video frame sequence corresponding to the transition time point, where the first video frame sequence is the video frame sequence temporally left-adjacent to the transition time point and the second video frame sequence is the video frame sequence temporally right-adjacent to it;
obtaining a sequence group consisting of the first video frame sequence and the second video frame sequence, calculating the luminance feature distance between every two video frames in the sequence group, and determining the feature extrema, where the feature extrema comprise the maximum and the minimum of the luminance feature distances;
and comparing the feature extrema against a first preset threshold, and merging the first video frame sequence and the second video frame sequence based on the comparison result.
There is also provided, in accordance with another aspect of an embodiment of the present application, a method of video material generation, the method including:
acquiring a positioning identifier of the transition time point in the video to be processed;
and merging the video frame sequence positioned in front of the positioning identifier and the video frame sequence positioned behind the positioning identifier to generate a merged video material.
According to another aspect of the embodiments of the present application, there is also provided an apparatus for video content processing, the apparatus including:
the first acquisition unit is used for acquiring transition time points of the video to be processed;
a second obtaining unit, configured to obtain a first video frame sequence and a second video frame sequence corresponding to the transition time point, where the first video frame sequence is the video frame sequence temporally left-adjacent to the transition time point and the second video frame sequence is the video frame sequence temporally right-adjacent to it;
a third obtaining unit, configured to obtain a sequence group consisting of the first video frame sequence and the second video frame sequence, calculate the luminance feature distance between every two video frames in the sequence group, and determine the feature extrema, where the feature extrema comprise the maximum and the minimum of the luminance feature distances;
and a first merging unit, configured to compare the feature extrema against a first preset threshold and merge the first video frame sequence and the second video frame sequence based on the comparison result.
According to still another aspect of the embodiments of the present application, there is also provided an apparatus for video material generation, which performs video material generation using respective units and modules of an apparatus for video content processing, the apparatus including:
a seventh obtaining unit, configured to obtain a location identifier of the transition time point in the video to be processed;
and the second merging unit is used for merging the video frame sequence positioned in front of the positioning identifier and the video frame sequence positioned behind the positioning identifier to generate a merged video material.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus; wherein the memory is used for storing the computer program; a processor for performing the method steps in any of the above embodiments by running the computer program stored on the memory.
According to a further aspect of the embodiments of the present application, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the method steps of any of the above embodiments when the computer program is executed.
In the embodiments of the application, a transition time point of a video to be processed is acquired; a first video frame sequence and a second video frame sequence corresponding to the transition time point are acquired, where the first video frame sequence is the video frame sequence temporally left-adjacent to the transition time point and the second video frame sequence is the video frame sequence temporally right-adjacent to it; a sequence group consisting of the two sequences is obtained, the luminance feature distance between every two video frames in the group is calculated, and the feature extrema (the maximum and the minimum of the luminance feature distances) are determined; the feature extrema are compared against a first preset threshold, and the two sequences are merged based on the comparison result. By calculating the luminance feature distances over the video frame sequences left- and right-adjacent to the transition time point, obtaining the feature extrema, comparing them with the first preset threshold, and deciding whether to merge the adjacent sequences, the embodiments avoid the poor content continuity caused by unreasonable start and end time points of split segments, yield more usable complete shot segments, provide more usable material for mashup editing, and thus solve the problems in the related art that the start and end time points of a video are determined inaccurately and the video content is split unreasonably.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain its principles.
To describe the technical solutions of the embodiments of the present application or of the prior art more clearly, the drawings required for the description are briefly introduced below; those skilled in the art can derive other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment for an alternative method of video content processing according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an alternative method of video content processing according to an embodiment of the present application;
FIG. 3 is a schematic overall flow chart of an alternative method for determining a shot change point according to an embodiment of the present application;
FIG. 4 is a block diagram of an alternative video content processing apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an alternative electronic device according to an embodiment of the present application.
Detailed Description
To make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present application, there is provided a method of video content processing. Alternatively, in the present embodiment, the method for processing video content described above may be applied to a hardware environment as shown in fig. 1. As shown in fig. 1, the terminal 102 may include a memory 104, a processor 106, and a display 108 (optional components). The terminal 102 may be communicatively coupled to a server 112 via a network 110, the server 112 may be configured to provide services (e.g., gaming services, application services, etc.) to the terminal or to clients installed on the terminal, and a database 114 may be provided on the server 112 or separate from the server 112 to provide data storage services to the server 112. Additionally, a processing engine 116 may be run in the server 112, and the processing engine 116 may be used to perform the steps performed by the server 112.
Optionally, the terminal 102 may be, but is not limited to, a terminal capable of computation, such as a mobile terminal (e.g., a mobile phone or tablet computer), a notebook computer, or a PC (Personal Computer), and the network may include, but is not limited to, a wireless network or a wired network. The wireless network includes Bluetooth, WIFI (Wireless Fidelity), and other networks enabling wireless communication; the wired network may include, but is not limited to, wide area networks, metropolitan area networks, and local area networks. The server 112 may include, but is not limited to, any hardware device capable of performing computation.
In addition, in the present embodiment, the method for processing video content can also be applied, but not limited to, to an independent processing device with a relatively high processing capability without data interaction. For example, the processing device may be, but is not limited to, a terminal device with a relatively high processing capability, that is, the operations in the above-mentioned method for processing video content may be integrated into a single processing device. The above is merely an example, and this is not limited in this embodiment.
Optionally, in this embodiment, the method for processing the video content may be executed by the server 112, may be executed by the terminal 102, or may be executed by both the server 112 and the terminal 102. The method for the terminal 102 to execute the video content processing according to the embodiment of the present application may also be executed by a client installed thereon.
Taking an example of the method running on a server, fig. 2 is a schematic flowchart of an optional method for processing video content according to an embodiment of the present application, and as shown in fig. 2, the method may include the following steps:
step S201, acquiring a transition time point of a video to be processed;
step S202, a first video frame sequence and a second video frame sequence corresponding to the transition time point are obtained, where the first video frame sequence is the video frame sequence temporally left-adjacent to the transition time point and the second video frame sequence is the video frame sequence temporally right-adjacent to it;
step S203, obtaining a sequence group consisting of the first video frame sequence and the second video frame sequence, calculating the luminance feature distance between every two video frames in the sequence group, and determining the feature extrema, where the feature extrema comprise the maximum and the minimum of the luminance feature distances;
step S204, comparing the feature extrema against a first preset threshold, and merging the first video frame sequence and the second video frame sequence based on the comparison result.
Optionally, in the embodiments of the application, all video frames of the video are first extracted according to its frame rate, and the transition time point of the video is obtained, i.e. the time point corresponding to a black-screen video frame at which a transition occurs. The video frame sequence left-adjacent to the transition time point in chronological order is then taken as the first video frame sequence, and the video frame sequence right-adjacent to it as the second video frame sequence.
A sequence group consisting of the first video frame sequence and the second video frame sequence is obtained; histogram features are then extracted for all video frames in the group, and the timestamp of each video frame, keyed by second, is stored together with its histogram feature, e.g. {'1': [frame_feature], ...}. The luminance feature distance between every two video frames is calculated from the histogram features to obtain the feature extrema, which comprise the maximum and the minimum of the luminance feature distances.
In this embodiment, a first preset threshold is set as the critical value for judging content continuity between the first video frame sequence and the second video frame sequence. The obtained feature extrema are compared against the first preset threshold, and when the comparison result satisfies the merging condition, the first and second video frame sequences are merged.
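As a concrete illustration, the following is a minimal sketch of steps S201-S204 over one transition point, assuming OpenCV and NumPy are available; the histogram size, the L1 distance used as the luminance feature distance, and the fixed sub-threshold values are illustrative assumptions, not the exact scheme of this application.

```python
# A minimal sketch of steps S201-S204 for one transition point (assumptions:
# L1 distance on normalized grayscale histograms; illustrative sub-thresholds).
import itertools
import cv2
import numpy as np

def luminance_histogram(frame: np.ndarray, bins: int = 64) -> np.ndarray:
    """Normalized grayscale histogram used as the frame's luminance feature."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-9)

def feature_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """Luminance feature distance between two frame features (L1 distance here)."""
    return float(np.abs(h1 - h2).sum())

def should_merge(first_seq, second_seq, first_sub=0.65, second_sub=0.2) -> bool:
    """S203/S204: compute the feature extrema over the sequence group and
    compare them against the two sub-thresholds of the first preset threshold."""
    feats = [luminance_histogram(f) for f in list(first_seq) + list(second_seq)]
    dists = [feature_distance(a, b) for a, b in itertools.combinations(feats, 2)]
    d_max, d_min = max(dists), min(dists)          # the feature extrema
    return d_max < first_sub and d_min < second_sub
```

In this sketch, merging happens only when both extrema stay below their sub-thresholds, i.e. when the frames on both sides of the transition remain close in luminance.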
In the embodiments of the application, a transition time point of a video to be processed is acquired; a first video frame sequence and a second video frame sequence corresponding to the transition time point are acquired, where the first video frame sequence is the video frame sequence temporally left-adjacent to the transition time point and the second video frame sequence is the video frame sequence temporally right-adjacent to it; a sequence group consisting of the two sequences is obtained, the luminance feature distance between every two video frames in the group is calculated, and the feature extrema (the maximum and the minimum of the luminance feature distances) are determined; the feature extrema are compared against a first preset threshold, and the two sequences are merged based on the comparison result. By calculating the luminance feature distances over the adjacent sequences, obtaining the feature extrema, comparing them with the first preset threshold, and deciding whether to merge, the embodiments avoid the poor content continuity caused by unreasonable start and end time points of split segments, yield more usable complete shot segments, provide more usable material for mashup editing, and thus solve the problems in the related art that the start and end time points of a video are determined inaccurately and the video content is split unreasonably.
As an alternative embodiment, comparing the feature extrema against the first preset threshold and merging the first video frame sequence and the second video frame sequence based on the comparison result includes:
comparing the maximum of the luminance feature distances with a first sub-threshold to obtain a first comparison result, where the first preset threshold comprises the first sub-threshold;
comparing the minimum of the luminance feature distances with a second sub-threshold to obtain a second comparison result, where the first preset threshold comprises the second sub-threshold;
and determining, according to the first comparison result and the second comparison result, that the first video frame sequence and the second video frame sequence are to be merged.
Optionally, the first preset threshold comprises two sub-thresholds, a first sub-threshold and a second sub-threshold. The maximum of the luminance feature distances is compared with the first sub-threshold; if it is smaller than the first sub-threshold, the first comparison result is obtained. The minimum of the luminance feature distances is compared with the second sub-threshold; if it is smaller than the second sub-threshold, the second comparison result is obtained. When the first and second comparison results are both satisfied, the first video frame sequence and the second video frame sequence are merged.
In the embodiments of the application, comparing the maximum of the luminance feature distances with the first sub-threshold and the minimum with the second sub-threshold, and deciding the merge from the comparison results, enables frame-level identification of video content and yields more usable complete shot segments.
As an alternative embodiment, before comparing the feature extrema against the first preset threshold, the method further comprises:
determining the channel type of the current scene picture;
and determining the first sub-threshold and the second sub-threshold according to the channel type, where the first sub-threshold and/or the second sub-threshold are derived from the channel coefficient corresponding to the channel type.
Optionally, the channel type of the scene pictures in the aforementioned sequence group may be determined empirically, e.g. variety show, cartoon, or TV drama, and the values of the first and second sub-thresholds are then set according to the channel type of the current picture. The first sub-threshold and/or the second sub-threshold are derived from the channel coefficient corresponding to the channel type, preferably by multiplying the channel coefficient by the mean of the luminance feature distances between every two video frames in the sequence group; for example, the first sub-threshold may be set to 0.65 and the second sub-threshold to 0.2 times that mean, and so on.
Meanwhile, the association between the first and second sub-thresholds, the channel type, and the judgment condition can be expressed in code, the judgment condition determining whether the first video frame sequence and the second video frame sequence need to be merged:
(In the published document the judgment-condition code appears only as images, Figure BDA0003386682470000091 and Figure BDA0003386682470000101, which are not reproducible here.)
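In their place, the following is a hedged reconstruction of such a judgment condition assembled from the surrounding text; the channel coefficients and the multipliers 0.65 and 0.2 are the examples given in this description, while the exact way they combine with the mean distance is an assumption.

```python
# Hedged reconstruction of the merge judgment condition (assumed form).
CHANNEL_COEFFICIENTS = {"tv_drama": 0.6, "movie": 0.8, "variety": 1.0, "cartoon": 0.9}

def merge_condition(pair_distances: list, channel: str) -> bool:
    """pair_distances: luminance feature distances between every two frames
    in the sequence group; returns True when the sequences should be merged."""
    mean_dist = sum(pair_distances) / len(pair_distances)
    coeff = CHANNEL_COEFFICIENTS[channel]
    first_sub_threshold = 0.65 * coeff * mean_dist    # assumed combination
    second_sub_threshold = 0.20 * coeff * mean_dist   # assumed combination
    return (max(pair_distances) < first_sub_threshold
            and min(pair_distances) < second_sub_threshold)
```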
In the embodiments of the application, deriving the first and second sub-thresholds from the channel type of the current scene picture makes the merge judgment condition between video frames more accurate and makes complete, coherent shot content easier to obtain.
As an alternative embodiment, the method further comprises:
taking the time point of a black-screen frame among all video frames of the video to be processed as the transition time point, where, when the first video frame sequence corresponding to the transition time point contains a gradual-change shot, the second video frame sequence corresponding to the transition time point contains an abrupt shot; or, when the first video frame sequence corresponding to the transition time point contains an abrupt shot, the second video frame sequence contains a gradual-change shot.
Optionally, in the embodiments of the application, a time point at which a black-screen frame exists among all video frames of the video to be processed may be used as the transition time point. Since a black-screen frame generally arises from a shot switch, particularly at a switch between an abrupt shot and a gradual-change shot, when the first video frame sequence corresponding to the transition time point contains a gradual-change shot, the second video frame sequence contains an abrupt shot; or, when the first video frame sequence contains an abrupt shot, the second video frame sequence contains a gradual-change shot.
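As a concrete illustration, a black-screen frame can be detected by its mean luminance. The following sketch, reusing the OpenCV import from the earlier sketch, turns near-black frames into candidate transition time points; the darkness threshold is an illustrative assumption not specified in this description.

```python
# A sketch of black-frame transition detection; dark_thresh is an assumption.
def transition_time_points(frames, fps: float, dark_thresh: float = 10.0) -> list:
    points = []
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if float(gray.mean()) < dark_thresh:    # near-black frame
            points.append(i / fps)              # timestamp in seconds
    return points
```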
As an alternative embodiment, the manner of determining a gradual-change shot includes:
acquiring a first feature distance between the start frame and the end frame of the first video frame sequence or the second video frame sequence;
when the first feature distance is greater than a second preset threshold, acquiring any two video frames in the first or second video frame sequence, where the second preset threshold is calculated from the feature distances of all pairwise combinations of the video frames in the sequence, and the distance between the two acquired video frames is smaller than a preset value;
determining a second feature distance between the two video frames, and determining that the first or second video frame sequence contains a gradual-change shot when the second feature distance is smaller than a third preset threshold, where the third preset threshold is the product of the first feature distance and a first preset coefficient, the first preset coefficient being a value used to quantitatively adjust the first feature distance.
Optionally, in the embodiments of the application, a first feature distance between the start frame and the end frame of the first or second video frame sequence is acquired; the first feature distance is a histogram feature distance. The histogram characterizes the luminance of the current video frame, so the histogram feature distance measures the luminance difference between two video frames.
The embodiments may store, keyed by second, the timestamps of all video frames in the first or second video frame sequence together with the histogram features of the frames at those timestamps, e.g. {'video frame 1': [frame_feature], ...}, and then calculate the histogram feature distances between video frames, D = {d(1,2), ..., d(k+1,k+2), ..., d(n-1,n)}, where n is the total number of frames in the sequence. A second preset threshold is set in advance; it may be the mean of the feature distances of all pairwise combinations of the video frames in the set D.
When the first feature distance is greater than the second preset threshold, a shot change is considered to have occurred in the first or second video frame sequence, and any two video frames in the sequence are then acquired. Note that the distance between the two video frames should be smaller than a preset value, for example 3 frames, so that the two acquired frames are not too far apart.
A second feature distance between the two selected video frames is calculated and then compared with a third preset threshold.
Because two video frames less than the preset value apart are selected, the range determined by the first feature distance alone may vary in magnitude. To quantify the first feature distance numerically, a first preset coefficient may be set to adjust that range; for example, with the first preset coefficient set to 0.5, the product of the first feature distance and the coefficient serves as the third preset threshold, and the second feature distance is compared against it. When the second feature distance is smaller than the third preset threshold, the shot between the two video frames is a gradual-change shot, i.e. the first or second video frame sequence contains a gradual-change shot.
By judging from these distance-value features whether the first or second video frame sequence contains a gradual-change shot, the embodiments achieve fast identification of gradual-change shots.
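The gradual-change test above can be sketched as follows, reusing `luminance_histogram` and `feature_distance` from the earlier sketch. The 0.5 first preset coefficient and the mean-based second preset threshold follow the examples in the text, while restricting the check to adjacent frame pairs anticipates the positional constraint of the next embodiment; everything else is an illustrative assumption.

```python
from itertools import combinations

def contains_gradual_shot(frames, first_coeff: float = 0.5) -> bool:
    """Fade test: a large start-to-end change composed of small adjacent steps."""
    feats = [luminance_histogram(f) for f in frames]
    # Second preset threshold: mean feature distance over pairwise combinations.
    pair_dists = [feature_distance(a, b) for a, b in combinations(feats, 2)]
    second_threshold = sum(pair_dists) / len(pair_dists)
    # First feature distance: start frame vs end frame of the sequence.
    first_distance = feature_distance(feats[0], feats[-1])
    if first_distance <= second_threshold:
        return False                      # no shot change in this sequence
    # Third preset threshold: quantized fraction of the first feature distance.
    third_threshold = first_coeff * first_distance
    # Adjacent frames whose second feature distance stays small despite a
    # large overall change indicate a gradual rather than abrupt change.
    return any(feature_distance(feats[i], feats[i + 1]) < third_threshold
               for i in range(len(feats) - 1))
```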
As an alternative embodiment, obtaining any two video frames of the first video frame sequence or the second video frame sequence comprises:
two adjacent video frames in the first video frame sequence or the second video frame sequence are obtained.
Optionally, the embodiments further constrain the positional relationship of the two video frames selected from the first or second video frame sequence: two adjacent video frames are chosen as the two frames finally acquired.
Computing the second feature distance over two adjacent video frames pins down more precisely where the shot picture changes.
As an alternative embodiment, when the first feature distance is greater than the second preset threshold, the second preset threshold is obtained from the video frames in the first or second video frame sequence by:
calculating a plurality of third feature distances over all pairwise combinations of the video frames in the first or second video frame sequence;
averaging the plurality of third feature distances to obtain the second preset threshold;
or,
acquiring a second preset coefficient, where the second preset coefficient represents the weight of the first or second video frame sequence containing a changed shot;
and determining the product of the mean and the second preset coefficient as the second preset threshold.
Optionally, when obtaining the second preset threshold of the above embodiment, a plurality of third feature distances are calculated over all pairwise combinations of the video frames in the first or second video frame sequence, and their mean is taken as the second preset threshold.
Alternatively, a second preset coefficient may be set from historical experience or by the video-shot tester; it represents the weight of the first or second video frame sequence containing a changed shot, e.g. 0.5, and the product of the mean and this coefficient is used as the second preset threshold.
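A minimal sketch of these two alternatives, assuming the pairwise ("third") feature distances have already been computed; the function name and the optional-weight interface are illustrative.

```python
# weight = None gives the plain mean; weight = 0.5 gives the weighted variant.
from typing import Optional

def second_preset_threshold(third_distances: list,
                            weight: Optional[float] = None) -> float:
    mean = sum(third_distances) / len(third_distances)
    return mean if weight is None else weight * mean
```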
As an alternative embodiment, the manner of determining an abrupt shot includes:
acquiring a fourth feature distance between a key video frame in the first or second video frame sequence and the frame behind or in front of it, where the key video frame is the last frame of a video clip at the point where the scene picture in the sequence changes, or a black-screen frame of the first or second video frame sequence;
comparing the fourth feature distance with a third preset threshold, where the third preset threshold is the product of the second preset threshold and the channel coefficient corresponding to the current scene picture;
and determining that the first or second video frame sequence contains an abrupt shot when the fourth feature distance is greater than the third preset threshold.
Optionally, a key video frame in the first or second video frame sequence is extracted, together with the frame behind it or the frame in front of it; the feature distance between the key video frame and that neighboring frame is calculated as the fourth feature distance, which is then compared with a third preset threshold, the product of the second preset threshold obtained in the foregoing embodiment and the channel coefficient corresponding to the current scene picture. The channel coefficients may be, for example: 0.6 for TV dramas, 0.8 for movies, 1 for variety shows, 0.9 for cartoons, and so on.
In addition, the key frame may be the last frame of a video clip at the point where the scene picture in the first or second video frame sequence changes, or a black-screen frame in which the scene picture no longer exists.
If the fourth feature distance is greater than the third preset threshold, the first or second video frame sequence is determined to contain an abrupt shot, yielding the abrupt-shot set; if the fourth feature distance is smaller than the third preset threshold, it is determined that no shot change occurs.
By judging from these distance-value features whether the first or second video frame sequence contains an abrupt shot, the embodiments achieve fast identification of abrupt shots.
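A sketch of this test under the stated assumptions, reusing `luminance_histogram` and `feature_distance` from the earlier sketch; the channel coefficients follow the examples above, and addressing the key frame by its index is an illustrative choice.

```python
# Assumed channel coefficients, as in the earlier sketch.
CHANNEL_COEFF = {"tv_drama": 0.6, "movie": 0.8, "variety": 1.0, "cartoon": 0.9}

def contains_abrupt_shot(frames, key_idx: int, second_threshold: float,
                         channel: str) -> bool:
    """Fourth feature distance between the key frame and a neighbor,
    compared against second_threshold x channel coefficient."""
    third_threshold = second_threshold * CHANNEL_COEFF[channel]
    feats = [luminance_histogram(f) for f in frames]
    # Frame behind the key frame if it exists, otherwise the frame in front.
    nbr = key_idx + 1 if key_idx + 1 < len(frames) else key_idx - 1
    fourth_distance = feature_distance(feats[key_idx], feats[nbr])
    return fourth_distance > third_threshold
```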
As an alternative embodiment, the application further provides a method of video material generation that uses the video content processing methods of the foregoing embodiments and includes:
acquiring a positioning identifier of a transition time point in a video to be processed;
and merging the video frame sequence before the positioning identifier and the video frame sequence after the positioning identifier to generate a merged video material.
Optionally, the positioning identifier of the transition time point in the video to be processed is determined, for example represented by a character x; the video frame sequence before the positioning identifier and the video frame sequence after it are then merged to generate a merged video material, so that multiple complete video materials can be obtained.
In the embodiments of the application, merging the video frame sequences before and after the positioning identifier can greatly increase the video material inventory of each channel and provide more material for subsequent operations such as video mashup editing.
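A minimal sketch of this merging step, assuming the merge decision comes from the judgment condition sketched earlier and that the two sequences around the positioning identifier are available as lists; the names are illustrative.

```python
# Assemble video materials around a positioning identifier: sequences that
# pass the merge test become one material, otherwise two separate materials.
def generate_materials(seq_before: list, seq_after: list, merge: bool) -> list:
    if merge:
        return [seq_before + seq_after]   # one coherent material
    return [seq_before, seq_after]        # split at the transition point
```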
As an alternative embodiment, fig. 3 is a schematic overall flow chart of an alternative method for determining a shot change point according to an embodiment of the present application; the flow includes:
acquiring the original video and channel information, and determining gradual-change shots;
extracting video key frames and calculating the feature distance D_key to obtain abrupt shots;
obtaining the sets of gradual-change shots and abrupt shots;
determining, from all video frame features and the corresponding timestamp files, whether moving shots exist in the two shot sets; if so, a shot change point exists.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, an optical disk) and includes several instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided an apparatus for video content processing for implementing the above method of video content processing. Fig. 4 is a block diagram of an alternative apparatus for video content processing according to an embodiment of the present application; as shown in fig. 4, the apparatus may include:
a first obtaining unit 401, configured to obtain a transition time point of a video to be processed;
a second obtaining unit 402, connected to the first obtaining unit 401, configured to obtain a first video frame sequence and a second video frame sequence corresponding to the transition time point, where the first video frame sequence is the video frame sequence temporally left-adjacent to the transition time point and the second video frame sequence is the video frame sequence temporally right-adjacent to it;
a third obtaining unit 403, connected to the second obtaining unit 402, configured to obtain a sequence group consisting of the first and second video frame sequences, calculate the luminance feature distance between every two video frames in the sequence group, and determine the feature extrema, where the feature extrema comprise the maximum and the minimum of the luminance feature distances;
and a first merging unit 404, connected to the third obtaining unit 403, configured to compare the feature extrema against a first preset threshold and merge the first and second video frame sequences based on the comparison result.
It should be noted that the first obtaining unit 401 in this embodiment may be configured to execute the step S201, the second obtaining unit 402 in this embodiment may be configured to execute the step S202, the third obtaining unit 403 in this embodiment may be configured to execute the step S203, and the first merging unit 404 in this embodiment may be configured to execute the step S204.
Through these modules, the luminance feature distances between every two video frames in the sequence group formed by the video frame sequences left- and right-adjacent to the transition time point are calculated to obtain the feature extrema, which are compared with a first preset threshold to decide whether to merge the adjacent sequences. This avoids the poor content continuity caused by unreasonable start and end time points of split segments, yields more usable complete shot segments, provides more usable material for mashup editing, and solves the problems in the related art that the start and end time points of a video are determined inaccurately and the video content is split unreasonably.
As an alternative embodiment, the first merging unit includes: a first comparison module, configured to compare the maximum of the luminance feature distances with a first sub-threshold to obtain a first comparison result, where the first preset threshold comprises the first sub-threshold; a second comparison module, configured to compare the minimum of the luminance feature distances with a second sub-threshold to obtain a second comparison result, where the first preset threshold comprises the second sub-threshold; and a first determining module, configured to determine, according to the first and second comparison results, that the first and second video frame sequences are to be merged.
As an alternative embodiment, the apparatus further comprises: an obtaining unit, configured, before the merge is determined from the first and second comparison results, to obtain the first comparison result when the maximum of the luminance feature distances is smaller than the first sub-threshold, and to obtain the second comparison result when the minimum of the luminance feature distances is smaller than the second sub-threshold.
As an alternative embodiment, the apparatus further comprises: a first determining unit, configured to determine the channel type of the current scene picture before the feature extrema are compared against the first preset threshold; and a second determining unit, configured to determine the first sub-threshold and the second sub-threshold according to the channel type, where the first sub-threshold and/or the second sub-threshold are derived from the channel coefficient corresponding to the channel type.
As an alternative embodiment, the apparatus further comprises: a setting unit, configured to take the time point of a black-screen frame among all video frames of the video to be processed as the transition time point, where, when the first video frame sequence corresponding to the transition time point contains a gradual-change shot, the second video frame sequence contains an abrupt shot; or, when the first video frame sequence contains an abrupt shot, the second video frame sequence contains a gradual-change shot.
As an alternative embodiment, the manner of determining a gradual-change shot involves: a fourth obtaining unit, configured to obtain a first feature distance between the start frame and the end frame of the first or second video frame sequence; a fifth obtaining unit, configured to obtain any two video frames in the first or second video frame sequence when the first feature distance is greater than a second preset threshold, where the second preset threshold is calculated from the feature distances of all pairwise combinations of the video frames in the sequence, and the distance between the two video frames is smaller than a preset value; and a third determining unit, configured to determine a second feature distance between the two video frames and determine that the first or second video frame sequence contains a gradual-change shot when the second feature distance is smaller than a third preset threshold, where the third preset threshold is the product of the first feature distance and a first preset coefficient, the first preset coefficient being a value used to quantitatively adjust the first feature distance.
As an alternative embodiment, the fifth obtaining unit includes: a first obtaining module, configured to obtain two adjacent video frames in the first or second video frame sequence.
As an alternative embodiment, in the fifth obtaining unit, the second preset threshold is obtained by: a second obtaining module, configured to obtain a plurality of third feature distances calculated over all pairwise combinations of the video frames in the first or second video frame sequence; an obtaining module, configured to average the plurality of third feature distances to obtain the second preset threshold; or, a third obtaining module, configured to obtain a second preset coefficient representing the weight of the first or second video frame sequence containing a changed shot; and a second determining module, configured to determine the product of the mean and the second preset coefficient as the second preset threshold.
As an alternative embodiment, the manner of determining an abrupt shot involves: a sixth obtaining unit, configured to obtain a fourth feature distance between a key video frame in the first or second video frame sequence and the frame behind or in front of it, where the key video frame is the last frame of a video clip at the point where the scene picture in the sequence changes, or a black-screen frame in which the scene picture no longer exists; a comparison unit, configured to compare the fourth feature distance with a third preset threshold, where the third preset threshold is the product of the second preset threshold and the channel coefficient corresponding to the current scene picture; and a fourth determining unit, configured to determine that the first or second video frame sequence contains an abrupt shot when the fourth feature distance is greater than the third preset threshold.
According to still another aspect of embodiments of the present application, there is also provided an apparatus for video material generation that utilizes respective units and modules in an apparatus for video content processing, the apparatus for video material generation including: a seventh obtaining unit, configured to obtain a positioning identifier of the transition time point in the video to be processed; and the second merging unit is used for merging the video frame sequence positioned before the positioning identifier and the video frame sequence positioned after the positioning identifier to generate a merged video material.
According to still another aspect of the embodiments of the present application, there is also provided an electronic device, which may be a server, a terminal, or a combination thereof, for implementing the method for processing video content described above.
Fig. 5 is a block diagram of an alternative electronic device according to an embodiment of the present application, as shown in fig. 5, including a processor 501, a communication interface 502, a memory 503, and a communication bus 504, where the processor 501, the communication interface 502, and the memory 503 are communicated with each other through the communication bus 504, where,
a memory 503 for storing a computer program;
the processor 501, when executing the computer program stored in the memory 503, implements the following steps:
acquiring a transition time point of a video to be processed;
acquiring a first video frame sequence and a second video frame sequence corresponding to the transition time point, where the first video frame sequence is the video frame sequence temporally left-adjacent to the transition time point and the second video frame sequence is the video frame sequence temporally right-adjacent to it;
obtaining a sequence group consisting of the first and second video frame sequences, calculating the luminance feature distance between every two video frames in the sequence group, and determining the feature extrema, where the feature extrema comprise the maximum and the minimum of the luminance feature distances;
and comparing the feature extrema against a first preset threshold, and merging the first and second video frame sequences based on the comparison result.
Alternatively, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include RAM and may also include non-volatile memory, such as at least one disk memory. Optionally, the memory may be at least one storage device located remotely from the processor.
As an example, as shown in fig. 5, the memory 503 may include, but is not limited to, a first obtaining unit 401, a second obtaining unit 402, a third obtaining unit 403, and a first merging unit 404 in the apparatus for processing video content. In addition, the apparatus may further include, but is not limited to, other module units in the video content processing apparatus, which is not described in this example again.
The processor may be a general-purpose processor, including but not limited to a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In addition, the electronic device further includes: and the display is used for displaying the result of the video content processing.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 5 is only illustrative. The device implementing the method of video content processing may be a terminal device such as a smartphone (e.g., an Android or iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 5 does not limit the structure of the electronic device; for example, the terminal device may include more or fewer components than shown in fig. 5 (e.g., a network interface or display device) or have a different configuration.
Those skilled in the art can understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of a terminal device. The program may be stored in a computer-readable storage medium, which may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, and the like.
According to still another aspect of the embodiments of the present application, a storage medium is also provided. Optionally, in this embodiment, the above storage medium may be used to store program code for executing the method of video content processing.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
acquiring a transition time point of a video to be processed;
acquiring a first video frame sequence and a second video frame sequence corresponding to the transition time point, wherein the first video frame sequence is the video frame sequence temporally adjacent to and preceding the transition time point, and the second video frame sequence is the video frame sequence temporally adjacent to and following the transition time point;
acquiring a sequence group consisting of the first video frame sequence and the second video frame sequence, calculating a brightness characteristic distance between every two video frames in the sequence group, and determining a characteristic extremum, wherein the characteristic extremum comprises the maximum value and the minimum value of the brightness characteristic distances;
and comparing the characteristic extremum with a first preset threshold, and merging the first video frame sequence and the second video frame sequence based on the comparison result.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, which are not described again here.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing program code, such as a USB flash drive, a ROM, a RAM, a removable hard disk, a magnetic disk, or an optical disk.
According to yet another aspect of the embodiments of the present application, there is also provided a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method steps of video content processing in any of the embodiments described above.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method of video content processing described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (13)

1. A method of video content processing, the method comprising:
acquiring a transition time point of a video to be processed;
acquiring a first video frame sequence and a second video frame sequence corresponding to the transition time point, wherein the first video frame sequence is the video frame sequence temporally adjacent to and preceding the transition time point, and the second video frame sequence is the video frame sequence temporally adjacent to and following the transition time point;
acquiring a sequence group consisting of the first video frame sequence and the second video frame sequence, calculating a brightness characteristic distance between every two video frames in the sequence group, and determining a characteristic extremum, wherein the characteristic extremum comprises the maximum value and the minimum value of the brightness characteristic distances;
and comparing the characteristic extremum with a first preset threshold, and merging the first video frame sequence and the second video frame sequence based on the comparison result.
2. The method of claim 1, wherein the comparing the characteristic extremum with the first preset threshold and merging the first video frame sequence and the second video frame sequence based on the comparison result comprises:
comparing the maximum value of the brightness characteristic distance with a first sub-threshold to obtain a first comparison result, wherein the first preset threshold comprises the first sub-threshold;
comparing the minimum value of the brightness characteristic distance with a second sub-threshold to obtain a second comparison result, wherein the first preset threshold comprises the second sub-threshold;
and determining, according to the first comparison result and the second comparison result, to merge the first video frame sequence with the second video frame sequence.
3. The method of claim 2, wherein before the determining, according to the first comparison result and the second comparison result, to merge the first video frame sequence with the second video frame sequence, the method further comprises:
obtaining the first comparison result under the condition that the maximum value of the brightness characteristic distance is smaller than the first sub-threshold; and obtaining the second comparison result under the condition that the minimum value of the brightness characteristic distance is smaller than the second sub-threshold.
4. The method of claim 2, wherein before the comparing the characteristic extremum with the first preset threshold, the method further comprises:
determining the channel type of a current scene picture;
and determining the first sub-threshold and the second sub-threshold according to the channel type, wherein the first sub-threshold and/or the second sub-threshold are obtained from a channel coefficient corresponding to the channel type.
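By way of non-limiting illustration of claim 4 only, one possible way to derive the two sub-thresholds from a channel coefficient is sketched below in Python; the channel table, coefficient values, and base thresholds are invented for this sketch and are not specified by the claim.

```python
# Hypothetical channel coefficients keyed by channel type; all values here
# are illustrative inventions, not taken from the application.
CHANNEL_COEFFICIENTS = {"variety": 1.2, "drama": 1.0, "animation": 0.8}


def sub_thresholds(channel_type, base_first=0.5, base_second=0.1):
    """Scale assumed base sub-thresholds by the channel coefficient of the
    current scene picture's channel type (defaulting to 1.0 if unknown)."""
    coefficient = CHANNEL_COEFFICIENTS.get(channel_type, 1.0)
    return base_first * coefficient, base_second * coefficient
```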
5. The method of claim 1, further comprising:
taking, as the transition time point, a time point at which a black-screen frame exists among all the video frames of the video to be processed, wherein, in the case that the first video frame sequence corresponding to the transition time point contains a gradual-change shot, the second video frame sequence corresponding to the transition time point contains an abrupt-change shot; or, in the case that the first video frame sequence corresponding to the transition time point contains an abrupt-change shot, the second video frame sequence corresponding to the transition time point contains a gradual-change shot.
6. The method of claim 5, wherein the manner of determining a gradual-change shot comprises:
acquiring a first characteristic distance between a starting frame and an ending frame of the first video frame sequence or the second video frame sequence;
acquiring any two video frames in the first video frame sequence or the second video frame sequence in the case that the first characteristic distance is greater than a second preset threshold, wherein the second preset threshold is obtained by calculating characteristic distances over pairwise combinations of the video frames in the first video frame sequence or the second video frame sequence, and the distance between the two acquired video frames is smaller than a preset value;
determining a second characteristic distance between the two video frames, and determining that the first video frame sequence or the second video frame sequence contains the gradual-change shot in the case that the second characteristic distance is smaller than a third preset threshold, wherein the third preset threshold is the product of the first characteristic distance and a first preset coefficient, and the first preset coefficient is a numerical value for quantitatively adjusting the first characteristic distance.
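By way of non-limiting illustration of claims 6 and 7 only, a possible Python reading is sketched below; it reuses the hypothetical luminance_feature helper from the earlier sketch, and both the adjacent-pair scan and the example coefficient value 0.2 are assumptions of the sketch.

```python
import numpy as np


def contains_gradual_change_shot(sequence, second_preset_threshold,
                                 first_preset_coefficient=0.2):
    """Endpoints far apart while neighbouring frames stay close suggests a
    gradual-change shot; 0.2 is an invented example coefficient."""
    # luminance_feature is the hypothetical helper from the earlier sketch.
    feats = [luminance_feature(f) for f in sequence]
    first_distance = float(np.linalg.norm(feats[0] - feats[-1]))
    if first_distance <= second_preset_threshold:
        return False
    third_preset_threshold = first_distance * first_preset_coefficient
    # Claim 7 narrows "any two video frames" to adjacent ones, so scan
    # neighbouring pairs for a second characteristic distance below threshold.
    return any(
        float(np.linalg.norm(a - b)) < third_preset_threshold
        for a, b in zip(feats, feats[1:])
    )
```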
7. The method of claim 6, wherein the acquiring any two video frames in the first video frame sequence or the second video frame sequence comprises:
acquiring any two adjacent video frames in the first video frame sequence or the second video frame sequence.
8. The method according to claim 6, wherein, in the case that the first characteristic distance is greater than the second preset threshold, the second preset threshold is obtained from the video frames in the first video frame sequence or the second video frame sequence in the following manner:
calculating a plurality of third characteristic distances from pairwise combinations of the video frames in the first video frame sequence or the second video frame sequence;
averaging the plurality of third characteristic distances to obtain the second preset threshold;
or,
acquiring a second preset coefficient, wherein the second preset coefficient is used for representing the weight of a shot change contained in the first video frame sequence or the second video frame sequence;
and determining the product of the average value and the second preset coefficient as the second preset threshold.
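By way of non-limiting illustration of claim 8 only, the two branches (a plain average, or the average weighted by the second preset coefficient) might be computed as follows; luminance_feature is the hypothetical helper from the earlier sketch.

```python
import itertools

import numpy as np


def second_preset_threshold(sequence, second_preset_coefficient=None):
    """Mean pairwise characteristic distance over the sequence, optionally
    weighted by a second preset coefficient, per the two branches of claim 8."""
    # luminance_feature is the hypothetical helper from the earlier sketch.
    feats = [luminance_feature(f) for f in sequence]
    third_distances = [float(np.linalg.norm(a - b))
                       for a, b in itertools.combinations(feats, 2)]
    average = sum(third_distances) / len(third_distances)
    if second_preset_coefficient is None:
        return average
    return average * second_preset_coefficient
```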
9. The method of claim 8, wherein the manner of determining an abrupt-change shot comprises:
acquiring a fourth characteristic distance between a key video frame in the first video frame sequence or the second video frame sequence and the frame immediately after or before the key video frame, wherein the key video frame is the last frame of a video clip at which the scene picture in the first video frame sequence or the second video frame sequence changes, or a black-screen frame at which the scene picture no longer exists;
comparing the fourth characteristic distance with a fourth preset threshold, wherein the fourth preset threshold is the product of the second preset threshold and the channel coefficient corresponding to the current scene picture;
determining that the first video frame sequence or the second video frame sequence contains the abrupt-change shot in the case that the fourth characteristic distance is greater than the fourth preset threshold.
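By way of non-limiting illustration of claim 9 only; how the key video frame is located is assumed to be given, and luminance_feature is the hypothetical helper from the earlier sketch.

```python
import numpy as np


def contains_abrupt_change_shot(sequence, key_index,
                                second_preset_threshold_value,
                                channel_coefficient=1.0):
    """A large jump between the key frame and its neighbour marks an
    abrupt-change shot; key_index is assumed to be located beforehand."""
    # luminance_feature is the hypothetical helper from the earlier sketch.
    feats = [luminance_feature(f) for f in sequence]
    # Prefer the frame after the key frame; fall back to the one before it.
    neighbour = key_index + 1 if key_index + 1 < len(sequence) else key_index - 1
    fourth_distance = float(np.linalg.norm(feats[key_index] - feats[neighbour]))
    fourth_preset_threshold = second_preset_threshold_value * channel_coefficient
    return fourth_distance > fourth_preset_threshold
```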
10. A method of video material generation, wherein the method of video content processing according to any one of claims 1 to 9 is used to perform the video material generation, the method comprising:
acquiring a positioning identifier of the transition time point in the video to be processed;
and merging the video frame sequence located before the positioning identifier with the video frame sequence located after the positioning identifier to generate merged video material.
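By way of non-limiting illustration of claim 10 only, treating the positioning identifier as a boundary between two in-memory frame sequences; the mp4v codec and frame rate below are assumptions of the sketch.

```python
import cv2


def generate_material(first_sequence, second_sequence, output_path, fps=25.0):
    """Concatenate the sequences on both sides of the positioning identifier
    and write the merged material; codec and fps are illustrative choices."""
    merged = first_sequence + second_sequence
    height, width = merged[0].shape[:2]
    writer = cv2.VideoWriter(output_path,
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in merged:
        writer.write(frame)
    writer.release()
```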
11. An apparatus for video content processing, the apparatus comprising:
a first obtaining unit, configured to obtain a transition time point of a video to be processed;
a second obtaining unit, configured to obtain a first video frame sequence and a second video frame sequence corresponding to the transition time point, wherein the first video frame sequence is the video frame sequence temporally adjacent to and preceding the transition time point, and the second video frame sequence is the video frame sequence temporally adjacent to and following the transition time point;
a third obtaining unit, configured to obtain a sequence group consisting of the first video frame sequence and the second video frame sequence, calculate a brightness characteristic distance between every two video frames in the sequence group, and determine a characteristic extremum, wherein the characteristic extremum comprises the maximum value and the minimum value of the brightness characteristic distances;
and a first merging unit, configured to compare the characteristic extremum with a first preset threshold and merge the first video frame sequence and the second video frame sequence based on the comparison result.
12. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other via the communication bus;
the memory for storing a computer program;
the processor for performing the method steps of any one of claims 1 to 10 by running the computer program stored on the memory.
13. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method steps of any one of claims 1 to 10 when executed.
CN202111452420.XA 2021-12-01 2021-12-01 Method and apparatus for video content processing, electronic device, and storage medium Pending CN114220048A (en)

Priority Applications (1)

Application Number: CN202111452420.XA | Priority Date: 2021-12-01 | Filing Date: 2021-12-01 | Title: Method and apparatus for video content processing, electronic device, and storage medium

Publications (1)

Publication Number: CN114220048A | Publication Date: 2022-03-22

Family ID: 80699240

Family Applications (1)

Application Number: CN202111452420.XA | Status: Pending | Title: Method and apparatus for video content processing, electronic device, and storage medium

Country Status (1)

Country: CN | Publication: CN114220048A (en)


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination