CN116156216A - Video processing method, device, electronic equipment and storage medium - Google Patents

Video processing method, device, electronic equipment and storage medium

Info

Publication number
CN116156216A
CN116156216A (application number CN202211605667.5A)
Authority
CN
China
Prior art keywords
video
processed
target
processing
processing task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211605667.5A
Other languages
Chinese (zh)
Inventor
李鸣
肖云
张奎
陈明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Future Tv Co ltd
Original Assignee
Future Tv Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Future Tv Co ltd filed Critical Future Tv Co ltd
Priority to CN202211605667.5A
Publication of CN116156216A
Legal status: Pending

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L67/00 Network arrangements or protocols for supporting network services or applications
                    • H04L67/01 Protocols
                        • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
                        • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
                            • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
                            • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
                                • H04N21/23418 Processing involving operations for analysing video streams, e.g. detecting features or characteristics
                                • H04N21/2343 Processing involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
                                    • H04N21/234363 Reformatting by altering the spatial resolution, e.g. for clients with a lower screen resolution
                                    • H04N21/234381 Reformatting by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
                • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a video processing method and apparatus, an electronic device, and a storage medium, and relates to the technical field of data processing. The method parses a video file to obtain basic information about the video to be processed and matches at least one target processing task to the video according to that basic information, so that each target processing task can be executed on the video in order to obtain a target video that meets the publishing standard. Because the basic information differs from one video to another, and the target processing tasks are determined from that information, each video can be individually matched to its own processing tasks. This departs from conventional video processing, in which every video runs the same fixed tasks and individualized handling of different videos cannot be satisfied, so the video processing effect is better.

Description

Video processing method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a video processing method, a video processing device, an electronic device, and a storage medium.
Background
In general, before a video supplied by a video provider can be published for playback on a large-screen terminal, the large-screen operator must perform secondary processing on it so that it meets the publishing requirements.
In the prior art, this secondary processing follows a fixed video processing flow. Because a fixed flow applies the same single treatment to every video, it cannot satisfy individualized processing requirements, and the video processing effect is poor.
Disclosure of Invention
The present invention aims to provide a video processing method, apparatus, electronic device, and storage medium that address the defect in the prior art that a fixed video processing flow cannot satisfy individualized processing requirements.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a video processing method, including:
analyzing a video file to be processed, and acquiring basic information of the video to be processed;
determining at least one target processing task according to the basic information of the video to be processed, and sequentially executing each target processing task on the video to be processed according to the order of the target processing tasks to obtain a target video;
and publishing the target video and the attribute information bound to the target video to a terminal, so that the terminal displays the target video on its display interface according to the attribute information.
Optionally, before the analyzing the video file to be processed and obtaining the basic information of the video to be processed, the method includes:
receiving a video file to be processed transmitted by a video provider through a preset file transmission interface, or pulling the video file to be processed from a file extraction link provided by the video provider through a preset file transmission engine, wherein the preset file transmission interface comprises: a transfer client, a software development kit (SDK) client, and a browser transfer plug-in;
and storing the acquired video file to be processed into a file sharing storage system.
Optionally, the parsing the video file to be processed, and acquiring basic information of the video to be processed, includes:
monitoring a newly added video file in the file sharing storage system, and acquiring the video file to be processed;
analyzing text information contained in the video file to be processed, and acquiring basic information of the video to be processed, wherein the basic information of the video to be processed comprises: video source, video format, video resolution, video bitrate, video content classification.
Optionally, the determining at least one target processing task according to the basic information of the video to be processed, and sequentially executing each processing task on the video to be processed according to the sequence of each target processing task, to obtain a target video, includes:
screening at least one target processing task matched with the video to be processed from a preset processing task sequence according to basic information of the video to be processed, and executing each target processing task on the video to be processed according to the sequence of each target processing task in the processing task sequence to obtain the target video, wherein the processing tasks in the processing task sequence comprise: video transcoding processing, video masking processing, picture frame extraction processing and intelligent identification processing.
Optionally, the screening, according to the basic information of the video to be processed, at least one target processing task matched with the video to be processed from a preset processing task sequence includes:
determining the video transcoding processing in the processing task sequence as a target processing task;
if the video source of the video to be processed meets a station-logo processing condition, determining the video masking processing in the processing task sequence as a target processing task;
if the video content classification of the video to be processed meets a classification condition, determining the picture frame extraction processing in the processing task sequence as a target processing task;
and if the video source of the video to be processed meets a preset source and the video content classification of the video to be processed meets a preset classification, determining the intelligent identification processing in the processing task sequence as a target processing task.
Optionally, the processing tasks further include a video parsing process; before the target video and the attribute information bound to the target video are published to the terminal, the method includes:
parsing the video to be processed, and acquiring cataloging information and tag information of the video to be processed;
and binding the cataloging information and the tag information of the video to be processed, as the attribute information, to the target video.
Optionally, the publishing the target video and the attribute information bound to the target video to a terminal includes:
and publishing the target video and the attribute information bound to the target video to the corresponding terminal according to the performance parameters and the network parameters of each terminal on which the video is to be played.
In a second aspect, embodiments of the present application further provide a video processing apparatus, including: the system comprises an acquisition module, an execution module and a release module;
the acquisition module is used for analyzing the video file to be processed and acquiring the basic information of the video to be processed;
the execution module is used for determining at least one target processing task according to the basic information of the video to be processed, and sequentially executing each processing task on the video to be processed according to the sequence of each target processing task to obtain a target video;
the issuing module is used for issuing the target video and the attribute information bound by the target video to the terminal, so that the terminal displays the target video on a display interface of the terminal according to the attribute information.
Optionally, the apparatus further comprises: a receiving module and a storage module;
the receiving module is configured to receive a video file to be processed transmitted by a video provider through a preset file transmission interface, or to pull the video file to be processed from a file extraction link provided by the video provider through a preset file transmission engine, where the preset file transmission interface comprises: a transfer client, a software development kit (SDK) client, and a browser transfer plug-in;
The storage module is used for storing the acquired video files to be processed into the file sharing storage system.
Optionally, the acquiring module is specifically configured to monitor a video file newly added in the file sharing storage system, and acquire the video file to be processed;
analyzing text information contained in the video file to be processed, and acquiring basic information of the video to be processed, wherein the basic information of the video to be processed comprises: video source, video format, video resolution, video bitrate, video content classification.
Optionally, the executing module is specifically configured to screen, according to basic information of a video to be processed, at least one target processing task that matches the video to be processed from a preset processing task sequence, and execute, according to an order of each target processing task in the processing task sequence, each target processing task on the video to be processed, to obtain the target video, where the processing tasks in the processing task sequence include: video transcoding processing, video masking processing, picture frame extraction processing and intelligent identification processing.
Optionally, the execution module is specifically configured to determine the video transcoding processing in the processing task sequence as a target processing task;
if the video source of the video to be processed meets a station-logo processing condition, determine the video masking processing in the processing task sequence as a target processing task;
if the video content classification of the video to be processed meets a classification condition, determine the picture frame extraction processing in the processing task sequence as a target processing task;
and if the video source of the video to be processed meets a preset source and the video content classification of the video to be processed meets a preset classification, determine the intelligent identification processing in the processing task sequence as a target processing task.
Optionally, the apparatus further comprises: a binding module;
the acquisition module is further configured to parse the video to be processed and acquire cataloging information and tag information of the video to be processed;
and the binding module is configured to bind the cataloging information and the tag information of the video to be processed, as the attribute information, to the target video.
Optionally, the publishing module is specifically configured to publish the target video and the attribute information bound to the target video to the corresponding terminal according to the performance parameters and the network parameters of each terminal on which the video is to be played.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium, and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate over the bus, and the processor executes the machine-readable instructions to perform the steps of the video processing method provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the video processing method as provided in the first aspect.
The beneficial effects of this application are:
the method can match at least one target processing task corresponding to the video to be processed according to the basic information, so that each processing task can be sequentially executed on the video to be processed according to the execution sequence of each target processing task, and the target video meeting the release standard is obtained. Because the basic information of different videos to be processed is different and the target processing tasks are determined according to the basic information of the videos to be processed, the different videos to be processed can be individually matched with the corresponding target processing tasks, the situation that in the traditional video processing, all videos execute the same target processing tasks and the individual processing of different videos cannot be met is broken, and therefore the video processing effect is better.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting the scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic architecture diagram of a video processing system according to an embodiment of the present application;
fig. 2 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a second flowchart of a video processing method according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a video processing method according to an embodiment of the present application;
fig. 5 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 6 is a flowchart fifth of a video processing method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a video processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the accompanying drawings in the present application are only for the purpose of illustration and description, and are not intended to limit the protection scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this application, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to the flow diagrams and one or more operations may be removed from the flow diagrams as directed by those skilled in the art.
In addition, the described embodiments are only some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in the embodiments of the present application to indicate the presence of the features stated hereinafter, but not to exclude the addition of other features.
Fig. 1 is a schematic architecture diagram of a video processing system according to an embodiment of the present application. As shown in Fig. 1, the video processing system may include a data access module, a data storage module, a video processing module, and a content operation module. These modules may be deployed on separate servers or, when one server's processing performance is strong enough, on the same server; the method steps below are carried out cooperatively across the modules.
The data access module may include an upload module and a download module. The upload module exposes a file transmission interface so that video files uploaded by a video provider can be received through it; the download module can automatically pull video files from a file extraction link provided by the video provider via the file transfer engine.
The data access module stores the video files obtained from the various video providers in the data storage module.
The video processing module may include a video transcoding service, a video masking service, a picture frame extraction service, and an intelligent identification service. It monitors file storage actions in the data storage module in real time: as soon as a video file is stored, the module acquires it, matches the services corresponding to that video from the available processing services according to the basic information obtained by parsing, and executes those services in order to complete the secondary processing of the video.
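The storage-monitoring behavior described above can be sketched in Python as a simple polling watcher over the shared storage directory. This is a minimal illustration under assumptions of mine: the patent does not specify the monitoring mechanism, and the names `scan_new_files` and `watch` are hypothetical.

```python
import time
from pathlib import Path

def scan_new_files(storage_dir, seen):
    """Return files in the shared storage directory not yet seen, updating `seen` in place."""
    new = []
    for p in sorted(Path(storage_dir).iterdir()):
        if p.is_file() and p.name not in seen:
            seen.add(p.name)
            new.append(p)
    return new

def watch(storage_dir, handle, poll_seconds=1.0):
    """Poll shared storage and hand each newly stored video file to the pipeline."""
    seen = set()
    scan_new_files(storage_dir, seen)  # files already present are not treated as new
    while True:
        for path in scan_new_files(storage_dir, seen):
            handle(path)  # e.g. parse basic info, then dispatch the processing tasks
        time.sleep(poll_seconds)
```

A production system would more likely use filesystem notifications or storage-event hooks than polling; the sketch only shows the "acquire each file exactly once as it arrives" contract.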
The content operation module binds the cataloging information and tag information of the processed video and, once binding is complete, pushes the video to the large-screen end for display.
This solution provides a complete video processing flow from which individualized processing services can be matched to each video to be processed according to its basic information, realizing individualized processing under different requirements and improving the video processing effect.
The following describes the steps of the present solution in detail by means of specific embodiments, and different steps may be correspondingly performed by different modules in the video processing system.
Fig. 2 is a flowchart of a video processing method according to an embodiment of the present application; as shown in fig. 2, the method may include:
s101, analyzing a video file to be processed, and acquiring basic information of the video to be processed.
The video file to be processed may be provided by a video provider, which may refer to a publisher of the video. Typically, the video provider provides a video file, and the video file may include the video and the basic information of the video, and by parsing the video file to be processed, the basic information of the video to be processed may be obtained from the video file.
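As an illustration of S101, the sketch below parses simple `key: value` text metadata into a basic-information record. The text layout and key names are assumptions of mine; the patent lists which fields the basic information contains (source, format, resolution, bitrate, content classification) but not how the accompanying text information is formatted.

```python
from dataclasses import dataclass

@dataclass
class BasicInfo:
    source: str
    video_format: str
    resolution: str
    bitrate: str
    content_class: str

def parse_basic_info(text):
    """Parse `key: value` lines of textual metadata into a BasicInfo record.
    The layout and key names are assumed for illustration."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip().lower()] = value.strip()
    return BasicInfo(
        source=fields.get("source", ""),
        video_format=fields.get("format", ""),
        resolution=fields.get("resolution", ""),
        bitrate=fields.get("bitrate", ""),
        content_class=fields.get("classification", ""),
    )
```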
S102, determining at least one target processing task according to the basic information of the video to be processed, and sequentially executing the processing tasks on the video to be processed according to the sequence of the target processing tasks to obtain the target video.
In this embodiment, at least one corresponding target processing task may be matched to the video to be processed according to its basic information. Because the basic information is parsed from the video file itself, it differs between videos, and since the target processing tasks are determined from that information, each video can be individually matched to its own tasks. This departs from conventional video processing, in which all videos execute the same target processing tasks and individualized processing of different videos cannot be satisfied.
Optionally, based on the determined target processing tasks, each target processing task may be sequentially executed on the video to be processed according to the execution sequence of each target processing task, so as to obtain a target video.
It should be noted that once the target processing tasks are determined, their execution order can be determined as well. When a new target processing task is introduced, it may be positioned between any existing target processing tasks, changing the overall execution order. In other words, the number, types, and execution order of the target processing tasks supported by the video processing system can be adjusted continuously as the system is updated and optimized.
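The screening of S102 (transcoding always selected; masking, frame extraction, and intelligent identification selected conditionally, executed in the preset sequence order) can be sketched as follows. The condition sets passed as parameters are illustrative stand-ins: the patent leaves the concrete station-logo, classification, and preset-source conditions unspecified.

```python
# Preset processing task sequence; execution follows this order (per the claims).
TASK_SEQUENCE = ["transcode", "mask", "frame_extract", "smart_identify"]

def match_target_tasks(info, logo_sources, frame_classes, smart_sources, smart_classes):
    """Screen the target tasks for one video from the preset sequence.

    `info` is a dict with `source` and `classification`; the four condition
    sets are illustrative parameters, not taken from the patent."""
    selected = {"transcode"}  # video transcoding is always a target task
    if info["source"] in logo_sources:            # station-logo processing condition
        selected.add("mask")
    if info["classification"] in frame_classes:   # classification condition
        selected.add("frame_extract")
    if info["source"] in smart_sources and info["classification"] in smart_classes:
        selected.add("smart_identify")            # preset source AND preset classification
    return [t for t in TASK_SEQUENCE if t in selected]  # preserve sequence order
```

Returning the selection filtered through `TASK_SEQUENCE` keeps the execution order identical to the preset sequence regardless of which conditions fired.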
S103, publishing the target video and the attribute information bound to the target video to the terminal, so that the terminal displays the target video on its display interface according to the attribute information.
After the video to be processed has undergone secondary processing as described above, the resulting target video generally meets the standard for publication and display. The target video and its bound attribute information can then be published to the terminal, so that the terminal plays the target video on its display interface according to that attribute information.
The attribute information of the target video may indicate its layout when displayed; that is, when the target video is shown on the display interface, it is laid out in the manner indicated by the bound attribute information.
Optionally, the terminal here may refer to a large-screen end other than a mobile phone, for example a projector or a television box.
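One plausible reading of publishing "according to the performance parameters and network parameters of each terminal" (as in the optional claim) is selecting a suitable rendition of the target video per terminal. The sketch below is hypothetical: the field names `max_resolution` and `bandwidth_kbps` and the selection policy are my assumptions, not from the patent.

```python
def select_rendition(renditions, terminal):
    """Pick the rendition best suited to one terminal's performance and network
    parameters; fall back to the lightest rendition if none fits."""
    fits = [r for r in renditions
            if r["height"] <= terminal["max_resolution"]
            and r["bitrate_kbps"] <= terminal["bandwidth_kbps"]]
    if not fits:
        return min(renditions, key=lambda r: r["bitrate_kbps"])
    return max(fits, key=lambda r: (r["height"], r["bitrate_kbps"]))
```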
In summary, this embodiment provides a video processing method: by parsing and obtaining the basic information of the video to be processed, at least one corresponding target processing task can be matched according to that basic information, so that each target processing task can be executed on the video in order, yielding a target video that meets the publishing standard. Because the basic information differs from one video to another, and the target processing tasks are determined from it, each video can be individually matched to its own processing tasks. This departs from conventional video processing, in which every video runs the same target processing tasks and individualized handling of different videos cannot be satisfied, so the video processing effect is better.
Fig. 3 is a second flowchart of a video processing method according to an embodiment of the present application. Optionally, before step S101 (parsing the video file to be processed and acquiring the basic information of the video to be processed), the method may include:
s201, receiving a video file to be processed transmitted by a video provider through a preset file transmission interface, or pulling the video file to be processed from a file extraction link provided by the video provider through a preset file transmission engine, wherein the preset file transmission interface comprises: the system comprises a transmission client, a software development kit client and a browser transmission plug-in.
Generally, file transmission can be broadly divided into two types. The first is technical docking: a video provider with weak technical capability accesses the system according to the technical open standard provided by the video processing system and transmits video files directly through FTP, HTTP, or similar protocols; alternatively, a provider with stronger technical capability offers a docking standard of its own, so that the video processing system can fetch video files from the provider's system. However, because the docking with each video provider follows no uniform standard, every new docking consumes manpower and material resources. The second is manual docking: operators of both sides communicate offline and transmit files directly through instant-messaging software, through shared cloud-disk links (such as Aliyun Drive or Baidu Netdisk) downloaded by the operator, or by mailing a portable hard disk that the operator receives and processes.
In this embodiment, two sub-modules may be configured in the data access module of the video processing system to satisfy file transfers with different video providers, namely a download module (active pull) and an upload module (passive receive).
Optionally, the download module internally packages download capabilities compatible with various formats and transmission modes. Video file formats commonly used in the market, such as TS, MP4, and HLS, can be pulled automatically by a self-developed high-speed transmission engine as long as a path is available. Meanwhile, a self-developed extraction plug-in can support automatic extraction of files from a number of video websites and network-disk applications and automatically pull them into the data storage module. The video provider can send a video file extraction link through a network-disk client, an HTTP link, and the like, and the download module automatically pulls and acquires the video file according to the file extraction link provided by the video provider.
The upload module may include a transmission server side for receiving, checking, and merging file information streams, and externally provides a self-developed transmission client/SDK/browser transmission plug-in, dispensing with cumbersome transmission methods and making user operation as convenient as possible. File transmission efficiency is further improved by the self-developed high-speed transmission engine (using the UDP protocol and slice-based transmission). In this way, the user can send video files through file transmission interfaces such as an FTP client or the self-developed SDK client.
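The receiving, checking, and merging of slice-transmitted files can be sketched roughly as follows. This is a minimal illustration, not the patent's actual protocol: the integer slice indexing and MD5 integrity check are assumptions added for the example.

```python
import hashlib

def merge_slices(slices, total, expected_md5):
    """Reassemble out-of-order file slices and verify integrity.

    slices: dict mapping slice index -> bytes, total: expected slice count.
    Raises ValueError if slices are missing or the checksum fails, so the
    server side can request retransmission.
    """
    if set(slices) != set(range(total)):
        missing = sorted(set(range(total)) - set(slices))
        raise ValueError(f"missing slices: {missing}")
    data = b"".join(slices[i] for i in range(total))  # restore original order
    if hashlib.md5(data).hexdigest() != expected_md5:
        raise ValueError("checksum mismatch, request retransmission")
    return data
```

In a real UDP-based engine the slices would arrive asynchronously; this sketch only shows the merge-and-check step once all slices are buffered.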
The two file acquisition modes described above can satisfy the file transmission requirements of almost any video provider.
S202, storing the acquired video file to be processed into a file sharing storage system.
Alternatively, video files transmitted by different video providers may all be stored immediately into a file-sharing storage system, which may be a specific example of the data storage module described above, to await secondary processing.
Fig. 4 is a flowchart illustrating a video processing method according to an embodiment of the present application; optionally, in step S101, the parsing the video file to be processed to obtain the basic information of the video to be processed may include:
S401, monitoring newly added video files in the file sharing storage system to obtain the video file to be processed.
In some embodiments, the video processing module in the video processing system may monitor the file storage actions of the data storage module in real time. When a video file uploaded by a video provider has been completely stored in the file sharing storage system, it is immediately detected by the video processing module and obtained as the video to be processed.
That is, whenever a new video file is added to the file sharing storage system, the video processing module detects it immediately and obtains it as the video file currently to be processed.
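The monitoring of newly added files can be illustrated with a simple polling sketch. This is a hypothetical stdlib-only stand-in for the real event-driven monitoring; the function name and the set of watched file extensions are assumptions.

```python
import os

def detect_new_files(storage_dir, seen):
    """Return paths of video files that appeared since the last poll.

    `seen` is a mutable set of filenames already handed to the
    processing module; it is updated in place.
    """
    current = {f for f in os.listdir(storage_dir)
               if f.endswith((".ts", ".mp4", ".m3u8"))}
    new_files = sorted(current - seen)
    seen.update(current)
    return [os.path.join(storage_dir, f) for f in new_files]
```

Each new path returned would then be handed to the parsing step (S402) as the current video file to be processed.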
S402, parsing text information contained in the video file to be processed to acquire the basic information of the video to be processed, where the basic information of the video to be processed includes: video source, video format, video resolution, video bitrate, and video content classification.
Optionally, the video file to be processed may be parsed to obtain text information contained in the video file to be processed, where the text information is used as basic information of the video to be processed. The text information contained in the video file to be processed may be edited by the video provider and carried in the video file.
In this embodiment, the basic information of the video to be processed may include, but is not limited to, the following several information: video source, video format, video resolution, video bitrate, video content classification.
The video source may, for example, be an information feed or a similar source; the video content classification may be, for example: variety, sports, education, and the like.
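Parsing the provider-edited text information into the five basic-information fields might look like the sketch below. Representing the text as JSON and the exact field names are assumptions for illustration; the patent does not specify the carrier format.

```python
import json

# The five basic-information fields listed in S402 (names assumed).
REQUIRED = ("video_source", "video_format", "video_resolution",
            "video_bitrate", "video_content_classification")

def parse_basic_info(text):
    """Parse provider-supplied text metadata into a basic-information dict."""
    info = json.loads(text)
    missing = [k for k in REQUIRED if k not in info]
    if missing:
        raise ValueError(f"incomplete basic information: {missing}")
    return info
```

The resulting dict is what the later task-matching step would consume.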
Optionally, in step S102, determining at least one target processing task according to the basic information of the video to be processed, and sequentially executing each processing task on the video to be processed according to the order of the target processing tasks to obtain the target video, may include: screening, from a preset processing task sequence, at least one target processing task matching the video to be processed according to the basic information of the video to be processed, and executing each target processing task on the video to be processed according to the order of the target processing tasks in the processing task sequence to obtain the target video, where the processing tasks in the processing task sequence include: video transcoding processing, video masking processing, picture frame extraction processing, and intelligent identification processing.
In one implementation manner, the video processing module may include a preset processing task sequence, where the processing task sequence may include a preset plurality of processing tasks sequentially executed according to a sequence, and after the video processing module parses the basic information of the video to be processed, the video processing module may screen at least one target processing task matched with the video to be processed from the processing task sequence according to the basic information.
Based on the matched target processing tasks, the video to be processed can be subjected to secondary processing by different target processing tasks sequentially according to the task execution sequence.
In one mode, each target processing task can be matched at one time according to the basic information, each target processing task is formed into a complete processing flow according to the execution sequence of each target processing task, the video to be processed is used as input data of the processing flow, and the output of the processing flow is the target video.
In another mode, each target processing task can be sequentially matched according to the basic information, for example, a first target processing task is matched and executed to obtain a first processing result, a second target processing task is matched, the first processing result is input into the second target processing task to be processed to obtain a second processing result, and the like until the last target processing task is matched and executed to obtain the target video.
Optionally, the processing tasks in the processing task sequence in the present embodiment include, but are not limited to: video transcoding processing, video masking processing, picture frame extraction processing and intelligent identification processing.
When the types and the number of the processing tasks are changed, the execution sequence of each processing task is also changed, so that more processing tasks can be flexibly added according to the actual video processing requirements, and the execution sequence of each processing task can be flexibly adjusted.
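The screening of target tasks from a fixed-order task sequence can be sketched as follows. The task names, the matching conditions, and the example sets of sources and classifications are illustrative assumptions mirroring steps S501–S504 below, not values from the patent.

```python
# Preset processing task sequence; execution order is fixed (assumed names).
TASK_SEQUENCE = ["transcode", "logo_mask", "frame_extract", "smart_identify"]

LOGO_SOURCES = {"source_x"}                      # sources requiring logo masking (assumed)
FRAME_CLASSES = {"tv_series", "movie", "variety"}  # classes needing seek thumbnails
TRUSTED_SOURCES = {"self_produced"}              # sources exempt from smart identification

def match_target_tasks(info):
    """Screen the target processing tasks for one video from its basic info."""
    tasks = {"transcode"}  # every video is transcoded (S501)
    if info["video_source"] in LOGO_SOURCES:
        tasks.add("logo_mask")                   # S502
    if info["video_content_classification"] in FRAME_CLASSES:
        tasks.add("frame_extract")               # S503
    if (info["video_source"] not in TRUSTED_SOURCES
            and info["video_content_classification"] in {"news", "information"}):
        tasks.add("smart_identify")              # S504
    # Preserve the order defined by the preset task sequence.
    return [t for t in TASK_SEQUENCE if t in tasks]
```

The returned list is then executed in order, each task consuming the previous task's output, matching the pipeline described above.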
Fig. 5 is a flowchart of a video processing method according to an embodiment of the present application; optionally, in the step, screening, according to the basic information of the video to be processed, at least one target processing task matched with the video to be processed from a preset processing task sequence may include:
S501, determining the video transcoding processing in the processing task sequence as a target processing task.
Optionally, for any video to be processed, transcoding is first required, and the video transcoding task in the processing task sequence may be determined as a target processing task.
Video transcoding provides support for the currently popular multi-screen interactive video-on-demand and programming services. Proprietary encoding and decoding technology makes the video transcoding process in this embodiment compatible with all mainstream file formats on the market, while also supporting the transcoding of DVD/BD disc formats, mainstream camcorder formats, and mainstream non-linear-editing formats, with transcoding speed and throughput far ahead of the industry. Technical capabilities such as high-speed transcoding, full-format support, low bitrate with high image quality, 4K/8K HDR, and automatic batch transcoding can improve the content production efficiency of enterprises and maximize the media value of the content, covering media terminals such as computer screens, television screens, mobile phone screens, tablets, and dedicated advertising screens over network channels such as wired, wireless, and 4G/5G mobile Internet.
S502, if the video source of the video to be processed meets the station-logo processing condition, determining the video masking processing in the processing task sequence as a target processing task.
Optionally, whether the video source meets the station logo processing condition can be judged according to the video source in the basic information of the video to be processed, if so, the video masking processing in the processing task sequence can be determined as a target processing task.
The video masking processing can provide dynamic logo recognition, including logo identification, dynamic tracking, real-time logo removal, and restoration of the covered part of the picture, realizing intelligent image filling. It can apply real-time watermarks to live (carousel) channels, where the watermark includes a NewTV logo or a co-branded logo. It can also remove content subtitles and restore the picture where the subtitles were removed, covering subtitles intelligently.
S503, if the video content classification of the video to be processed meets the classification condition, determining the picture frame extraction processing in the processing task sequence as a target processing task.
Whether the video classification to be processed meets the classification condition can be judged according to the video content classification in the basic information of the video to be processed, and if the video classification to be processed meets the classification condition, the picture frame extraction processing in the processing task sequence is determined to be a target processing task.
The picture frame extraction processing can be secondarily developed based on the open-source video processing tool FFmpeg and an open-source image processing framework, and packaged as an independent video picture processing tool that extracts picture frames and automatically crops and compresses them, to be used for showing seek thumbnails when the player fast-forwards and for acquiring video cover posters.
S504, if the video source of the video to be processed meets the preset source and the video content classification of the video to be processed meets the preset classification, determining the intelligent recognition processing in the processing task sequence as a target processing task.
According to the video source and the video content classification in the basic information of the video to be processed, whether the video to be processed simultaneously meets the preset source and the preset classification can be judged, and if so, the intelligent identification processing in the processing task sequence can be determined as a target processing task.
The intelligent identification processing may consist of several basic algorithms: face recognition (extracting faces from videos and pictures and identifying identity through comparison), OCR (extracting text from videos and pictures), ASR (converting speech into text), and NLP (natural language processing: text semantic recognition, keyword extraction, and the like). The system can preset a tag library (including classification and tag definitions) according to different industry scenarios.
The intelligent identification processing can be used for intelligently analyzing the video content of an untrusted source, matching a sensitive character library, identifying illegal content in the video and processing the illegal content.
In addition, the intelligent identification processing can be used to identify and acquire arbitrary information from the video to be processed. For example, for a television series, the segments featuring a target person can be identified in each episode, so that a viewer can watch only the segments relevant to that person. It may also identify actor information in the video for presentation in a cast list, and so on.
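Turning per-frame recognition results into watchable segments of a target person could, for instance, be done as follows. This is a simplified sketch: real face-recognition output would carry more than one name per frame, and the frame-rate-based timing is an assumption.

```python
def person_segments(frames, fps, target):
    """Collapse per-frame recognition results into (start, end) second ranges
    where the target person appears.

    frames: list of recognized person names (or None) per frame, in order.
    """
    segments, start = [], None
    for i, who in enumerate(frames):
        if who == target and start is None:
            start = i                       # segment opens
        elif who != target and start is not None:
            segments.append((start / fps, i / fps))  # segment closes
            start = None
    if start is not None:                   # segment runs to end of video
        segments.append((start / fps, len(frames) / fps))
    return segments
```

A player could then jump between the returned ranges to show only the target person's scenes.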
In one implementation manner, the target processing tasks determined in the steps S501 to S504 may be sequentially executed to perform secondary processing on the video to be processed.
First: video transcoding processing can be performed on the video to be processed according to the video format, video resolution, and video bitrate in its basic information, transcoding it into a first processed video under at least one standard format, where each standard format corresponds to a different combination of video format, video resolution, and video bitrate.
In general, several different standard formats may be set for the video transcoding operation, for example a 4M-bitrate TS format and a 1.8M-bitrate HLS format. According to the different standard formats, one video to be processed can be transcoded into first processed videos in a plurality of different standard formats; that is, one video to be processed may be processed into as many first processed videos as there are configured standard formats.
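Building one FFmpeg transcode command per configured standard format can be sketched as below. The two profiles mirror the 4M-TS / 1.8M-HLS example above, but the exact encoder flags are simplified assumptions rather than the embodiment's actual parameters.

```python
# Standard-format profiles (bitrates and container choices assumed).
PROFILES = {
    "4M-TS":    {"bitrate": "4000k", "ext": "ts",   "extra": []},
    "1.8M-HLS": {"bitrate": "1800k", "ext": "m3u8", "extra": ["-hls_time", "6"]},
}

def transcode_commands(src):
    """Build one FFmpeg argv per standard format for the given source file."""
    cmds = []
    for name, p in PROFILES.items():
        out = f"{src.rsplit('.', 1)[0]}_{name}.{p['ext']}"
        cmds.append(["ffmpeg", "-y", "-i", src,
                     "-b:v", p["bitrate"], *p["extra"], out])
    return cmds
```

Running all returned commands yields one first processed video per standard format, as described above.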
Second: based on the first processed video under each standard format, video masking processing can be performed on the first processed video under each standard format according to the video source of the video to be processed, to obtain each second processed video.
When the video source of the video to be processed meets the station caption processing conditions, namely the set video source needing station caption processing, the station caption processing can be performed.
The video mark masking processing may be deleting the station mark in the first processed video under each standard format, or replacing the station mark with a preset mark.
According to the corresponding relation between the pre-created video source and the station caption attribute, the display position of the station caption is determined from the first processed video in each standard format, and then the station caption in the display position is processed correspondingly.
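The correspondence between a video source and its station-logo display position, and the resulting FFmpeg `delogo` filter string, could be sketched as follows. The coordinates and source name are made up for illustration; a real deployment would load the mapping from the pre-created correspondence table.

```python
# Pre-created mapping from video source to logo region: x, y, w, h (assumed values).
LOGO_POSITIONS = {"source_x": (20, 20, 160, 60)}

def delogo_filter(video_source):
    """Return the FFmpeg delogo filter string for a source, or None if the
    source does not meet the station-logo processing condition."""
    pos = LOGO_POSITIONS.get(video_source)
    if pos is None:
        return None
    x, y, w, h = pos
    return f"delogo=x={x}:y={y}:w={w}:h={h}"
```

The returned string would be passed to FFmpeg via `-vf` to blur out and fill the logo region in each first processed video.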
Of course, the video masking process is not only to process the station logo, but also to process some offending marks in the video.
Third: based on each second processed video, picture frame extraction processing can be performed on each second processed video according to the video content classification of the video to be processed, to obtain each third processed video.
The picture frame extraction processing in the embodiment can be used for extracting the thumbnail during video fast forward and extracting the video poster.
When used for video poster extraction, key frames can be extracted from the second processed video as the poster cover of the video, regardless of the video content classification of the video to be processed.
When the method is used for extracting the thumbnail during video fast forwarding, whether the video content classification of the video to be processed meets the classification condition can be judged, and when the classification condition is met, the thumbnail is extracted from the second processed video.
For example: when video content is classified as: and when the types of television drama, movies, variety and the like are met, extracting the thumbnail from the second processed video.
Optionally, frame images at some moments in the video can be extracted according to preset time intervals, and automatic shearing compression is performed on the extracted frame images at all moments to be used as seek diagram display when the player fast forwards.
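Interval-based frame extraction for the seek diagram can be expressed as a single FFmpeg command using the `fps` and `scale` filters. This is a sketch; the interval and thumbnail-width defaults are assumptions, not values from the embodiment.

```python
def seek_thumbnail_cmd(src, out_dir, interval_s=10, width=320):
    """Build an FFmpeg argv that extracts one frame every `interval_s`
    seconds, scaled down to `width` px wide for the player's seek preview."""
    return ["ffmpeg", "-i", src,
            "-vf", f"fps=1/{interval_s},scale={width}:-1",
            f"{out_dir}/seek_%05d.jpg"]
```

The numbered JPEGs it produces correspond to the automatically cropped and compressed frame images shown while the player fast-forwards.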
Conversely, when the video content is classified as a type such as information or news, the classification of the video to be processed does not meet the condition, and thumbnails are not extracted from the second processed video.
Fourth: based on each third processed video, intelligent identification processing can be performed on each third processed video according to the video source and video content classification of the video to be processed, to generate the target video under each standard format.
In this embodiment, the credibility of the video to be processed may be determined by combining the video source and the video content classification, and when the source of the video to be processed is not credible and belongs to the unsafe type of video, the content corresponding to the tag information may be identified from the third processed video according to the corresponding relationship between the pre-created video content classification and the tag information, and the identified content may be processed.
For example: aiming at the video of a power supply or a third party untrusted provider, the corresponding tag information of the video to be processed can be determined according to the content classification of the video, and the tag information is assumed to be: the a character can be identified from the third processed video and deleted from the third processed video.
Taking a specific example: after the video to be processed enters the video processing module, it is transcoded into two output paths (a 4M-bitrate TS format and a 1.8M-bitrate HLS format) by the video transcoding processing, obtaining a first processed video under each standard format. After transcoding is completed, because the video source of the video to be processed is "XX source", which meets the station-logo processing condition, the video masking processing removes the station logo from each first processed video, obtaining each second processed video. Because the content of the video to be processed is classified as a television drama, the picture frame extraction processing extracts picture frames from each second processed video and automatically crops and compresses them for display as a seek diagram when the player fast-forwards, obtaining each third processed video. Finally, because the video to be processed meets the intelligent identification condition, the intelligent identification processing automatically identifies sensitive persons in each third processed video and deletes the identified target persons from it, generating the target video under each standard format.
Optionally, in some embodiments, the video processing system may further include a content technology module, by which the target video processed by the video processing module can be reviewed again. After production processing is completed, the content technology module can decode and analyze the content of the target video frame by frame, extract suspicious problem points (such as screen artifacts, static frames, frame skips, popping sounds, and audio-video desynchronization), and score them, to assist an operator in manually confirming the publishing link.
When the technical score given by the content technology module is low, the operator can cancel the release of the target video and its bound attribute information to the terminal. Of course, because some problems such as screen artifacts and static frames are imperceptible to the user, normal release can also continue without considering the technical score.
Fig. 6 is a fifth flowchart of a video processing method according to an embodiment of the present application; optionally, the processing tasks may further include video parsing processing. Before publishing the target video and the attribute information bound to the target video to the terminal in step S103, the method may include:
S601, parsing the video to be processed to acquire catalogue information and label information of the video to be processed.
The basic information of the video may be edited by the video provider and added to the video file, and the catalogue information may be obtained by parsing the video. The catalogue information may include director information, actor information, a video synopsis, the year of the video, and the like, part of which may likewise be edited by the video provider and part obtained by parsing the video.
Persons, objects, and the like in the video can carry label information; the names and genders of persons and the types of objects can serve as label information of the video, and the content classification of the video can also serve as label information.
S602, binding the catalogue information and label information of the video to be processed, as attribute information, with the target video.
Typically, when displaying video on an interface, the video may be displayed according to a certain typesetting, for example: the video can show the poster cover, the name of the main actor can be shown in the poster, a series of catalogue information such as brief introduction, shooting time and the like of the video can be shown at a preset position below the cover, and the video can be shown under the corresponding classification according to the content classification in the label information. For example: variety, movies, television shows, etc.
Optionally, the attribute information of the target video, that is, the catalogue information and label information obtained by parsing, can determine how the target video is displayed and typeset once it is bound to the target video.
Optionally, in step S103, publishing the target video and the attribute information bound to the target video to the terminal may include: and according to the performance parameters and the network parameters of each terminal to be played, the target video and the attribute information bound by the target video are issued to the corresponding terminal.
Based on the foregoing embodiments, a video to be processed may be processed into target video in a plurality of standard formats to satisfy the presentation at different large screen ends.
Optionally, the video format matched with the terminal can be determined according to the performance parameter and the network parameter of the terminal, so that the target video in the standard format matched with the terminal is sent to the corresponding terminal for display.
The performance parameters of the terminal may include the video bitrate formats and resolutions supported by the terminal, and the like. For example, some terminals support 4K playback of video while others do not, and some terminals support ultra-high-definition playback while others only support high-definition playback.
The network parameters may include parameters such as network speed size, network stability, etc.
The video formats matched with the terminals can be determined by combining the performance parameters and the network parameters of the terminals, so that the target video has good display effect on different terminals.
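Matching a prepared standard format to a terminal's performance and network parameters might reduce to a rule like the following. The thresholds and profile names are illustrative assumptions tied to the earlier 4M-TS / 1.8M-HLS example, not values from the patent.

```python
def select_profile(max_bitrate_kbps, supports_4k, bandwidth_kbps):
    """Pick the highest-quality prepared format the terminal can actually play,
    falling back to the lighter HLS profile otherwise."""
    if supports_4k and max_bitrate_kbps >= 4000 and bandwidth_kbps >= 4000:
        return "4M-TS"
    return "1.8M-HLS"
```

The publishing module would then send only the target video in the selected standard format, together with its bound attribute information, to that terminal.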
In summary, the embodiment of the present application provides a video processing method. By parsing a video file to acquire the basic information of the video to be processed, at least one target processing task corresponding to the video to be processed can be matched according to that basic information, so that each target processing task can be executed on the video to be processed in sequence to obtain a target video meeting the release standard. Because the basic information of different videos to be processed differs, and the target processing tasks are determined from that basic information, each video to be processed can be individually matched with its own target processing tasks. This breaks with traditional video processing, in which all videos execute the same processing tasks and the individualized processing of different videos cannot be satisfied, so the video processing effect is better.
The following describes a device, an apparatus, a storage medium, etc. for executing the video processing method provided in the present application, and specific implementation processes and technical effects of the device and the apparatus are referred to above, which are not described in detail below.
Fig. 7 is a schematic diagram of a video processing apparatus according to an embodiment of the present application, where functions implemented by the video processing apparatus correspond to steps executed by the above method. The device can be understood as a server cluster, and the video processing system of the method can be integrated and deployed with the server cluster. As shown in fig. 7, the apparatus may include: acquisition module 710, execution module 720, publishing module 730;
the obtaining module 710 is configured to parse the video file to be processed, and obtain basic information of the video to be processed;
the execution module 720 is configured to determine at least one target processing task according to the basic information of the video to be processed, and sequentially execute each processing task on the video to be processed according to the sequence of each target processing task, so as to obtain a target video;
and the publishing module 730 is configured to publish the target video and the attribute information bound to the target video to the terminal, so that the terminal displays the target video on a display interface of the terminal according to the attribute information.
Optionally, the apparatus further comprises: a receiving module and a storage module;
the receiving module is used for receiving the video file to be processed transmitted by the video provider through a preset file transmission interface, or pulling the video file to be processed from a file extraction link provided by the video provider through a preset file transmission engine, and the preset file transmission interface comprises: the method comprises the steps of transmitting a client, a software development kit client and a browser transmission plug-in;
And the storage module is used for storing the acquired video file to be processed into the file sharing storage system.
Optionally, the obtaining module 710 is specifically configured to monitor a newly added video file in the file sharing storage system, and obtain a video file to be processed;
analyzing text information contained in a video file to be processed, and acquiring basic information of the video to be processed, wherein the basic information of the video to be processed comprises: video source, video format, video resolution, video bitrate, video content classification.
Optionally, the executing module 720 is specifically configured to screen at least one target processing task matching with the video to be processed from a preset processing task sequence according to basic information of the video to be processed, and execute each target processing task on the video to be processed according to an order of each target processing task in the processing task sequence, so as to obtain a target video, where the processing tasks in the processing task sequence include: video transcoding processing, video masking processing, picture frame extraction processing and intelligent identification processing.
Optionally, the execution module 720 is specifically configured to determine the video transcoding processing in the processing task sequence as a target processing task;
If the video source of the video to be processed meets the station logo processing condition, determining a target processing task by video masking processing in the processing task sequence;
if the video content classification of the video to be processed meets the classification condition, determining the picture frame extraction processing in the processing task sequence as a target processing task;
if the video source of the video to be processed meets the preset source and the video content classification of the video to be processed meets the preset classification, determining the intelligent identification processing in the processing task sequence as a target processing task.
Optionally, the apparatus further comprises: a binding module;
the obtaining module 710 is further configured to parse the video to be processed, and obtain catalogue information and label information of the video to be processed;
and the binding module is used for binding the catalogue information and the label information of the video to be processed as attribute information with the target video.
Optionally, the publishing module 730 is specifically configured to publish the target video and the attribute information bound to the target video to the corresponding terminal according to the performance parameter and the network parameter of each terminal to be played.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), one or more microprocessors (Digital Signal Processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA), and the like. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
The modules may be connected or communicate with each other via wired or wireless connections. The wired connection may include a metal cable, optical cable, hybrid cable, or the like, or any combination thereof. The wireless connection may include a connection through a LAN, WAN, bluetooth, zigBee, or NFC, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the method embodiments, which are not described in detail in this application.
Fig. 8 is a schematic diagram of an electronic device provided in an embodiment of the present application; the electronic device may be formed by integrating a plurality of servers to implement integrated deployment of a video processing system.
The apparatus may include: a processor 801, and a storage medium 802.
The storage medium 802 is used to store a program, and the processor 801 calls the program stored in the storage medium 802 to execute the foregoing method embodiments. The specific implementation and technical effects are similar and are not repeated here.
The storage medium 802 stores program code that, when executed by the processor 801, causes the processor 801 to perform the steps of the methods according to the various exemplary embodiments of the present application described in the foregoing section on exemplary methods.
The processor 801 may be a general purpose processor such as a Central Processing Unit (CPU), digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
The storage medium 802 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The storage medium may include at least one type of memory, for example: flash memory, a hard disk, a multimedia card, a card-type memory, a random access memory (Random Access Memory, RAM), a static random access memory (Static Random Access Memory, SRAM), a programmable read-only memory (Programmable Read-Only Memory, PROM), a read-only memory (Read-Only Memory, ROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a magnetic memory, a magnetic disk, an optical disk, and the like. The storage medium may also be any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The storage medium 802 in the embodiments of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
Optionally, the present application also provides a program product, such as a computer-readable storage medium, comprising a program which, when executed by a processor, performs the above-described method embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.

Claims (10)

1. A video processing method, comprising:
analyzing a video file to be processed, and acquiring basic information of the video to be processed;
determining at least one target processing task according to basic information of a video to be processed, and sequentially executing each processing task on the video to be processed according to the sequence of each target processing task to obtain a target video;
and publishing the target video and the attribute information bound to the target video to a terminal, so that the terminal displays the target video on a display interface of the terminal according to the attribute information.
2. The method according to claim 1, wherein before the parsing a video file to be processed and acquiring basic information of the video to be processed, the method further comprises:
receiving a video file to be processed transmitted by a video provider through a preset file transmission interface, or pulling the video file to be processed from a file extraction link provided by the video provider through a preset file transmission engine, wherein the preset file transmission interface comprises: a transmission client, a software development kit client, and a browser transmission plug-in;
and storing the acquired video file to be processed into a file sharing storage system.
3. The method according to claim 2, wherein the parsing the video file to be processed to obtain the basic information of the video to be processed includes:
monitoring a newly added video file in the file sharing storage system, and acquiring the video file to be processed;
analyzing text information contained in the video file to be processed, and acquiring basic information of the video to be processed, wherein the basic information of the video to be processed comprises: video source, video format, video resolution, video bitrate, video content classification.
4. The method according to claim 1, wherein determining at least one target processing task according to the basic information of the video to be processed, and sequentially executing each processing task on the video to be processed according to the sequence of each target processing task, to obtain the target video, includes:
screening at least one target processing task matched with the video to be processed from a preset processing task sequence according to basic information of the video to be processed, and executing each target processing task on the video to be processed according to the sequence of each target processing task in the processing task sequence to obtain the target video, wherein the processing tasks in the processing task sequence comprise: video transcoding processing, video masking processing, picture frame extraction processing and intelligent identification processing.
5. The method according to claim 4, wherein the screening at least one target processing task matching the video to be processed from a preset processing task sequence according to basic information of the video to be processed includes:
determining the video transcoding processing in the processing task sequence as a target processing task;
if the video source of the video to be processed meets a station logo processing condition, determining the video masking processing in the processing task sequence as a target processing task;
if the video content classification of the video to be processed meets the classification condition, determining the picture frame extraction processing in the processing task sequence as a target processing task;
and if the video source of the video to be processed meets a preset source and the video content classification of the video to be processed meets a preset classification, determining intelligent identification processing in the processing task sequence as a target processing task.
6. The method according to claim 1, wherein the processing tasks further comprise a video parsing process, and before the publishing the target video and the attribute information bound to the target video to a terminal, the method further comprises:
analyzing the video to be processed, and acquiring catalogue information and label information of the video to be processed;
and binding the catalogue information and the label information of the video to be processed, as the attribute information, with the target video.
7. The method according to claim 1, wherein the publishing the target video and the attribute information bound to the target video to a terminal comprises:
and publishing, according to performance parameters and network parameters of each terminal to be played, the target video and the attribute information bound to the target video to the corresponding terminal.
8. A video processing apparatus, comprising: an acquisition module, an execution module, and a publishing module;
the acquisition module is used for analyzing the video file to be processed and acquiring the basic information of the video to be processed;
the execution module is used for determining at least one target processing task according to the basic information of the video to be processed, and sequentially executing each processing task on the video to be processed according to the sequence of each target processing task to obtain a target video;
the publishing module is used for publishing the target video and the attribute information bound to the target video to the terminal, so that the terminal displays the target video on a display interface of the terminal according to the attribute information.
9. An electronic device, comprising: a processor, a storage medium, and a bus, the storage medium storing program instructions executable by the processor, wherein when the electronic device is running, the processor communicates with the storage medium via the bus, and the processor executes the program instructions to perform the steps of the video processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the video processing method according to any of claims 1 to 7.
CN202211605667.5A 2022-12-14 2022-12-14 Video processing method, device, electronic equipment and storage medium Pending CN116156216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211605667.5A CN116156216A (en) 2022-12-14 2022-12-14 Video processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211605667.5A CN116156216A (en) 2022-12-14 2022-12-14 Video processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116156216A true CN116156216A (en) 2023-05-23

Family

ID=86339896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211605667.5A Pending CN116156216A (en) 2022-12-14 2022-12-14 Video processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116156216A (en)

Similar Documents

Publication Publication Date Title
CN108401192B (en) Video stream processing method and device, computer equipment and storage medium
US8468130B2 (en) Assisted hybrid mobile browser
EP2901372B1 (en) Using digital fingerprints to associate data with a work
CN103188522B (en) Method and system for providing and delivering a composite condensed stream
US8514931B2 (en) Method of providing scalable video coding (SVC) video content with added media content
CN107634930B (en) Method and device for acquiring media data
CN106134146A (en) Process continuous print multicycle content
EP2151970A1 (en) Processing and supplying video data
CN105516736B (en) Video file processing method and device
US9591379B2 (en) Systems and methods for providing an advertisement calling proxy server
KR20150083355A (en) Augmented media service providing method, apparatus thereof, and system thereof
WO2014103123A1 (en) Device, method, and program for digest generation
CN105898395A (en) Network video playing method, device and system
US11545185B1 (en) Method and apparatus for frame accurate high resolution video editing in cloud using live video streams
CN110996160A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN103686209A (en) Transcoding and processing method and system with diverse mechanisms
CN113011909B (en) Content delivery method, device, server and storage medium
CA2969721A1 (en) Location agnostic media control room and broadcasting facility
CN113315987A (en) Video live broadcast method and video live broadcast device
US10104142B2 (en) Data processing device, data processing method, program, recording medium, and data processing system
CN116156216A (en) Video processing method, device, electronic equipment and storage medium
KR101823767B1 (en) Multi-media file structure and system including meta information for providing user request and environment customize contents
KR101749420B1 (en) Apparatus and method for extracting representation image of video contents using closed caption
KR101603976B1 (en) Method and apparatus for concatenating video files
US10547878B2 (en) Hybrid transmission protocol

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination