CN115460436B - Video processing method, storage medium and electronic device


Info

Publication number
CN115460436B
CN115460436B (application CN202210929019.9A)
Authority
CN
China
Prior art keywords
frame interpolation
frame
enhancement information
video
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210929019.9A
Other languages
Chinese (zh)
Other versions
CN115460436A (en)
Inventor
梅大为
罗浩
江文斐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youku Technology Co Ltd
Original Assignee
Beijing Youku Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youku Technology Co Ltd
Priority to CN202210929019.9A
Publication of CN115460436A
Application granted
Publication of CN115460436B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381 Processing of video elementary streams involving reformatting operations by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281 Processing of video elementary streams involving reformatting operations by altering the temporal resolution, e.g. by frame skipping

Abstract

The embodiment of the present application discloses a video processing method and an electronic device. The method comprises: determining a target video to be played currently; obtaining, from a server side, a video stream of the target video and frame-interpolation enhancement information associated with the target video, the enhancement information being generated after the server determines a frame-interpolation processing mode suited to the content characteristics of the target video; and decoding the video stream on the terminal device side and performing frame interpolation according to the decoded video frames and the enhancement information, so as to render and play according to the interpolation result. According to the embodiment of the application, the interpolation effect can be improved while the transmission bitrate overhead is reduced.

Description

Video processing method, storage medium and electronic device
Technical Field
The present application relates to the field of video frame-interpolation technology, and in particular to a video processing method and an electronic device.
Background
The frame rate of a video is the number of image frames it contains per second; the more frames per second, the smoother the displayed picture, so a high frame rate is a fundamental characteristic of ultra-high-definition video. However, because high-frame-rate video is expensive to capture and produce, such content is rare, and most existing high-frame-rate viewing experiences are therefore the result of frame-interpolation processing.
Frame interpolation generates intermediate frames by prediction from an existing original video and inserts them into it, thereby converting low-frame-rate video into high-frame-rate video. Although the overall experience with frame interpolation is positive, some problems remain. For example, if interpolation is performed at the server, the resulting high-frame-rate video increases the transmission bitrate overhead. If interpolation is performed on the terminal side, the actual interpolation effect cannot be guaranteed because of the terminal's limited computing power and other constraints.
Disclosure of Invention
The present application provides a video processing method and an electronic device that can improve the frame-interpolation effect while reducing the transmission bitrate overhead.
The application provides the following scheme:
a video processing method, comprising:
determining a target video to be played currently;
obtaining, from a server side, a video stream of the target video and frame-interpolation enhancement information associated with the target video, wherein the enhancement information is generated after the server determines a suitable interpolation processing mode according to the content characteristics of the target video;
and decoding the video stream and the enhancement information on the terminal device side, and performing interpolation according to the decoded video frames and the enhancement information, so as to render and play according to the interpolation result.
The frame-interpolation enhancement information is generated after the server side segments the target video into scene shots and determines a suitable interpolation processing mode according to the content characteristics of each scene shot.
The enhancement information comprises multiple pieces, where each piece corresponds to at least one scene shot and includes the start and end time-point information of the corresponding shot and whether interpolation is needed and/or suitable interpolation parameter information;
the decoding result comprises a plurality of image frames arranged sequentially on a time axis and the time-point information corresponding to each image frame;
and performing the interpolation processing according to the decoding result and the enhancement information comprises:
determining, according to the time-point information of each image frame in the decoding result and the start and end time points of each piece of enhancement information, the set of image frames to which the same piece of enhancement information applies, and performing interpolation on the corresponding set using the corresponding information on whether interpolation is needed and/or the suitable interpolation parameters.
Performing the interpolation processing according to the decoding result and the enhancement information comprises:
providing the decoding result and the enhancement information to an interpolation processing module on the terminal device side, so that the interpolation processing module performs the interpolation.
The interpolation processing module comprises a system-layer interpolation chip in the terminal device;
and providing the decoding result and the enhancement information to the interpolation processing module on the terminal device side comprises:
providing the decoding result and the enhancement information to the interpolation chip by calling a system-layer interface of the terminal device, so that the chip performs the interpolation and obtains the interpolation result.
The server generates multiple copies of the enhancement information for the same target video, each adapted to the data protocols and formats required by the different operating systems and different interpolation chips carried by different terminal devices;
and obtaining the video stream of the target video and the associated enhancement information from the server side comprises:
acquiring the operating system and interpolation chip information of the terminal device;
and submitting to the server a request for the video stream that carries the operating system and chip information, so that the server returns the video stream of the target video and the enhancement information that is associated with the target video and adapted to that operating system and chip.
The interpolation processing module comprises an application-layer interpolation processing module deployed in a programmable logic device of the terminal device;
and providing the decoding result and the enhancement information to the interpolation processing module on the terminal device side comprises:
providing the decoding result and the enhancement information to the application-layer interpolation processing module on the terminal device side, so that it performs the interpolation and obtains the interpolation result.
A video processing method, comprising:
determining a suitable frame-interpolation processing mode according to the content characteristics of a target video;
generating frame-interpolation enhancement information for the target video according to the interpolation processing mode;
and, when providing the video stream of the target video to a client, also providing the enhancement information, so that the client decodes the video stream on its terminal device side and performs interpolation according to the decoding result and the enhancement information, so as to render and play according to the interpolation result.
Determining a suitable interpolation processing mode according to the content characteristics of the target video comprises:
segmenting the target video into scene shots;
and determining a suitable interpolation processing mode for each scene shot according to the content characteristics corresponding to that shot;
and generating the enhancement information for the target video according to the interpolation processing mode comprises:
generating the enhancement information for the target video according to the interpolation processing modes respectively corresponding to the scene shots.
Also providing the enhancement information when the video stream of the target video is provided to the client comprises:
providing the enhancement information corresponding to the target video to the client in the form of a separate information file.
Alternatively, also providing the enhancement information when the video stream of the target video is provided to the client comprises:
inserting the enhancement information into the video stream and providing the stream to the client.
A video playback processing apparatus, comprising:
a target video determining unit, configured to determine a target video to be played currently;
an information acquisition unit, configured to obtain, from a server side, the video stream of the target video and the frame-interpolation enhancement information associated with the target video, the enhancement information being generated after the server determines a suitable interpolation processing mode according to the content characteristics of the target video;
and an interpolation processing unit, configured to decode the video stream on the terminal device side and perform interpolation according to the decoding result and the enhancement information, so as to render and play according to the interpolation result.
A video processing apparatus, comprising:
an interpolation processing mode determining unit, configured to determine a suitable frame-interpolation processing mode according to the content characteristics of a target video;
an enhancement information generating unit, configured to generate frame-interpolation enhancement information for the target video according to the interpolation processing mode;
and an enhancement information providing unit, configured to also provide the enhancement information when the video stream of the target video is provided to a client, so that the client decodes the video stream on its terminal device side and performs interpolation according to the decoding result and the enhancement information, so as to render and play according to the interpolation result.
A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, performs the steps of the method of any one of the preceding claims.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions that, when read and executed by the one or more processors, perform the steps of the method of any one of the preceding claims.
According to the specific embodiments provided by the present application, the application discloses the following technical effects:
According to the embodiment of the present application, a video can be analyzed on the server side to determine the frame-interpolation processing mode suited to its specific content characteristics, and frame-interpolation enhancement information for the video is generated accordingly. When a client needs to play a target video, it can obtain the video stream of that video and the corresponding enhancement information from the server, decode the video stream on the terminal device side, and perform interpolation according to the decoding result and the enhancement information, so that rendering and playback follow the interpolation result. Because the actual interpolation is completed on the terminal device side, no high-frame-rate video stream needs to be transmitted, and the bitrate overhead of the enhancement information itself is relatively low; compared with interpolating directly on the server side, this reduces the transmission bitrate overhead and lowers the probability of stuttering and similar phenomena. Moreover, although the interpolation is completed on the terminal device side, the server provides enhancement information generated after determining a processing mode suited to the content characteristics of the video, so the terminal-side interpolation result fits the video content better and a better interpolation effect is obtained.
In a preferred mode, after the video is segmented into scene shots on the server side, a suitable interpolation processing mode, including whether to interpolate and/or the interpolation parameter information, is determined for each scene shot, and the enhancement information of the video is then generated from the processing modes suited to the multiple scene shots within the same video, further improving the interpolation effect.
Of course, implementing any one product of the present application does not necessarily achieve all of the advantages described above at the same time.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a data flow provided by an embodiment of the present application;
FIG. 4 is a flow chart of a second method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present application. All other embodiments derived by a person skilled in the art from the embodiments of the present application fall within the scope of protection of the application.
First, it should be noted that, in the course of implementing the present application, the inventors found that the prior art mainly uses two frame-interpolation approaches. In the first, interpolation is performed in the cloud: when the terminal needs to play a video, the cloud directly provides a high-frame-rate video stream to the terminal side, which plays it. However, this approach increases the transmission bitrate (the amount of data transmitted per unit time, in bps) and causes stuttering, while also increasing system complexity and production cost. In the second, interpolation is performed on the terminal side: the cloud provides a low-frame-rate stream to the terminal, which interpolates first and then displays. Terminal-side interpolation is implemented either by a system-layer chip with interpolation capability or, when the terminal carries a programmable logic device, by an application-layer interpolation processing module. In either case, the same algorithm is used to interpolate all of the different videos played on the terminal. When a specific video is interpolated, the algorithm can analyze and judge only the video content within a very short local time window; when a shot switches, or even the whole video switches, the underlying interpolation chip or application-layer module cannot perceive the change and therefore cannot adjust the interpolation parameters in time. In short, at the algorithm level the actual situation of the specific video is not distinguished and changes in video content are not perceived, which impairs the interpolation effect. Such impairment includes inserting frames at positions unsuited to interpolation, which instead destroys the original expressive intention or artistic effect of the video creator. For example, the motion information in a video may not be rich, in which case interpolation may be unnecessary. Or, although some videos have rich motion information, the director may prefer to express effects such as jitter or trembling at a low frame rate; interpolating in that case instead destroys the director's intention, and so on.
Therefore, to solve the problems of the prior-art schemes that interpolate in the cloud or on the terminal side, the embodiment of the present application provides a new solution: interpolation is performed by combining the cloud with the terminal. The cloud extracts the content characteristics of a video, by algorithmic processing or other means, and derives frame-interpolation enhancement information for that specific video, including whether interpolation is needed, the specifically suitable interpolation parameters, and so on. When the terminal side needs to play the video, the video stream and the enhancement information are provided to it, and the terminal side then performs the interpolation according to the enhancement information.
The inventors also found, in implementing the present application, that a video is typically composed of multiple scene shots; for example, a video may contain a scene shot of an outdoor environment and then switch to a scene shot of two people talking indoors, and so on. Each scene shot consists of multiple image frames arranged along the time dimension. Video creators usually express a particular artistic effect in units of scene shots: the same scene shot usually uses the same frame rate, and its interpolation requirements are usually uniform as well. That is, either every image frame in a scene shot needs interpolation or none does, and where interpolation is needed, the required parameters can be the same, ensuring a uniform picture effect within the shot. Therefore, in a specific implementation, when the cloud side generates enhancement information for a specific video, it can first perform scene-shot recognition and then derive the enhancement information corresponding to each scene shot. Each scene shot may correspond to one piece of enhancement information, so the enhancement information of one video may consist of multiple pieces, each corresponding to one scene shot. Alternatively, if most scene shots share the same enhancement information, their information can be expressed jointly, and the differing enhancement information of the remaining few shots expressed separately; the amount of enhancement information associated with one video is then small, reducing the network resources occupied during transmission. Each scene shot may correspond to a start and end time period, and the specific enhancement information may include whether the corresponding shot needs interpolation, the specific interpolation parameters, and other information affecting the actual interpolation effect.
The enhancement information may be produced by algorithmic analysis, or obtained through manual annotation or similar means. After it is provided to the terminal side as supplementary information for a specific video, the terminal side can interpolate according to it, so that the interpolation result fits the video content better, a better effect is obtained, and the probability of damaging the video's expressive intention or artistic effect is reduced. Meanwhile, because the enhancement information is produced in the cloud, it does not excessively occupy computing resources on the terminal side.
From the perspective of system architecture, as shown in FIG. 1, the embodiment of the present application can provide an application or platform involved in video playback with an interpolation processing scheme that combines the cloud and the terminal. At the application level, this involves the server and client of the related application program; at the system level, the terminal device (for example, a television or mobile phone) needs to carry an interpolation chip to complete the actual interpolation. Alternatively, if the terminal device carries a programmable logic device, the interpolation processing module can be implemented at the application level, so that both decoding and interpolation are completed entirely at the application level. The server side (that is, the cloud) can deploy the related algorithm models, for example a scene-shot recognition algorithm and an algorithm that generates the enhancement information in units of scene shots. These algorithms can be trained in advance so that scene-shot recognition and enhancement-information generation can be completed for specific videos. The server may also provide an operation entry for manually reviewing or adding enhancement information, so that the specific enhancement information can be generated by combining algorithms with manual work. After the enhancement information is generated, it can be saved as a separate information file, and when the video stream is provided to the client an additional information channel can be created to deliver it; alternatively, the enhancement information can be inserted into the video stream and delivered along with it. The client can decode the video stream, perform time alignment of audio and video and similar processing, and then provide the decoded stream together with the enhancement information to the interpolation chip or the application-layer interpolation module. In this way, the interpolation process can produce, based on the enhancement information, a result better suited to the specific video content.
The following describes in detail the specific implementation scheme provided by the embodiment of the present application.
Example 1
First, this embodiment provides a video playback processing method from the perspective of the client. Referring to FIG. 2, the method may specifically include:
S201: Determine the target video to be played currently.
Specifically, the client may provide the user with an interface such as a home page through which the user selects a video to watch; the client can then determine the video specified in the user's playback request as the target video. Alternatively, a video played automatically by the system may be determined to be the target video, and so on.
S202: Obtain, from the server side, the video stream of the target video and the frame-interpolation enhancement information associated with the target video, where the enhancement information is generated after the server determines a suitable interpolation processing mode according to the content characteristics of the target video.
After the target video is determined, the specific video stream can be requested from the server side. In the embodiment of the present application, because the server side can produce the enhancement information of a specific video in advance, the client can obtain the specific enhancement information at the same time as the video stream.
The enhancement information can be produced on the server side. Specifically, an algorithm model can extract the content characteristics of the video and then derive a suitable interpolation processing mode. As described above, the same video may include multiple different scene shots whose content characteristics differ. Therefore, to express the processing mode suited to a specific video more accurately, the video can first be segmented into scene shots, a suitable processing mode can be determined from the content characteristics of each shot and per-shot enhancement information generated, and the enhancement information of the whole video can then be assembled from the enhancement information corresponding to the respective shots.
To this end, a first algorithm model for scene-shot recognition and a second algorithm model for generating per-shot enhancement information can be deployed on the server side; both can be produced by pre-training. For example, video clips with scene-shot category annotations can serve as training samples for the first model, and the trained model can then segment a video into scene shots and recognize them. For the second model, multiple video clips can likewise serve as training samples, where each sample corresponds to a specific scene shot and is annotated with its suitable enhancement information; these clips are used as inputs, and the model is trained to output the enhancement information appropriate for a given clip. The trained second model can then extract the content characteristics of an input scene shot, where the characteristics carry preset interpolation semantics, for example: whether the motion is large or small, whether the shot is static, whether it contains spatial features such as repeated frames, and whether there are small objects in local areas, periodically appearing objects, subtitles, and the like. Based on these characteristics, the enhancement information suitable for the scene shot can be output.
Of course, because scene shots of the same category often use the same or similar expressive techniques, their interpolation requirements may also converge: one category of shots may need no interpolation, another may need it strongly, and so on. Moreover, the same video creator may habitually use one or a few expressive techniques when creating a certain type of scene shot, so the shots created by one creator may be relatively convergent in their interpolation requirements. Therefore, during model training the input may include, besides the video clip itself, the scene-shot category information, the creator's identification, and so on.
After the first and second models are trained in this way, a specific video can be segmented by the first model, which can also determine information such as the category of each scene shot; the second model then generates enhancement information for each segmented shot. If the second model was trained with scene-shot category and creator-identification information, the same information can be added to its input at prediction time. The enhancement information output by the second model may be the parameter information that plays a key role in the interpolation effect, for example whether to fall back, the global motion speed of the scene, the scene-shot classification, and so on, and can be set according to actual requirements.
In addition, in practical applications, scene-shot segmentation and enhancement-information generation can incorporate manual review or manual addition, so that the algorithm's results can be further confirmed, and inaccurate recognition can be corrected through manual intervention, and so on.
In summary, for an input video, the first model outputs multiple scene shots, and the second model outputs the enhancement information corresponding to each shot; the pieces of enhancement information for the shots together form the enhancement information of the input video. When storing the per-shot enhancement information, the start and end time points of each shot can be recorded, so that the image frames delimited by those time points can later be interpolated using the corresponding enhancement information.
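To make the preceding description concrete, the following is a minimal sketch, in Python, of how per-shot enhancement information might be represented in memory. All names here (InterpEnhancementEntry, start_ms, need_interp, params, and so on) are illustrative assumptions; the patent does not prescribe a data structure.

```python
# Illustrative sketch only: every field name here is an assumption,
# not a structure disclosed by the patent.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InterpEnhancementEntry:
    """Enhancement information for one scene shot (or a group of shots)."""
    start_ms: int                  # start time point of the shot on the timeline
    end_ms: int                    # end time point of the shot on the timeline
    need_interp: bool              # whether this shot should be interpolated
    params: Optional[dict] = None  # suitable interpolation parameter combination

@dataclass
class VideoEnhancementInfo:
    """Enhancement information for a whole video: one entry per scene shot,
    plus an optional default shared by all shots not listed explicitly."""
    video_id: str
    entries: List[InterpEnhancementEntry] = field(default_factory=list)
    default: Optional[InterpEnhancementEntry] = None
```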
For example, the frame-interpolation enhancement information corresponding to a certain video may be expressed as:
{ scene shot 1: time t1 to time t2, no interpolation required;
scene shot 2: time t3 to time t4, interpolation required, interpolation parameter combination (which may comprise multiple interpolation parameters) 1;
scene shot 3: time t5 to time t6, interpolation required, interpolation parameter combination 2;
…… }
As this enhancement information indicates, no interpolation is required for the image frames between time t1 and time t2; interpolation is required for the image frames between time t3 and time t4, which suit interpolation parameter combination 1, and for the image frames between time t5 and time t6, which suit interpolation parameter combination 2, and so on.
Alternatively, after a video has been segmented into multiple different scene shots, it may turn out that most of them correspond to the same enhancement information. In that case, when expressing the video's enhancement information, the information shared by most shots can serve as the video's default enhancement information, while the special enhancement information of the few remaining shots is described separately. The enhancement information of one video can thus be expressed more compactly.
For example, the frame-interpolation enhancement information corresponding to a certain video may be expressed as:
{ scene shot 1: time t1 to time t2, no interpolation required;
scene shot 2: time t3 to time t4, interpolation required, interpolation parameter combination (which may comprise multiple interpolation parameters) 1;
other scene shots: interpolation required, interpolation parameter combination 2 }
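A machine-readable rendering of the compact form above might look as follows; the key names, the use of milliseconds, and the concrete times are assumptions for illustration, since the patent does not specify a serialization.

```python
# Purely illustrative rendering of the example above; key names and the
# choice of milliseconds are assumptions. t1..t4 stand for concrete times.
t1, t2, t3, t4 = 0, 5000, 5000, 12000  # placeholder timeline positions (ms)

enhancement_info = {
    "entries": [
        # scene shot 1: no interpolation between t1 and t2
        {"start_ms": t1, "end_ms": t2, "need_interp": False},
        # scene shot 2: interpolate between t3 and t4 with parameter combination 1
        {"start_ms": t3, "end_ms": t4, "need_interp": True, "params": {"combo": 1}},
    ],
    # all other scene shots: interpolate with parameter combination 2
    "default": {"need_interp": True, "params": {"combo": 2}},
}
```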
After the enhancement information for a specific video has been generated, it can be provided to the client in several ways. In one way, the enhancement information exists in the form of a separate information file. When the client requests a target video, two connections are then established between the server and the client: a streaming connection that carries the video stream, and a file-transfer connection that carries the file storing the enhancement information.
Alternatively, the enhancement information can be inserted into the video stream, for example by means of SEI (Supplemental Enhancement Information) messages, and transmitted to the client, which can then parse the enhancement information directly out of the stream.
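As a sketch under stated assumptions, the two delivery channels just described might be consumed on the client roughly as follows. The file layout, the JSON encoding, and the sei_payloads helper are all hypothetical: the patent fixes neither a file format nor an SEI payload syntax.

```python
# Hedged sketch of the two delivery channels; the JSON encoding and the
# sei_payloads() source are assumptions, not disclosed formats.
import json

def load_enhancement_from_file(path: str) -> dict:
    """Channel 1: a separate information file delivered alongside the stream."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def load_enhancement_from_sei(sei_payloads) -> dict:
    """Channel 2: enhancement info carried inside the video stream.
    `sei_payloads` is assumed to yield the raw bytes of user-data SEI
    messages already extracted by the demuxer/decoder."""
    for payload in sei_payloads:
        try:
            return json.loads(payload.decode("utf-8"))
        except (UnicodeDecodeError, json.JSONDecodeError):
            continue  # skip SEI messages that are not enhancement info
    return {}
```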
S203: Decode the video stream on the terminal device side, and perform interpolation according to the decoding result and the enhancement information, so as to render and play according to the interpolation result.
After receiving the video stream and the enhancement information, the client can decode the stream, align the audio and video on the time axis and perform similar processing, and then perform the interpolation on its terminal device side according to the decoding result and the enhancement information.
That is, as shown in FIG. 3, in the embodiment of the present application the server side can generate enhancement information for a specific video; in an optional embodiment, the video is first segmented into scene shots, and the specific enhancement information is then generated according to the content characteristics, carrying interpolation semantics, contained in each shot. When the video stream of the video is provided to the client, its enhancement information can be provided as well; after decoding, the client interpolates according to the specific enhancement information, obtains high-frame-rate video frames, and then renders and displays them.
In particular, interpolation on the terminal device side is usually implemented by a separate interpolation processing module. The client can therefore provide the decoding result and the enhancement information to the interpolation processing module on the terminal device side, so that the module performs the interpolation.
The interpolation processing module may be implemented in hardware or in software. The former can be a separate interpolation chip; because such a chip belongs to the system layer, the final video playback is achieved through cross-layer cooperation. That is, after decoding is completed, the application-layer client program provides the decoding result and the enhancement information to the system-layer interpolation chip, which interpolates the decoded image frames according to the enhancement information. Specifically, the upper-layer client calls the system-layer interface of the terminal device to pass the decoding result and the enhancement information to the chip, which performs the interpolation and obtains the result.
It should be noted that this implementation can be coordinated with the system layer in advance, including obtaining the system-layer interface and the data protocol, format, and the like required by the specific interpolation chip, so that the specific enhancement information can be passed to the chip and the chip can parse and recognize it, allowing the chip to interpolate the decoded video according to that information.
Because different terminal devices may use different operating systems and interpolation chips, the server side may generate multiple copies of the enhancement information for the same target video, each adapted to the data protocols and formats required by a particular operating system and interpolation chip. Although terminal devices come in many brands and models, the operating systems and interpolation chips they carry may be few, so the combinations of the two can be enumerated and corresponding enhancement information generated for each combination. For the same target video, the content of each copy may be identical, differing only in data protocol, format, and so on. When requesting the video stream and enhancement information from the server, the client can therefore first obtain the operating system and interpolation chip information of the terminal device, then submit a request for the video stream that carries that information, so that the server returns the video stream of the target video together with the enhancement information associated with it and adapted to that operating system and chip.
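A hedged sketch of this request flow follows: the client reads its operating-system and chip identifiers and attaches them to the video-stream request. The endpoint path, query parameter names, and JSON response shape are all hypothetical, not an API disclosed by the patent.

```python
# Hypothetical request sketch; the URL, query parameters, and response
# fields are assumptions.
import json
from urllib import parse, request

def fetch_stream_and_enhancement(server: str, video_id: str,
                                 os_name: str, chip_model: str) -> dict:
    """Request the video stream plus enhancement information adapted to
    this terminal's operating system and interpolation chip."""
    query = parse.urlencode({
        "video_id": video_id,
        "os": os_name,        # operating system carried by the terminal device
        "chip": chip_model,   # interpolation chip carried by the terminal device
    })
    with request.urlopen(f"{server}/video?{query}") as resp:
        # expected to contain the stream address and the adapted enhancement info
        return json.load(resp)
```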
Alternatively, if the terminal device carries a programmable logic device (for example an NPU (Neural-network Processing Unit) or an APU (Accelerated Processing Unit)), an application-layer interpolation processing module can be deployed in it; that is, the specific interpolation algorithm is implemented at the application layer. In this case, after decoding is completed, the client can directly provide the decoding result and the enhancement information to this application-layer module, which performs the interpolation and obtains the result. Because the client program and the interpolation module are then both at the application layer, no coordination with the system layer is involved, and the server side need not provide multiple versions of the enhancement information for the same video, making this approach more convenient to implement. Of course, in practice not all terminal devices carry such a programmable logic device, so the specific implementation can be chosen according to the actual situation of the device.
Whether the interpolation is performed by the interpolation chip or by the application-layer interpolation module, the specific enhancement information can serve as input to the interpolation algorithm, which then processes each image frame in the decoding result. The decoded image frames are arranged sequentially in the time dimension, each associated with specific time-point information, while the enhancement information includes the start and end time points of each scene shot, so it is known which image frames suit which interpolation mode. For example, if a piece of enhancement information states that frames from time t1 to time t2 require interpolation and specifies the interpolation parameters, the chip or module interpolates the corresponding image frames in the decoding result accordingly; if some image frames in the current sliding window fall within the interval from t1 to t2, the corresponding interpolation parameters are used to interpolate them, and so on.
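The matching step described above can be sketched as follows: each decoded frame carries a timestamp, each enhancement entry covers a start-stop interval, and frames falling inside an interval are interpolated with that interval's parameters. The function names, the dict keys (which follow the illustrative structure shown earlier), and the stand-in interpolate callable are assumptions.

```python
# Minimal sketch of mapping decoded frames to enhancement entries; the
# `interpolate` callable stands in for the chip's or module's routine.
def select_entry(ts_ms, entries, default=None):
    """Return the enhancement entry whose [start_ms, end_ms] interval
    contains this timestamp, else the video-level default."""
    for entry in entries:
        if entry["start_ms"] <= ts_ms <= entry["end_ms"]:
            return entry
    return default

def interpolate_frames(decoded, entries, default, interpolate):
    """decoded: iterable of (timestamp_ms, frame) pairs in display order.
    interpolate(prev, cur, params) -> list of intermediate frames."""
    output, prev = [], None
    for ts_ms, frame in decoded:
        entry = select_entry(ts_ms, entries, default)
        if prev is not None and entry and entry["need_interp"]:
            # insert predicted intermediate frame(s) between prev and frame
            output.extend(interpolate(prev, frame, entry.get("params")))
        output.append(frame)
        prev = frame
    return output
```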
In summary, through the embodiment of the present application, a video can be analyzed on the server side to determine the frame-interpolation processing mode suited to its specific content characteristics, and frame-interpolation enhancement information for the video is generated accordingly. When a client needs to play a target video, it can obtain the video stream of that video and the corresponding enhancement information from the server, decode the video stream on the terminal device side, and perform interpolation according to the decoding result and the enhancement information, so that rendering and playback follow the interpolation result. Because the actual interpolation is completed on the terminal device side, no high-frame-rate video stream needs to be transmitted, and the bitrate overhead of the enhancement information itself is relatively low; compared with interpolating directly on the server side, this reduces the transmission bitrate overhead and lowers the probability of stuttering and similar phenomena. Moreover, although the interpolation is completed on the terminal device side, the server provides enhancement information generated after determining a processing mode suited to the content characteristics of the video, so the terminal-side interpolation result fits the video content better and a better interpolation effect is obtained.
In a preferred mode, after the video is segmented into scene shots on the server side, a suitable interpolation processing mode, including whether to interpolate and/or the interpolation parameter information, is determined for each scene shot, and the enhancement information of the video is then generated from the processing modes suited to the multiple scene shots within the same video, further improving the interpolation effect.
Example 2
The second embodiment corresponds to the first and provides a video processing method from the perspective of the server. Referring to FIG. 4, the method may include:
S401: determining a suitable frame-interpolation processing mode according to the content characteristics of a target video;
S402: generating frame-interpolation enhancement information for the target video according to the interpolation processing mode;
S403: when providing the video stream of the target video to a client, also providing the enhancement information, so that the client decodes the video stream on its terminal device side and performs interpolation according to the decoding result and the enhancement information, so as to render and play according to the interpolation result.
Specifically, the target video can first be segmented into scene shots, and a suitable interpolation processing mode can then be determined for each shot according to the content characteristics corresponding to that shot. The enhancement information can thus be generated for the target video according to the interpolation processing modes respectively corresponding to the scene shots.
Specifically, when the video stream of the target video and the enhancement information are provided to the client, the enhancement information corresponding to the target video can be provided in the form of a separate information file, or inserted into the video stream and provided along with it.
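A hedged sketch of the server-side flow of S401 to S403 follows, where segment_shots and decide_interpolation are injected stand-ins for the first and second algorithm models of Example 1; both callables, and the output layout, are assumptions rather than disclosed implementations.

```python
# Server-side sketch; segment_shots and decide_interpolation stand in for
# the scene-shot segmentation model and the per-shot decision model,
# which the patent does not implement concretely.
def build_enhancement_info(video, segment_shots, decide_interpolation):
    """Return enhancement information for `video` in the illustrative
    layout used earlier (a list of per-shot entries)."""
    entries = []
    for shot in segment_shots(video):              # scene-shot segmentation
        need, params = decide_interpolation(shot)  # content-based decision
        entries.append({
            "start_ms": shot["start_ms"],
            "end_ms": shot["end_ms"],
            "need_interp": need,
            "params": params,
        })
    return {"entries": entries}
```

The resulting structure can then be written out as the separate information file, or serialized into the stream, per the two delivery channels described in Example 1.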
For the parts not described in the second embodiment, reference may be made to the descriptions in the first embodiment; they are not repeated here.
It should be noted that the embodiment of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the solutions described herein within the scope permitted by the applicable laws and regulations of the relevant country and subject to their requirements (for example, the user's explicit consent, practical notification to the user, etc.).
Corresponding to the first embodiment, the embodiment of the present application further provides a video playback processing apparatus. Referring to FIG. 5, the apparatus may include:
a target video determining unit 501, configured to determine a target video to be played currently;
an information acquisition unit 502, configured to obtain, from a server side, the video stream of the target video and the frame-interpolation enhancement information associated with the target video, the enhancement information being generated after the server determines a suitable interpolation processing mode according to the content characteristics of the target video;
and an interpolation processing unit 503, configured to decode the video stream on the terminal device side and perform interpolation according to the decoding result and the enhancement information, so as to render and play according to the interpolation result.
The enhancement information is generated after the server side segments the target video into scene shots and determines a suitable interpolation processing mode according to the content characteristics of each scene shot.
In this case, the enhancement information may comprise multiple pieces, where each piece corresponds to at least one scene shot and includes the start and end time-point information of the corresponding shot and whether interpolation is needed and/or suitable interpolation parameter information;
the decoding result comprises a plurality of image frames arranged sequentially on a time axis and the time-point information corresponding to each image frame;
and the interpolation processing unit may specifically be configured to:
determine, according to the time-point information of each image frame in the decoding result and the start and end time points of each piece of enhancement information, the set of image frames to which the same piece of enhancement information applies, and perform interpolation on the corresponding set using the corresponding information on whether interpolation is needed and/or the suitable interpolation parameters.
Specifically, the interpolation processing unit may be configured to:
provide the decoding result and the enhancement information to an interpolation processing module on the terminal device side, so that the module performs the interpolation.
The interpolation processing module comprises a system-layer interpolation chip in the terminal device;
in this case, the interpolation processing unit may specifically be configured to:
provide the decoding result and the enhancement information to the interpolation chip by calling a system-layer interface of the terminal device, so that the chip performs the interpolation and obtains the result.
The server side can generate multiple pieces of frame inserting enhancement information aiming at the same target video, and the frame inserting enhancement information can be respectively used for adapting to different operating systems and data protocols and formats required by different frame inserting chips carried in different terminal equipment;
at this time, the information acquisition unit may specifically be configured to:
acquiring an operating system carried by the terminal equipment and frame inserting chip information;
and submitting a request for acquiring the video stream to the server, wherein the request carries the operating system and the frame inserting chip information, so that the server returns the video stream of the target video and the frame inserting enhancement information which is associated with the target video and is matched with the operating system and the frame inserting chip information.
The frame inserting processing module comprises an application layer frame inserting processing module which is deployed in a programmable logic device of the terminal equipment;
at this time, the frame inserting processing unit may specifically be configured to:
and providing the decoding result and the frame inserting enhancement information for an application layer frame inserting processing module at the terminal equipment side so that the application layer frame inserting processing module carries out frame inserting processing and obtains a frame inserting processing result.
Corresponding to the foregoing embodiment, an embodiment of the present application further provides a video processing apparatus. Referring to fig. 6, the apparatus may include:
A frame inserting processing mode determining unit 601, configured to determine an appropriate frame inserting processing mode according to content features of the target video;
the frame inserting enhancement information generating unit 602 is configured to generate frame inserting enhancement information for the target video according to the frame inserting processing mode;
and the frame inserting enhancement information providing unit 603 is configured to provide the frame inserting enhancement information when the video stream of the target video is provided to the client, so that the client decodes the video stream at the terminal device side where it is located and performs frame inserting processing according to the decoding result and the frame inserting enhancement information, so as to perform rendering and playing according to the frame inserting processing result.
The frame inserting processing mode determining unit may specifically be configured to:
performing scene shot segmentation on the target video;
according to the content characteristics corresponding to each scene shot, respectively determining a suitable frame inserting processing mode for each scene shot;
the frame inserting enhancement information generating unit may specifically be configured to:
and generating the frame inserting enhancement information for the target video according to the frame inserting processing modes respectively corresponding to the scene shots.
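The server-side generation step can be pictured as follows. `SceneShot`, the motion-score feature, and its threshold are assumptions of this sketch; the embodiments do not fix any particular segmentation algorithm or content feature.

```kotlin
// Server-side sketch, reusing EnhancementInfo from the earlier sketch. The
// motionScore feature and its 0.5 threshold are invented for illustration.
data class SceneShot(val startMs: Long, val endMs: Long, val motionScore: Double)

fun buildEnhancementInfo(shots: List<SceneShot>): List<EnhancementInfo> =
    shots.map { shot ->
        // e.g. interpolate fast-action shots, leave slow or deliberately
        // low-frame-rate footage untouched
        val needed = shot.motionScore > 0.5
        EnhancementInfo(
            startMs = shot.startMs,
            endMs = shot.endMs,
            interpolate = needed,
            params = if (needed) mapOf("targetFps" to "60") else emptyMap()
        )
    }
```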
Specifically, the frame inserting enhancement information providing unit may be configured to:
And providing the frame inserting enhancement information corresponding to the target video to the client in the form of an independent information file.
Alternatively, the frame inserting enhancement information providing unit may specifically be configured to:
and inserting the frame inserting enhancement information into the video stream, and providing the video stream carrying the information to the client.
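Both delivery forms can be sketched briefly. The line-based serialization below is an assumption of the sketch; the embodiments do not mandate any file format for the sidecar variant or any container field for the embedded variant.

```kotlin
import java.io.File

// Sidecar-file variant (sketch): write one line per piece of enhancement
// information next to the stream; the client fetches and parses it separately.
fun writeSidecarFile(infos: List<EnhancementInfo>, path: String) {
    File(path).writeText(
        infos.joinToString("\n") {
            "${it.startMs},${it.endMs},${it.interpolate},${it.params}"
        }
    )
}
// The in-stream variant would instead multiplex the same records into the
// video stream itself (e.g. a user-data field), so no separate fetch is needed.
```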
In addition, the embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, implements the steps of the method of any one of the previous method embodiments.
And an electronic device comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read for execution by the one or more processors, perform the steps of the method of any of the preceding method embodiments.
Fig. 7 illustrates the architecture of an electronic device. For example, the device 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, exercise equipment, a personal digital assistant, an aircraft, and so forth.
Referring to fig. 7, device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls the overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods provided by the disclosed subject matter. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operation at the device 700. Examples of such data include instructions for any application or method operating on the device 700, contact data, phonebook data, messages, pictures, videos, and the like. The memory 704 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power supply component 706 provides power to the various components of the device 700. Power supply components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 700 is in an operating mode, such as a shooting mode or a video mode. Each front-facing camera and rear-facing camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a Microphone (MIC) configured to receive external audio signals when the device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the device 700. For example, the sensor assembly 714 may detect an on/off state of the device 700, a relative positioning of the components, such as a display and keypad of the device 700, a change in position of the device 700 or a component of the device 700, the presence or absence of user contact with the device 700, an orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the device 700 and other devices. The device 700 may access a wireless network based on a communication standard, such as WiFi, or a mobile communication network such as 2G, 3G, 4G/LTE, or 5G. In one exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 704 including instructions executable by the processor 720 of the device 700 to perform the methods provided by the disclosed subject matter. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to mutually, and each embodiment mainly describes its differences from the other embodiments. In particular, since the system and apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the description of the method embodiments for the relevant parts. The systems and apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the present application without undue burden.
The video processing method, storage medium, and electronic device provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. At the same time, modifications to the specific implementations and application scope made by those of ordinary skill in the art in light of these teachings remain within the scope of the present application. In view of the foregoing, this description should not be construed as limiting the application.

Claims (10)

1. A video processing method, comprising:
determining a target video to be played currently;
obtaining a video stream of the target video and frame inserting enhancement information associated with the target video from a server side, wherein the frame inserting enhancement information comprises a plurality of pieces, each piece of frame inserting enhancement information corresponds to at least one scene shot of the target video, and each piece of frame inserting enhancement information is generated after the server side determines a suitable frame inserting processing mode for each scene shot according to the content characteristics of each scene shot and expression technique information used by a creator of the target video when creating the corresponding scene shot;
And decoding the video stream and the frame inserting enhancement information at the terminal equipment side, and carrying out frame inserting processing according to the video frames and the frame inserting enhancement information obtained by decoding so as to render and play according to the frame inserting processing result.
2. The method of claim 1, wherein:
the frame inserting enhancement information is generated after the server side performs scene shot segmentation on the target video and determines a proper frame inserting processing mode according to the content characteristics of each scene shot.
3. The method of claim 2, wherein:
each piece of frame inserting enhancement information comprises start and stop time point information of a corresponding scene shot and whether frame inserting and/or proper frame inserting parameter information are needed;
the decoding result comprises a plurality of image frames which are sequentially arranged on a time axis and time point information corresponding to each image frame;
and performing frame inserting processing according to the decoding result and the frame inserting enhancement information, wherein the frame inserting processing comprises the following steps:
and determining an image frame set suitable for carrying out frame inserting processing by adopting the same piece of frame inserting enhancement information according to the time point information corresponding to each image frame in the decoding result and the start-stop time point information corresponding to each piece of frame inserting enhancement information, and carrying out frame inserting processing on the corresponding image frame set by utilizing corresponding information about whether frame inserting is needed and/or suitable frame inserting parameters.
4. The method according to any one of claims 1 to 3, wherein:
and performing frame inserting processing according to the decoding result and the frame inserting enhancement information, wherein the frame inserting processing comprises the following steps:
and providing the decoding result and the frame inserting enhancement information to a frame inserting processing module at the terminal equipment side so as to carry out frame inserting processing by the frame inserting processing module.
5. The method of claim 4, wherein:
the frame inserting processing module comprises a system layer frame inserting chip in the terminal equipment;
the frame inserting processing module for providing the decoding result and the frame inserting enhancement information to the terminal equipment side comprises:
and providing the decoding result and the frame inserting enhancement information to the frame inserting chip by calling a system layer interface of the terminal equipment so that the frame inserting chip carries out frame inserting processing and obtains a frame inserting processing result.
6. The method of claim 5, wherein:
the server generates multiple pieces of frame inserting enhancement information for the same target video, respectively adapted to different operating systems and to the data protocols and formats required by different frame inserting chips carried in different terminal equipment;
The obtaining the video stream of the target video from the server side and the frame inserting enhancement information associated with the target video includes:
acquiring information on the operating system carried by the terminal equipment and on its frame inserting chip;
and submitting a request for acquiring the video stream to the server, wherein the request carries the operating system and the frame inserting chip information, so that the server returns the video stream of the target video and the frame inserting enhancement information which is associated with the target video and is matched with the operating system and the frame inserting chip information.
7. The method of claim 4, wherein:
the frame inserting processing module comprises an application layer frame inserting processing module which is deployed in a programmable logic device of the terminal equipment;
the frame inserting processing module for providing the decoding result and the frame inserting enhancement information to the terminal equipment side comprises:
and providing the decoding result and the frame inserting enhancement information for an application layer frame inserting processing module at the terminal equipment side so that the application layer frame inserting processing module carries out frame inserting processing and obtains a frame inserting processing result.
8. A video processing method, comprising:
Performing scene shot segmentation on a target video, and respectively determining a suitable frame inserting processing mode for each scene shot according to the content characteristics of the plurality of scene shots included in the target video and the customary expression technique information of a creator of the target video when creating the corresponding scene shot;
generating a plurality of pieces of frame inserting enhancement information for the target video according to the frame inserting processing modes respectively corresponding to the scene shots;
and when the video stream of the target video is provided to the client, the plurality of pieces of frame inserting enhancement information are also provided, so that the client decodes the video stream at the terminal equipment side where it is located and performs frame inserting processing according to the decoding result and the frame inserting enhancement information, so as to perform rendering and playing according to the frame inserting processing result.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
10. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read for execution by the one or more processors, perform the steps of the method of any of claims 1 to 8.
CN202210929019.9A 2022-08-03 2022-08-03 Video processing method, storage medium and electronic device Active CN115460436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210929019.9A CN115460436B (en) 2022-08-03 2022-08-03 Video processing method, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN115460436A CN115460436A (en) 2022-12-09
CN115460436B (en) 2023-10-20

Family

ID=84296890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210929019.9A Active CN115460436B (en) 2022-08-03 2022-08-03 Video processing method, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN115460436B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010141437A (en) * 2008-12-09 2010-06-24 Fujitsu Ltd Frame interpolation device, method and program, frame rate conversion device, video image reproduction device, video image display device
CN111277895A (en) * 2018-12-05 2020-06-12 阿里巴巴集团控股有限公司 Video frame interpolation method and device
CN112839184A (en) * 2020-12-31 2021-05-25 深圳追一科技有限公司 Image processing method, image processing device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8730232B2 (en) * 2011-02-01 2014-05-20 Legend3D, Inc. Director-style based 2D to 3D movie conversion system and method
US10412462B2 (en) * 2016-11-08 2019-09-10 Ati Technologies Ulc Video frame rate conversion using streamed metadata

Also Published As

Publication number Publication date
CN115460436A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN106791893B (en) Video live broadcasting method and device
CN106911961B (en) Multimedia data playing method and device
KR101680714B1 (en) Method for providing real-time video and device thereof as well as server, terminal device, program, and recording medium
CN106911967B (en) Live broadcast playback method and device
WO2016192325A1 (en) Method and device for processing logo on video file
CN111107421B (en) Video processing method and device, terminal equipment and storage medium
CN109862380B (en) Video data processing method, device and server, electronic equipment and storage medium
CN106534951B (en) Video segmentation method and device
CN112153396B (en) Page display method, device, system and storage medium
CN112291631A (en) Information acquisition method, device, terminal and storage medium
CN111182328B (en) Video editing method, device, server, terminal and storage medium
CN112188230A (en) Virtual resource processing method and device, terminal equipment and server
CN109495765B (en) Video interception method and device
CN111835739A (en) Video playing method and device and computer readable storage medium
CA3102425C (en) Video processing method, device, terminal and storage medium
CN112685599B (en) Video recommendation method and device
CN107105311B (en) Live broadcasting method and device
CN112511779B (en) Video data processing method and device, computer storage medium and electronic equipment
CN107247794B (en) Topic guiding method in live broadcast, live broadcast device and terminal equipment
CN108574860B (en) Multimedia resource playing method and device
CN109831538B (en) Message processing method, device, server, terminal and medium
CN110769275B (en) Method, device and system for processing live data stream
CN115460436B (en) Video processing method, storage medium and electronic device
CN112312039A (en) Audio and video information acquisition method, device, equipment and storage medium
CN110858921A (en) Program video processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant