CN113079406A - Video processing method and device
- Publication number: CN113079406A
- Application number: CN202110296439.3A (CN202110296439A)
- Authority: CN (China)
- Prior art keywords: video, code stream, modified, sub, target
- Legal status: Pending
Classifications
- H04N21/4347 — Demultiplexing of several video streams (H—Electricity; H04N—Pictorial communication, e.g. television; H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N21/40—Client devices; H04N21/43—Processing of content or additional data; H04N21/434—Disassembling of a multiplex stream)
- H04N21/440218 — Reformatting of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 (under H04N21/44—Processing of video elementary streams; H04N21/4402—Reformatting operations for household redistribution, storage or real-time display)
- H04N21/440245 — Reformatting performed only on part of the stream, e.g. a region of the image or a time segment (under H04N21/44; H04N21/4402)
Landscapes: Engineering & Computer Science; Multimedia; Signal Processing; Compression or Coding Systems of TV Signals
Abstract
Embodiments of the present application provide a video processing method and a video processing apparatus. The video processing method comprises: determining a sub video code stream to be modified in an initial video code stream, and decoding the video frames in the sub video code stream to be modified to generate corresponding video decoding data; obtaining encoding configuration parameters from the decoding parameters used to decode the video frames, and initializing an encoder with the encoding configuration parameters; modifying the video decoding data to generate a corresponding modification result, and encoding the modification result with the encoder to generate a corresponding encoding result; and taking the encoding result as the target sub video code stream obtained by modifying the sub video code stream to be modified, and generating a modified target video code stream based on the target sub video code stream.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a video processing method. One or more embodiments of the present application also relate to a video processing apparatus, a computing device, and a computer-readable storage medium.
Background
With the rapid development of multimedia technology, video content has become increasingly rich, the data volume of video streams keeps growing, and more and more users attract traffic or recommend products by shooting short videos.
In practice, after a user has shot a video, it may still need secondary processing before delivery. In current post-production and delivery, changes in customer requirements or flaws introduced during processing before delivery often make it necessary to modify the video content on short notice. Such modifications are usually frequent and small, yet current modification schemes treat the entire video content as the minimum operation granularity: no matter how small the change, every operation must be performed on the whole video. This takes a long time and consumes considerable resources, so video processing efficiency is low.
Disclosure of Invention
In view of the above, the present application provides a video processing method. One or more embodiments of the present application also relate to a video processing apparatus, a computing device, and a computer-readable storage medium, so as to overcome the technical defect in the prior art that, because the entire video material is treated as the minimum operation granularity, modifying even part of its content requires performing every modification operation on the whole material, which makes modification inefficient and resource-intensive.
According to a first aspect of embodiments of the present application, there is provided a video processing method, including:
determining a sub video code stream to be modified in an initial video code stream, and decoding a video frame in the sub video code stream to be modified to generate corresponding video decoding data;
obtaining coding configuration parameters according to decoding parameters for decoding the video frame, and initializing an encoder by using the coding configuration parameters;
modifying the video decoding data to generate a corresponding modification result, and encoding the modification result through the encoder to generate a corresponding encoding result;
and taking the coding result as a target sub-video code stream obtained by modifying the sub-video code stream to be modified, and generating a modified target video code stream based on the target sub-video code stream.
According to a second aspect of embodiments of the present application, there is provided a video processing apparatus including:
the determining module is configured to determine a sub video code stream to be modified in an initial video code stream, and decode a video frame in the sub video code stream to be modified to generate corresponding video decoding data;
a parameter determination module configured to obtain an encoding configuration parameter according to a decoding parameter for decoding the video frame, and initialize an encoder using the encoding configuration parameter;
the encoding module is configured to modify the video decoding data, generate a corresponding modification result, and encode the modification result through the encoder to generate a corresponding encoding result;
and the generating module is configured to take the encoding result as a target sub-video code stream obtained by modifying the sub-video code stream to be modified, and generate a modified target video code stream based on the target sub-video code stream.
According to a third aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, wherein the processor implements the steps of the video processing method when executing the computer-executable instructions.
According to a fourth aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the video processing method.
An embodiment of the present application provides a video processing method and a video processing apparatus. The video processing method comprises: determining a sub video code stream to be modified in an initial video code stream, and decoding the video frames in the sub video code stream to be modified to generate corresponding video decoding data; obtaining encoding configuration parameters from the decoding parameters used to decode the video frames, and initializing an encoder with the encoding configuration parameters; modifying the video decoding data to generate a corresponding modification result, and encoding the modification result with the encoder to generate a corresponding encoding result; and taking the encoding result as the target sub video code stream obtained by modifying the sub video code stream to be modified, and generating a modified target video code stream based on the target sub video code stream.
In the embodiments of the present application, the video code stream is divided into multiple sub video code streams and the sub video code stream is used as the minimum operation granularity, so that when only part of the content in the video code stream needs to be modified, only the affected sub video code streams have to be changed, which improves modification efficiency and saves the resources consumed by the modification process.
Drawings
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a processing procedure of the video processing method applied in the video field according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 4 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, the terms used in one or more embodiments of the present application are explained.
Video post-production: secondary processing of the content after shooting, recording, and preliminary content production; it generally comprises three stages: timeline editing, picture rendering, and exporting.
H264/H265: international standards for video data compression set by ISO.
Video encoder: a software or hardware tool that compresses video data according to a certain video data compression standard.
Video decoder: a software or hardware tool that decompresses video data according to a certain video data compression standard.
Key frame: compressed video data can only be decompressed starting from the position of a key frame; otherwise correct decompression cannot be guaranteed. Which frames are key frames and which are non-key frames is determined by the compression method. A key frame can be decompressed correctly without depending on any other frame, so it can serve as a starting point for decompression; a non-key frame can only be decompressed correctly after other frames have been obtained.
GOP: Group of Pictures. In a compressed video stream, the content between two adjacent key frames forms a GOP, i.e. one compressed segment of video content, and it is the smallest granularity on which operations can actually be performed.
SPS/PPS: the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS), configuration data in video data compressed by the H264/H265 standard that guide the video decoder.
In the present application, a video processing method is provided. One or more embodiments of the present application are also directed to a video processing apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
The video processing method provided by the embodiments of the present application can be applied in any field where video needs to be processed.
In a specific implementation, the video in the embodiments of the present application may be presented on clients such as large-scale video playback devices, game consoles, desktop computers, smartphones, tablet computers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, e-book readers, and other display terminals.
Referring to fig. 1, fig. 1 shows a flowchart of a video processing method according to an embodiment of the present application, including the following steps:
step 102, determining a sub video code stream to be modified in an initial video code stream, and decoding a video frame in the sub video code stream to be modified to generate corresponding video decoding data.
In the embodiments of the present application, as short videos spread ever more widely and their influence keeps growing, more and more users attract traffic or recommend products by shooting short videos.
In practice, after a user has shot a video, it may still need secondary processing, i.e. video post-production, which generally includes three stages: timeline editing, picture rendering, and exporting. Current video post-production and delivery is a process that often has to be repeated; for example, during delivery, changes in requirements or flaws left by post-production staff before delivery frequently make it necessary to modify the video content on short notice. Such modifications are usually frequent and small, yet current modification schemes treat the entire video content as the minimum operation granularity: no matter how small the change, every operation must be performed on the whole video, which takes a long time and consumes considerable resources, so video processing efficiency is low.
Based on this, the initial video code stream is divided into multiple sub video code streams, and the sub video code stream is used as the minimum operation granularity, so that when part of the content in the initial video code stream needs to be modified, only the affected sub video code streams have to be changed, which helps improve modification efficiency and save the resources consumed by the modification process.
Specifically, the initial video code stream is generated by encoding a video to be delivered.
In practical applications, the video content of an original video is encoded and compressed by an encoder to obtain the corresponding video code stream; the initial video code stream in the embodiments of the present application is generated by encoding a video to be delivered. A sub video code stream is a part of the initial video code stream: specifically, each GOP (group of pictures) in the initial video code stream can serve as one sub video code stream, and a GOP that needs to be modified is a sub video code stream to be modified.
In specific implementation, a sub-video code stream to be modified in an initial video code stream is determined, that is, video playing time corresponding to a video frame to be modified in the initial video code stream is determined according to content to be modified, and the sub-video code stream to be modified is determined according to the video playing time.
Further, determining the sub-video code stream to be modified according to the video playing time includes:
acquiring a video key frame list, and dividing the initial video code stream into a plurality of picture groups according to the mapping relation between video frames in the video key frame list and video playing time, wherein the starting video frame and the ending video frame of each picture group are video key frames;
determining video playing time intervals corresponding to the plurality of picture groups respectively, and determining a target video playing time interval to which video playing time corresponding to the video frame to be modified belongs;
and determining a target picture group corresponding to the target video playing time as the sub-video stream to be modified.
Specifically, the video playing time is the playing time of a video frame within the video to be delivered, and the video key frame list records information about the video key frames: a video frame corresponding to a video playing time recorded in the list is a video key frame. Which video frames are key frames can therefore be determined from the mapping between video playing times and video frames in the video key frame list.
Common encoding formats are H264 and H265. Taking H264 as an example: H264 uses inter-frame coding, recording only the changes between video frames, and the video is encoded in groups, each group being a GOP (Group of Pictures). Each GOP begins with a key frame; a frame is one picture of the video, and a key frame is a complete picture. The frames in the middle of a GOP are non-key frames and are incomplete. When decoding, a key frame can be decoded into a complete video frame without referring to any other frame, whereas a non-key frame is computed from the key frame and the frames before and after it.
Therefore, the encoded initial video code stream consists of a number of consecutive groups of pictures. During encoding, a mapping between the key frames and their playing times in the video can be established, and the video key frame list of the video can be generated from this mapping.
After the video to be delivered has been encoded into the initial video code stream, if there is a requirement to modify the initial video code stream, the content to be modified is determined from that requirement, the video playing time corresponding to the video frame to be modified in the initial video code stream is then determined from the content to be modified, and the sub video code stream to be modified is determined from that video playing time.
The sub video code stream to be modified is a group of pictures in the initial video code stream. A typical video file provides a video key frame list that maps video playing times to the positions of the key frames in the initial video code stream, so when a video playing time is known, the corresponding key frame position can be obtained from the video key frame list.
Therefore, after the video playing time corresponding to the video frame to be modified has been determined, the group of pictures containing that frame must be identified. Specifically, the video key frame list of the initial video code stream is obtained, the distribution of the key frames in the initial video code stream is determined from the mapping between video frames and video playing times in that list, and the initial video code stream is divided into multiple groups of pictures according to that distribution.
After the division is complete, the video playing time interval corresponding to each group of pictures can be determined; the interval containing the playing time of the video frame to be modified is taken as the target video playing time interval, and the target group of pictures corresponding to it is determined as the sub video code stream to be modified.
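For illustration only, the following Python sketch shows one way the interval lookup described above could be implemented; the list of key-frame playback times and the helper name are assumptions made for this example, not part of the embodiment itself.

```python
from bisect import bisect_right

def find_target_gop(keyframe_times, modify_time):
    """Locate the group of pictures (GOP) whose playback interval contains
    `modify_time`, given the sorted playback times (in seconds) of the key
    frames that start each GOP. Returns the GOP index and its interval;
    the last GOP is treated as open-ended."""
    if not keyframe_times or modify_time < keyframe_times[0]:
        raise ValueError("time precedes the first key frame")
    idx = bisect_right(keyframe_times, modify_time) - 1
    start = keyframe_times[idx]
    end = keyframe_times[idx + 1] if idx + 1 < len(keyframe_times) else None
    return idx, (start, end)

# Key frames at 0 s, 2 s and 4 s; a frame to be modified at t = 2.5 s
# falls into the second GOP, which spans [2.0, 4.0).
print(find_target_gop([0.0, 2.0, 4.0], 2.5))  # -> (1, (2.0, 4.0))
```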
By dividing the video code stream into multiple sub video code streams and using the sub video code stream as the minimum operation granularity, when a video frame in the video code stream needs to be modified, only the other video frames in the group of pictures (sub video code stream) to which it belongs need to be processed, and the video frames in the other sub video code streams are left untouched, which saves the resources consumed by the modification process and improves modification efficiency.
In addition, after dividing the initial video code stream into a plurality of picture groups, the method further includes:
determining video playing time intervals corresponding to the plurality of picture groups respectively, and determining at least two target video playing time intervals to which video playing time corresponding to the video frame to be modified belongs;
if any two or more target video playing time intervals are continuous in the at least two target video playing time intervals, combining the target picture groups corresponding to any two or more continuous target video playing time intervals, and taking the combined result as the sub-video code stream to be modified.
Specifically, the GOPs covering the time range of the modification are located according to that range; if several of the involved GOPs are consecutive, they can be combined and processed as a single GOP. That is, after the groups of pictures (GOPs) to be modified in the initial video code stream have been determined, two or more consecutive GOPs may be merged and handled as one: once at least two target video playing time intervals have been determined, if any two or more of them are consecutive, the target groups of pictures corresponding to those consecutive intervals are merged, and the merged result is used as the sub video code stream to be modified.
Two or more consecutive target groups of pictures are merged because the processing of each target group of pictures is essentially the same: each one amounts to a complete pass of the processing loop, so merging consecutive target groups of pictures reduces the number of passes, which helps shorten the time consumed by the modification and thus improves video modification efficiency.
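As a minimal sketch of this merging step (the integer-index representation of the GOPs is an assumption made for the example):

```python
def merge_consecutive_gops(target_indices):
    """Merge runs of consecutive GOP indices so that adjacent target GOPs
    are processed as one sub video code stream. Returns inclusive
    (first_index, last_index) runs."""
    runs = []
    for idx in sorted(set(target_indices)):
        if runs and idx == runs[-1][1] + 1:
            runs[-1] = (runs[-1][0], idx)   # extend the current run
        else:
            runs.append((idx, idx))         # start a new run
    return runs

# GOPs 3, 4 and 5 are consecutive and become one processing unit;
# GOP 9 stays separate.
print(merge_consecutive_gops([3, 4, 5, 9]))  # -> [(3, 5), (9, 9)]
```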
In specific implementation, before determining the sub-video code stream to be modified in the initial video code stream, the video key frame list may be generated first, and specifically, the method may be implemented in the following manner:
determining a video key frame of the initial video code stream according to the data packet characteristics of each video frame in the initial video code stream, and determining the video playing time of the video key frame in the initial video code stream;
and establishing a mapping relation between the video key frames and the video playing time, and generating the video key frame list.
Specifically, the sub video code stream to be modified is determined by means of the mapping between video key frames and video playing times in the video key frame list, so the video key frame list can be constructed before that determination is made. Concretely, the video file can be scanned and the video key frames identified from the data packet characteristics of each video frame in the initial video code stream, the data packet characteristics serving as the identification information that marks a video frame as a video key frame.
After the video key frame is determined, the video playing time of the video key frame in the initial video code stream can be determined, then the mapping relation between the video playing time and the video key frame is established, and a video key frame list is generated based on the mapping relation.
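The embodiment does not name a concrete tool for this scan. As one possible sketch, the PyAV library (an assumption, not part of the embodiment) can walk the compressed packets without decoding them and record the playback time of each key frame:

```python
import av  # PyAV; an assumed choice, not specified by the embodiment

def build_keyframe_list(path):
    """Scan the compressed packets of a video file and record the playback
    time of every key frame (here the "data packet characteristic" is simply
    the packet's key-frame flag). Returns (time_in_seconds, packet_index) pairs."""
    keyframes = []
    with av.open(path) as container:
        stream = container.streams.video[0]
        for i, packet in enumerate(container.demux(stream)):
            if packet.is_keyframe and packet.pts is not None:
                keyframes.append((float(packet.pts * stream.time_base), i))
    return keyframes
```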
Determining the video key frames in the video code stream from the mapping between video playing times and video key frames in the video key frame list ensures the accuracy of the result of identifying the target group of pictures.
And 104, acquiring a coding configuration parameter according to the decoding parameter for decoding the video frame, and initializing an encoder by using the coding configuration parameter.
In a specific implementation, the encoding configuration parameters are obtained according to the decoding parameters used to decode the video frames; that is, the decoding parameters used to decode the video frames are obtained and parsed to obtain the encoding configuration parameters used to initialize the encoder.
Specifically, the encoding configuration parameters include, but are not limited to, a sequence parameter set, a picture parameter set, and the like.
As described above, after the groups of pictures (GOPs) to be modified in the initial video code stream have been determined, and assuming each of them is named GOPx, the corresponding decoder configuration can be extracted when the video frames in GOPx are decoded; that is, the decoding parameters used to decode the video frames are obtained.
Since the commonly used encoding format is H264 or H265, the relevant content can be located and extracted according to the SPS/PPS syntax: the extracted decoder configuration (the decoding parameters) is parsed to obtain the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS). The SPS and PPS contain the information needed to initialize an H264 decoder, such as the profile and level used for encoding, the picture width and height, the deblocking filter, and so on. From them, the configuration parameters of the encoder that will encode the modified content are extracted; these generally include the video frame rate, parameters related to the video's color space, the video's sampling format, the frame cropping position and size, and other configuration parameters required by the particular standard.
After the encoding configuration parameters have been obtained by parsing, the encoder can be initialized with them.
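A hedged sketch of this step, again assuming PyAV; the codec name "libx264" and the fallback values are assumptions made for the example, not values prescribed by the embodiment:

```python
import av  # PyAV; assumed, the embodiment does not prescribe a library

def init_encoder_like_source(src_path, dst_path):
    """Read the decoder configuration of the source video stream (resolution,
    pixel format, frame rate -- the kind of information the H264/H265 SPS/PPS
    carries) and initialize an encoder for the modified GOP with matching
    configuration, so the re-encoded segment stays compatible with the rest
    of the bitstream."""
    container = av.open(src_path)
    src = container.streams.video[0]

    output = av.open(dst_path, mode="w")
    enc = output.add_stream("libx264", rate=src.average_rate or 25)
    enc.width = src.codec_context.width
    enc.height = src.codec_context.height
    enc.pix_fmt = src.codec_context.pix_fmt or "yuv420p"
    return container, output, enc
```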
In the embodiments of the present application, the encoding configuration parameters used to initialize the encoder are determined from the decoding parameters used to decode the video frames, and the decoder's decoding parameters remain unchanged throughout the whole video processing procedure, which improves decoder compatibility.
And 106, modifying the video decoding data to generate a corresponding modification result, and encoding the modification result through the encoder to generate a corresponding encoding result.
Specifically, after the video decoding data has been obtained by decoding, it can be modified according to the information to be modified of the sub video code stream to be modified. The modification may concern the time axis, such as adding or removing picture content, or the picture itself, such as color grading or adding special effects.
By dividing the initial video code stream into multiple sub video code streams and using the sub video code stream as the minimum operation granularity, when a video frame in the initial video code stream needs to be modified, only the other video frames in the group of pictures (sub video code stream) to which it belongs need to be processed, and the video frames in the other sub video code streams are left untouched, which saves the resources consumed by the modification process and improves modification efficiency.
In practical applications, the extracted encoding configuration parameters that guide the content modification are used to initialize a video encoder; assume this encoder is named ENCx. For any GOPx, the portion of its video decoding data that is not involved in the modification (generally the data before and after the content to be modified) is kept unchanged, and only the video decoding data involved in the modification is changed, yielding the modified video decoding data.
The modified video decoding data is then fed into the video encoder ENCx for encoding, producing a new version, GOPx1.
The above processing for any one GOP can be done in a streaming fashion: pictures that need no modification are sent directly to the encoder, and pictures that need modification are sent to the encoder after being modified.
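The streaming behaviour described above might look like the following sketch; `needs_change` and `modify` are hypothetical callbacks supplied by the editing step, and the encoder and output objects follow the PyAV-style interface assumed earlier:

```python
def reencode_gop(decoded_frames, needs_change, modify, enc_stream, output):
    """Streaming re-encode of one GOP: frames that need no modification are
    forwarded to the encoder as-is, frames that do are modified first."""
    for frame in decoded_frames:
        out_frame = modify(frame) if needs_change(frame) else frame
        for packet in enc_stream.encode(out_frame):
            output.mux(packet)
    for packet in enc_stream.encode():   # flush buffered packets
        output.mux(packet)
```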
The initial video code stream is divided into multiple sub video code streams, and the sub video code stream is used as the minimum operation granularity. When a video frame in the initial video code stream needs to be modified, the group of pictures to which it belongs is determined, the video frames in that group are decoded, the decoded video data is modified according to the modification requirement, and the modified data is encoded with the encoder. In this process only the other video frames in the group of pictures (sub video code stream) containing the frame need to be processed, and the video frames in the other sub video code streams are left untouched, which saves the resources consumed by the modification and improves modification efficiency.
And 108, taking the coding result as a target sub-video code stream obtained by modifying the sub-video code stream to be modified, and generating a modified target video code stream based on the target sub-video code stream.
In specific implementation, a modified target video code stream is generated based on the target sub-video stream, that is, the target sub-video code stream is used to replace the to-be-modified sub-video code stream of the initial video code stream, so as to generate a modified target video code stream.
Specifically, the initial video code stream is divided into multiple sub video code streams, and only the sub video code streams that need modification are modified, yielding the modified target sub video code stream GOPx1. The new GOPx1 is then substituted for the original GOPx in the initial video code stream, so the video content is replaced seamlessly: the modification requirement is met without modifying every video frame in the initial video code stream, which improves modification efficiency and reduces the resources consumed by the modification process.
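Seen purely as data, the substitution is just a splice over the sequence of GOPs; a trivial sketch (the list-of-GOPs representation is an assumption for the example):

```python
def splice_gop(gops, target_index, new_gop):
    """Replace the GOP at `target_index` with the re-encoded GOP; every other
    GOP is reused verbatim, which is what keeps the modification cheap."""
    return gops[:target_index] + [new_gop] + gops[target_index + 1:]
```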
In addition, after generating the modified target video code stream based on the target sub-video stream, the method further includes:
determining a first video time length of the initial video code stream and a second video time length of the target video code stream;
judging whether the first video time length is consistent with the second video time length;
if not, acquiring audio data corresponding to the initial video code stream, and modifying the audio data according to the target video code stream.
Specifically, after the sub-video code stream to be modified is modified to obtain a modified target video code stream, whether the modification relates to the adjustment of the video time axis can be determined according to the video time lengths respectively corresponding to the target video code stream and the initial video code stream; if the video time lengths of the two are consistent, the adjustment of the video time axis is not involved, and if the video time lengths of the two are inconsistent, the adjustment of the video time axis is involved.
In practical application, if the picture content in the video code stream is increased or decreased, the video duration of the target video code stream is correspondingly increased or shortened relative to the initial video code stream; if only the picture itself is modified, such as color matching, special effects, etc., the video duration of the target video code stream will not change relative to the initial video code stream.
If no adjustment of the video time axis is involved, the target video code stream is combined with the audio content corresponding to the initial video code stream for delivery; if the video time axis is adjusted, the audio content corresponding to the initial video code stream must also be adjusted before being combined with the target video code stream for delivery.
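A small sketch of the duration check that drives this audio decision (the tolerance value is an arbitrary assumption for the example):

```python
def audio_needs_adjustment(initial_duration, target_duration, tol=1e-3):
    """If the modified video's duration differs from the original's, the time
    axis changed and the original audio must be adjusted before it is re-muxed
    with the target video code stream."""
    return abs(initial_duration - target_duration) > tol
```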
An embodiment of the present application provides a video processing method, which comprises: determining a sub video code stream to be modified in an initial video code stream, and decoding the video frames in the sub video code stream to be modified to generate corresponding video decoding data; modifying the video decoding data according to the decoding parameters used to decode the video frames and the information to be modified of the sub video code stream to be modified, to generate a corresponding modification result; encoding the modification result to generate a corresponding encoding result; and taking the encoding result as the target sub video code stream obtained by modifying the sub video code stream to be modified, and generating a modified target video code stream based on the target sub video code stream.
In the embodiments of the present application, the video code stream is divided into multiple sub video code streams and the sub video code stream is used as the minimum operation granularity, so that when only part of the content in the video code stream needs to be modified, only the affected sub video code streams have to be changed, which improves modification efficiency and saves the resources consumed by the modification process.
Referring to fig. 2, the application of the video processing method provided in the embodiment of the present application in the video field is taken as an example to further describe the video processing method. Fig. 2 shows a flow chart of a processing procedure of a video processing method applied in the video field according to an embodiment of the present application, which specifically includes the following steps:
Step 208, determining the content to be modified of the initial video code stream, and determining the video playing time corresponding to the video frame to be modified in the initial video code stream according to the content to be modified.
Step 210, determining a target video playing time interval to which the video playing time corresponding to the video frame to be modified belongs.
Step 212, determining the target picture group corresponding to the target video playing time as the sub-video stream to be modified.
Specifically, if any two or more target video playing time intervals are continuous in the target video playing time intervals, combining target picture groups corresponding to any two or more continuous target video playing time intervals, and taking a combination result as the sub-video code stream to be modified.
Step 214, decoding the video frame in the sub video code stream to be modified to generate corresponding video decoding data.
Step 218, modifying the video decoding data according to the information to be modified of the sub video code stream to be modified, and generating a corresponding modification result.
Step 220, initializing a video encoder by using the sequence parameter set and the image parameter set, and encoding the modification result by using the video encoder to generate a corresponding encoding result.
Specifically, the coding result is used as a target sub video code stream obtained by modifying the sub video code stream to be modified;
In particular, if no adjustment of the time axis is involved, the content may be delivered at this time. If the time axis adjustment is involved, the corresponding audio content also needs to be combined with the new video for delivery after some adjustment.
In the embodiments of the present application, the video code stream is divided into multiple sub video code streams and the sub video code stream is used as the minimum operation granularity, so that when only part of the content in the video code stream needs to be modified, only the affected sub video code streams have to be changed, which improves modification efficiency and saves the resources consumed by the modification process.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video processing apparatus, and fig. 3 shows a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
a determining module 302, configured to determine a to-be-modified sub video code stream in an initial video code stream, and decode a video frame in the to-be-modified sub video code stream to generate corresponding video decoding data;
a modification module 304, configured to modify the video decoding data according to the decoding parameters for decoding the video frame and the information to be modified of the sub-video code stream to be modified, and generate a corresponding modification result;
the encoding module 306 is configured to encode the modification result to generate a corresponding encoding result, and use the encoding result as a target sub-video code stream obtained by modifying the sub-video code stream to be modified;
a generating module 308 configured to generate a modified target video code stream based on the target sub-video stream.
Optionally, the determining module 302 includes:
and the determining submodule is configured to determine video playing time corresponding to the video frame to be modified in the initial video code stream according to the content to be modified, and determine the sub-video code stream to be modified according to the video playing time.
Optionally, the determining sub-module includes:
the dividing unit is configured to acquire a video key frame list, and divide the initial video code stream into a plurality of picture groups according to the mapping relation between the video frames in the video key frame list and the video playing time, wherein the starting video frame and the ending video frame of each picture group are video key frames;
the first determining unit is configured to determine video playing time intervals corresponding to the multiple picture groups respectively, and determine a target video playing time interval to which video playing time corresponding to the video frame to be modified belongs;
and the second determining unit is configured to determine the target picture group corresponding to the target video playing time as the sub-video stream to be modified.
Optionally, the determining sub-module further includes:
a third determining unit, configured to determine video playing time intervals corresponding to the multiple groups of pictures respectively, and determine at least two target video playing time intervals to which video playing times corresponding to the video frames to be modified belong;
and the merging unit is configured to merge target picture groups corresponding to any two or more continuous target video playing time intervals if any two or more continuous target video playing time intervals exist in the at least two target video playing time intervals, and take a merging result as the to-be-modified sub-video code stream.
Optionally, the video processing apparatus further includes:
the video key frame determining module is configured to determine a video key frame of the initial video code stream according to the data packet characteristics of each video frame in the initial video code stream, and determine the video playing time of the video key frame in the initial video code stream;
and the establishing module is configured to establish a mapping relation between the video key frames and the video playing time and generate the video key frame list.
Optionally, the modifying module 304 includes:
the analysis sub-module is configured to acquire decoding parameters for decoding the video frame, analyze the decoding parameters, and acquire a sequence parameter set and an image parameter set for initializing an encoder;
and the modification sub-module is configured to modify the video decoding data according to the information to be modified of the sub-video code stream to be modified, the sequence parameter set and the image parameter set.
Optionally, the encoding module 306 includes:
and the coding sub-module is configured to initialize a video coder by using the sequence parameter set and the image parameter set, and code the modification result by using the video coder to generate a corresponding coding result.
Optionally, the generating module 308 includes:
and the generation submodule is configured to replace the to-be-modified sub video code stream of the initial video code stream by using the target sub video code stream, and generate a modified target video code stream.
Optionally, the video processing apparatus further includes:
a video duration determination module configured to determine a first video duration of the initial video stream and a second video duration of the target video stream;
the judging module is configured to judge whether the first video time length is consistent with the second video time length;
if the operation result of the judgment module is negative, the audio data modification module is operated;
and the audio data modification module is configured to acquire audio data corresponding to the initial video code stream and modify the audio data according to the target video code stream.
In the embodiments of the present application, the video code stream is divided into multiple sub video code streams and the sub video code stream is used as the minimum operation granularity, so that when only part of the content in the video code stream needs to be modified, only the affected sub video code streams have to be changed, which improves modification efficiency and saves the resources consumed by the modification process.
The above is a schematic scheme of a video processing apparatus of the present embodiment. It should be noted that the technical solution of the video processing apparatus belongs to the same concept as the technical solution of the video processing method, and details that are not described in detail in the technical solution of the video processing apparatus can be referred to the description of the technical solution of the video processing method.
FIG. 4 illustrates a block diagram of a computing device 400 provided according to an embodiment of the present application. The components of the computing device 400 include, but are not limited to, a memory 410 and a processor 420. Processor 420 is coupled to memory 410 via bus 430 and database 450 is used to store data.
Computing device 400 also includes access device 440, access device 440 enabling computing device 400 to communicate via one or more networks 460. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 440 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)) whether wired or wireless, such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 400 and other components not shown in FIG. 4 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 4 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 400 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 400 may also be a mobile or stationary server.
The processor 420 is configured to execute computer-executable instructions, and the steps of the video processing method are implemented when the processor executes the computer-executable instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video processing method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video processing method.
An embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions, which when executed by a processor, implement the steps of the video processing method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the above-mentioned video processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the above-mentioned video processing method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code which may be in the form of source code, object code, an executable file or some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application embodiment is not limited by the described acts or sequences, because some steps may be performed in other sequences or simultaneously according to the present application embodiment. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that acts and modules referred to are not necessarily required to implement the embodiments of the application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments of the application and its practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.
Claims (11)
1. A video processing method, comprising:
determining a sub video code stream to be modified in an initial video code stream, and decoding a video frame in the sub video code stream to be modified to generate corresponding video decoding data;
obtaining coding configuration parameters according to decoding parameters for decoding the video frame, and initializing an encoder by using the coding configuration parameters;
modifying the video decoding data to generate a corresponding modification result, and encoding the modification result through the encoder to generate a corresponding encoding result;
and taking the coding result as a target sub-video code stream obtained by modifying the sub-video code stream to be modified, and generating a modified target video code stream based on the target sub-video code stream.
2. The video processing method according to claim 1, wherein said determining the sub-video code stream to be modified in the initial video code stream comprises:
determining a video playing time corresponding to a video frame to be modified in the initial video code stream according to content to be modified, and determining the sub-video code stream to be modified according to the video playing time.
3. The video processing method according to claim 2, wherein said determining the sub-video code stream to be modified according to the video playing time comprises:
acquiring a video key frame list, and dividing the initial video code stream into a plurality of groups of pictures according to a mapping relationship between the video frames in the video key frame list and video playing times, wherein the starting video frame and the ending video frame of each group of pictures are video key frames;
determining video playing time intervals respectively corresponding to the plurality of groups of pictures, and determining a target video playing time interval to which the video playing time corresponding to the video frame to be modified belongs;
and determining a target group of pictures corresponding to the target video playing time interval as the sub-video code stream to be modified.
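A small sketch of the lookup in claims 2 and 3 follows: the key frame list partitions the stream into group-of-pictures (GOP) intervals, and the interval containing the playing time of the frame to be modified identifies the target GOP. The key-frame times and the stream duration below are illustrative values only.

```python
from bisect import bisect_right

# Playing times (in seconds) of the video key frames, in ascending order, and
# the total stream duration; both are made-up example values.
key_frame_times = [0.0, 2.0, 4.0, 6.0, 8.0]
stream_duration = 10.0


def gop_interval_for(play_time):
    """Return the (start, end) playing-time interval of the GOP containing play_time."""
    idx = max(bisect_right(key_frame_times, play_time) - 1, 0)
    start = key_frame_times[idx]
    end = key_frame_times[idx + 1] if idx + 1 < len(key_frame_times) else stream_duration
    return start, end


# A frame to be modified that plays at 5.3 s falls in the GOP spanning [4.0, 6.0).
print(gop_interval_for(5.3))  # -> (4.0, 6.0)
```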
4. The video processing method according to claim 3, wherein after dividing the initial video code stream into the plurality of groups of pictures, the method further comprises:
determining video playing time intervals respectively corresponding to the plurality of groups of pictures, and determining at least two target video playing time intervals to which the video playing times corresponding to the video frames to be modified belong;
and if any two or more of the at least two target video playing time intervals are consecutive, combining the target groups of pictures corresponding to the two or more consecutive target video playing time intervals, and taking the combined result as the sub-video code stream to be modified.
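A sketch of the merging step in claim 4, under the assumption that each target GOP is represented by its (start, end) playing-time interval; consecutive intervals are combined so that a single continuous sub-video code stream is re-encoded. The interval values are illustrative only.

```python
def merge_consecutive(intervals):
    """Merge (start, end) intervals that touch or overlap into single intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


# The target GOPs [4.0, 6.0) and [6.0, 8.0) are consecutive, so they are
# combined into a single sub-stream covering [4.0, 8.0).
print(merge_consecutive([(4.0, 6.0), (6.0, 8.0)]))  # -> [(4.0, 8.0)]
```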
5. The video processing method according to any one of claims 1 to 4, wherein before determining the sub-video code stream to be modified in the initial video code stream, the method further comprises:
determining video key frames of the initial video code stream according to data packet characteristics of each video frame in the initial video code stream, and determining video playing times of the video key frames in the initial video code stream;
and establishing a mapping relationship between the video key frames and the video playing times, and generating the video key frame list.
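A PyAV-based sketch of claim 5 follows: the key frames are identified from per-packet characteristics (the keyframe flag) and mapped to their playing times to form the video key frame list. The file path is a placeholder, and one packet is assumed to carry one video frame.

```python
import av


def build_key_frame_list(path):
    """Return (packet_index, playing_time_in_seconds) pairs for the video key frames."""
    key_frames = []
    with av.open(path) as container:
        stream = container.streams.video[0]
        for index, packet in enumerate(container.demux(stream)):
            if packet.pts is None:
                continue  # skip the final flush packet, which carries no timestamp
            if packet.is_keyframe:
                # pts is expressed in stream time_base units; multiplying gives seconds.
                key_frames.append((index, float(packet.pts * stream.time_base)))
    return key_frames
```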
6. The video processing method of claim 1, wherein obtaining the encoding configuration parameters according to the decoding parameters for decoding the video frame comprises:
acquiring the decoding parameters used for decoding the video frame, analyzing the decoding parameters, and obtaining the encoding configuration parameters for initializing the encoder.
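A sketch of claim 6, assuming PyAV: the parameters already known to the decoder (codec, resolution, pixel format, frame rate) are read out and reused to initialize the encoder, so the re-encoded segment stays compatible with the rest of the initial code stream. The dictionary layout is an assumption made for the example.

```python
import av


def encoder_config_from_decoder(path):
    """Collect encoding configuration parameters from the stream's decoding parameters."""
    with av.open(path) as container:
        stream = container.streams.video[0]
        ctx = stream.codec_context
        return {
            "codec_name": ctx.name,   # e.g. "h264"
            "width": ctx.width,
            "height": ctx.height,
            "pix_fmt": ctx.pix_fmt,
            "rate": stream.average_rate,
        }


def init_encoder(output_path, cfg):
    """Initialize an encoder (output stream) with the collected configuration."""
    out = av.open(output_path, mode="w")
    encoder = out.add_stream(cfg["codec_name"], rate=cfg["rate"])
    encoder.width = cfg["width"]
    encoder.height = cfg["height"]
    encoder.pix_fmt = cfg["pix_fmt"]
    return out, encoder
```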
7. The video processing method according to claim 1, wherein said generating a modified target video code stream based on the target sub-video code stream comprises:
replacing the sub-video code stream to be modified in the initial video code stream with the target sub-video code stream to generate the modified target video code stream.
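A schematic sketch of the replacement in claim 7, with the initial code stream viewed as an ordered list of GOP segments; container-level details such as remuxing and timestamp rewriting are deliberately left out, and the segment objects are placeholders.

```python
def splice(gop_segments, target_index, target_sub_stream):
    """Replace the GOP to be modified with the re-encoded target sub-video code stream."""
    return (gop_segments[:target_index]
            + [target_sub_stream]
            + gop_segments[target_index + 1:])


# Example: the third GOP (index 2) of the initial stream is replaced.
segments = ["gop0", "gop1", "gop2", "gop3"]
print(splice(segments, 2, "gop2_modified"))  # -> ['gop0', 'gop1', 'gop2_modified', 'gop3']
```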
8. The video processing method according to claim 1, wherein after generating the modified target video code stream based on the target sub-video code stream, the method further comprises:
determining a first video time length of the initial video code stream and a second video time length of the target video code stream;
determining whether the first video time length is consistent with the second video time length;
if not, acquiring audio data corresponding to the initial video code stream, and modifying the audio data according to the target video code stream.
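A sketch of the consistency check in claim 8, assuming PyAV: if re-encoding changed the video duration, the audio data taken from the initial code stream has to be adjusted to match. Only the duration comparison is shown; stream.duration is assumed to be set by the container, and the actual audio trimming or padding is left abstract.

```python
import av


def video_duration_seconds(path):
    """Video time length of the first video stream, in seconds."""
    with av.open(path) as container:
        stream = container.streams.video[0]
        # duration is in stream time_base units; some containers may not set it.
        return float(stream.duration * stream.time_base)


def audio_needs_adjustment(initial_path, target_path, tolerance=0.001):
    """True when the first and second video time lengths are inconsistent."""
    first = video_duration_seconds(initial_path)
    second = video_duration_seconds(target_path)
    return abs(first - second) > tolerance
```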
9. A video processing apparatus, comprising:
a determining module, configured to determine a sub-video code stream to be modified in an initial video code stream, and decode a video frame in the sub-video code stream to be modified to generate corresponding video decoding data;
a parameter determination module, configured to obtain encoding configuration parameters according to decoding parameters used for decoding the video frame, and initialize an encoder by using the encoding configuration parameters;
an encoding module, configured to modify the video decoding data to generate a corresponding modification result, and encode the modification result through the encoder to generate a corresponding encoding result;
and a generating module, configured to take the encoding result as a target sub-video code stream obtained by modifying the sub-video code stream to be modified, and generate a modified target video code stream based on the target sub-video code stream.
10. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, wherein the processor implements the steps of the video processing method according to any one of claims 1 to 8 when executing the computer-executable instructions.
11. A computer-readable storage medium, characterized in that it stores computer instructions which, when executed by a processor, implement the steps of the video processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110296439.3A CN113079406A (en) | 2021-03-19 | 2021-03-19 | Video processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110296439.3A CN113079406A (en) | 2021-03-19 | 2021-03-19 | Video processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113079406A true CN113079406A (en) | 2021-07-06 |
Family
ID=76612852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110296439.3A Pending CN113079406A (en) | 2021-03-19 | 2021-03-19 | Video processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113079406A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101785298A (en) * | 2007-08-09 | 2010-07-21 | 皇家飞利浦电子股份有限公司 | Method and device for creating a modified video from an input video |
CN101859585A (en) * | 2010-07-01 | 2010-10-13 | 福建省三奥信息科技有限公司 | System and method for frame-accuracy cutting of video material |
CN104185077A (en) * | 2014-09-12 | 2014-12-03 | 飞狐信息技术(天津)有限公司 | Video editing method and device |
CN106534971A (en) * | 2016-12-05 | 2017-03-22 | 腾讯科技(深圳)有限公司 | Audio/ video clipping method and device |
CN106803992A (en) * | 2017-02-14 | 2017-06-06 | 北京时间股份有限公司 | Video clipping method and device |
CN107484004A (en) * | 2017-07-24 | 2017-12-15 | 北京奇艺世纪科技有限公司 | A kind of method for processing video frequency and device |
CN109525901A (en) * | 2018-11-27 | 2019-03-26 | Oppo广东移动通信有限公司 | Method for processing video frequency, device, electronic equipment and computer-readable medium |
JP2020108032A (en) * | 2018-12-27 | 2020-07-09 | 日本放送協会 | Video code stream editing device and program |
CN111885416A (en) * | 2020-07-17 | 2020-11-03 | 北京来也网络科技有限公司 | Audio and video correction method, device, medium and computing equipment |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113973229A (en) * | 2021-08-11 | 2022-01-25 | 上海卓越睿新数码科技股份有限公司 | Online editing method for processing misstatement in video |
CN113973229B (en) * | 2021-08-11 | 2023-12-29 | 上海卓越睿新数码科技股份有限公司 | Online editing method for processing mouth errors in video |
CN113784209A (en) * | 2021-09-03 | 2021-12-10 | 上海哔哩哔哩科技有限公司 | Multimedia data stream processing method and device |
CN113784209B (en) * | 2021-09-03 | 2023-11-21 | 上海哔哩哔哩科技有限公司 | Multimedia data stream processing method and device |
CN115545847A (en) * | 2022-10-21 | 2022-12-30 | 深圳市凯盛浩科技有限公司 | Commodity identification and search method and system based on video stream |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7011031B2 (en) | Chroma prediction method and device | |
CN113079406A (en) | Video processing method and device | |
US10362313B2 (en) | Video encoding method and video encoding for signaling SAO parameters | |
TWI692245B (en) | Video decoding apparatus, video encoding method and apparatus, and computer-readable storage medium | |
CN110198492B (en) | Video watermark adding method, device, equipment and storage medium | |
RU2370906C2 (en) | Method and device for editing of video fragments in compressed area | |
Gao et al. | Recent standard development activities on video coding for machines | |
CN105432083A (en) | Hybrid backward-compatible signal encoding and decoding | |
TW201836355A (en) | Video decoding method | |
CN107404648B (en) | A kind of multi-channel video code-transferring method based on HEVC | |
US9167274B1 (en) | Generating synchronized dictionaries for sparse coding | |
WO2021057697A1 (en) | Video encoding and decoding methods and apparatuses, storage medium, and electronic device | |
US20240114147A1 (en) | Systems, methods and bitstream structure for hybrid feature video bitstream and decoder | |
TWI559751B (en) | Methods, systems, and computer program products for assessing a macroblock candidate for conversion to a skipped macroblock | |
WO2024072865A1 (en) | Systems and methods for object boundary merging, splitting, transformation and background processing in video packing | |
US20240323453A1 (en) | A method or an apparatus for estimating film grain parameters | |
CN114419203A (en) | File processing method and device | |
JP7532362B2 (en) | Image processing device and method | |
CN114885178A (en) | Extremely-low-bit-rate face video hybrid compression method and system based on bidirectional frame prediction | |
CN114302175A (en) | Video processing method and device | |
CN114079823A (en) | Video rendering method, device, equipment and medium based on Flutter | |
WO2024149392A1 (en) | Method, apparatus, and medium for visual data processing | |
WO2023160717A1 (en) | Method, apparatus, and medium for video processing | |
CN112492358B (en) | Screen projection method and device, computer equipment and storage medium | |
WO2024083202A1 (en) | Method, apparatus, and medium for visual data processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210706 |