CN113099272A - Video processing method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN113099272A
CN113099272A (application CN202110390296.2A)
Authority
CN
China
Prior art keywords
video frame
video
frame
target video
video stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110390296.2A
Other languages
Chinese (zh)
Inventor
沈楷博
周浩
汪文轩
张林洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202110390296.2A
Publication of CN113099272A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784Data processing by the network
    • H04N21/64792Controlling the complexity of the content stream, e.g. by dropping packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The present disclosure relates to a video processing method and apparatus, an electronic device, and a storage medium, the method including: acquiring a video message of a video stream, wherein the video message comprises a data packet corresponding to at least one video frame in the video stream; based on the video message, sequentially detecting the loss information of the data packets corresponding to the video frames in the video stream; judging whether the target video frame meets a video frame removal condition according to the loss information corresponding to the currently detected target video frame; and under the condition that the target video frame meets the video frame removal condition, removing at least one video frame corresponding to the target video frame from the video stream to obtain the processed video stream. The disclosed method and apparatus can meet the requirement of displaying a clear picture when playing video under poor network conditions.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
In Internet of Things scenes, such as security scenes, transmitted video packets may suffer from packet loss, disorder, and similar problems caused by unstable network transmission, large network fluctuations, and the like.
Disclosure of Invention
The present disclosure proposes a technical solution for video processing.
According to an aspect of the present disclosure, there is provided a video processing method including: acquiring a video message of a video stream, wherein the video message comprises a data packet corresponding to at least one video frame in the video stream; based on the video message, sequentially detecting loss information of data packets corresponding to video frames in the video stream; judging whether the target video frame meets a video frame removal condition according to loss information corresponding to the currently detected target video frame; and under the condition that the target video frame meets a video frame removal condition, removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream. In this way, the requirement of displaying a clear picture when playing video under poor network conditions can be met.
In a possible implementation manner, the determining, according to the loss information corresponding to the currently detected target video frame, whether the target video frame meets a video frame removal condition includes: and under the condition that the loss number of the data packets corresponding to the target video frame is greater than or equal to a first number threshold value, determining that the target video frame meets a video frame removal condition. By the method, whether the target video frame meets the video frame removal condition or not can be determined conveniently.
In a possible implementation manner, the determining, according to the loss information corresponding to the currently detected target video frame, whether the target video frame meets a video frame removal condition includes: determining the number of data packets which are continuously lost by the target video frame according to the sequence identification of the lost data packets corresponding to the target video frame; determining that the target video frame satisfies a video frame removal condition if the number of data packets continuously lost by the target video frame is greater than or equal to a second number threshold. By the method, the number of the continuously lost data packets can be determined based on the sequence identification of the lost data packets, and whether the target video frame meets the video frame removal condition or not can be effectively determined based on the number of the continuously lost data packets.
In a possible implementation manner, the removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream includes: under the condition that the target video frame is a key frame, removing a first picture group where the target video frame is located from the video stream to obtain a processed video stream, wherein the first picture group comprises the target video frame and a non-key frame of which the time sequence is behind the target video frame; or, under the condition that the target video frame is a non-key frame, removing the target video frame from the video stream to obtain a processed video stream. By the method, corresponding video frame removal operation can be respectively carried out on the target video frames of the key frames and the non-key frames, so that the image quality of the decoded and played video frames is high in definition.
In a possible implementation manner, the removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream further includes: when the target video frame is a non-key frame, judging whether N non-key frames with time sequences being continuous before the target video frame are removed from the video stream or not, wherein N is a positive integer; in the case that the previous consecutive N non-key frames have been removed from the video stream, removing the target video frame from the video stream, and removing non-key frames in the second group of pictures in which the target video frame is located that are temporally subsequent to the target video frame. By the method, the corresponding video frame removing operation can be performed on the video frames continuously meeting the video frame removing condition, so that the image quality of the decoded and played video frames is high in definition.
In a possible implementation manner, the determining whether the target video frame meets a video frame removal condition according to the loss information corresponding to the currently detected target video frame includes: determining that the target video frame satisfies a video frame removal condition if the loss information indicates that the target video frame is lost; wherein the removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream includes: and removing a third picture group where the target video frame is located from the video stream to obtain a processed video stream, wherein the third picture group comprises a key frame corresponding to the target video frame and a non-key frame which is chronologically behind the key frame corresponding to the target video frame. By the method, under the condition that the whole video frame is lost, the played video frame can be a frame meeting the definition requirement by removing the third frame group where the target video frame with all lost data packets is located.
In a possible implementation manner, acquiring a video packet of a video stream includes: caching the video message of the video stream in a cache space; the method further comprises: sequentially detecting the out-of-order quantity of data packets corresponding to video frames in the video stream based on the video message; and adjusting the capacity of the cache space according to the loss number and/or the out-of-order number of the data packets detected in a preset time period to obtain the adjusted cache space, wherein the loss information of the data packets comprises the loss number of the data packets. In this way, the capacity of the buffer queue can be adjusted to match changing network conditions, realizing a variable-capacity buffer queue that makes it convenient to optimize the video playing quality in time and achieve a low-delay playing effect.
In a possible implementation manner, adjusting the capacity of the cache space according to the number of lost packets and/or the number of out-of-order packets detected within a preset time period to obtain an adjusted cache space includes: under the condition that the loss number and/or the disorder number exceed a preset threshold, increasing the capacity of the cache space according to a preset amplification factor to obtain an adjusted cache space; or, under the condition that the loss number and/or the disorder number do not exceed the preset threshold, reducing the capacity of the cache space according to a preset reduction multiple to obtain the adjusted cache space. Through the mode, the capacity of the cache space can be correspondingly adjusted based on the preset threshold, the preset magnification factor and the preset reduction factor, so that the video playing requirements under different network conditions can be effectively met.
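This adjustment rule can be sketched as follows; the threshold, the amplification and reduction factors, and the capacity bounds are illustrative assumptions, since the patent leaves them as configurable preset values:

```python
def adjust_capacity(capacity, lost, out_of_order, threshold=10,
                    grow_factor=2, shrink_factor=2,
                    min_capacity=16, max_capacity=1024):
    """Adjust the cache-space capacity for the packets observed in the
    last preset time period: enlarge it when loss/disorder exceeds the
    preset threshold, shrink it otherwise to keep playback latency low."""
    if lost > threshold or out_of_order > threshold:
        return min(capacity * grow_factor, max_capacity)
    return max(capacity // shrink_factor, min_capacity)
```

With a capacity of 64, a period with 20 lost packets doubles the buffer to 128, while a clean period halves it to 32, bounded by the assumed minimum and maximum.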
In one possible implementation, the method further includes: in the case that out-of-order data packets exist among the data packets corresponding to the target video frame, sorting the out-of-order data packets according to the sequence identifications of the data packets corresponding to the target video frame to obtain the processed video stream. In this way, the video frames in the processed video stream, and the data packets within each video message, are kept in the correct order, so that the played picture content is correct and a better playing effect is obtained.
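Assuming each data packet is held as a (sequence identifier, payload) pair, an assumed layout not specified by the patent, the reordering step reduces to a sort on the sequence identifier:

```python
def reorder_packets(packets):
    """Restore the order of out-of-order data packets within one video
    frame using their sequence identifiers; packets are assumed to be
    (seq_id, payload) tuples."""
    return sorted(packets, key=lambda packet: packet[0])
```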
In one possible implementation, the method further includes: analyzing the coding rate of the video stream and the coding format of the video stream based on the video message; and sending the processed video stream to a video decoder corresponding to the coding format according to the coding rate to obtain and play the decoded video stream. By the method, the processed video stream can be effectively played based on the coding code rate and the coding format.
According to an aspect of the present disclosure, there is provided a video processing apparatus including: the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a video message of a video stream, and the video message comprises a data packet corresponding to at least one video frame in the video stream; a detection module, configured to sequentially detect, based on the video packet, loss information of data packets corresponding to video frames in the video stream; the judging module is used for judging whether the target video frame meets a video frame removing condition or not according to the loss information corresponding to the currently detected target video frame; and the processing module is used for removing at least one video frame corresponding to the target video frame from the video stream under the condition that the target video frame meets a video frame removing condition to obtain a processed video stream.
In a possible implementation manner, the loss information of the data packet includes a number of lost data packets, where the determining module includes: the first determining submodule is used for determining that the target video frame meets a video frame removing condition under the condition that the loss number of the data packets corresponding to the target video frame is greater than or equal to a first number threshold value.
In a possible implementation manner, the loss information of the data packet includes a sequence identifier of the lost data packet, where the determining module includes: the data determining submodule is used for determining the number of data packets continuously lost by the target video frame according to the sequence identification of the lost data packets corresponding to the target video frame; and the second determining submodule is used for determining that the target video frame meets a video frame removal condition under the condition that the number of the data packets continuously lost by the target video frame is greater than or equal to a second number threshold.
In one possible implementation manner, the processing module includes: a first removing submodule, configured to remove, from the video stream, a first picture group in which the target video frame is located when the target video frame is a key frame, to obtain a processed video stream, where the first picture group includes the target video frame and a non-key frame whose timing sequence is after the target video frame; or, the second removing submodule is configured to remove the target video frame from the video stream to obtain a processed video stream when the target video frame is a non-key frame.
In a possible implementation manner, the processing module further includes: the judgment sub-module is used for judging whether N non-key frames with time sequences being continuous before the target video frame are removed from the video stream or not under the condition that the target video frame is a non-key frame, wherein N is a positive integer; a third removing sub-module, configured to remove the target video frame from the video stream and remove a non-key frame that is chronologically subsequent to the target video frame in a second group of pictures in which the target video frame is located, if the previous consecutive N non-key frames have been removed from the video stream.
In a possible implementation manner, the determining module includes: a third determining sub-module, configured to determine that the target video frame meets a video frame removal condition if the loss information indicates that the target video frame is lost; wherein the processing module comprises: and the fourth removing submodule is used for removing a third picture group where the target video frame is located from the video stream to obtain a processed video stream, wherein the third picture group comprises a key frame corresponding to the target video frame and a non-key frame behind the key frame corresponding to the target video frame in time sequence.
In a possible implementation manner, acquiring a video packet of a video stream includes: caching the video message of the video stream in a cache space; the device further comprises: the disorder detection module is used for sequentially detecting the disorder quantity of data packets corresponding to the video frames in the video stream based on the video message; and the capacity adjusting module is used for adjusting the capacity of the cache space according to the loss quantity and/or the out-of-order quantity of the data packets detected in the preset time period to obtain the adjusted cache space, wherein the loss information of the data packets comprises the loss quantity of the data packets.
In one possible implementation, the capacity adjustment module includes: the first adjusting submodule is used for increasing the capacity of the cache space according to a preset amplification factor under the condition that the loss quantity and/or the disorder quantity exceed a preset threshold value, so that the adjusted cache space is obtained; or, the second adjusting submodule is configured to reduce the capacity of the cache space according to a preset reduction multiple under the condition that the loss number and/or the disorder number do not exceed the preset threshold, so as to obtain an adjusted cache space.
In one possible implementation, the apparatus further includes: and the sequencing module is used for sequencing the out-of-order data packets according to the sequence identification of the data packets corresponding to the target video frame under the condition that the out-of-order data packets exist in the data packets corresponding to the target video frame, so as to obtain the processed video stream.
In one possible implementation, the apparatus further includes: the analysis module is used for analyzing the coding rate of the video stream and the coding format of the video stream based on the video message; and the decoding module is used for sending the processed video stream to a video decoder corresponding to the coding format according to the coding code rate to obtain and play the decoded video stream.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to the embodiment of the disclosure, when it is determined that the target video frame meets the video frame removal condition based on the loss information of the data packet of the target video frame, the corresponding video frame in the video stream is removed, so that when the processed video stream is played, the video picture is clear, and the video playing requirement of displaying the clear picture in a scene with a poor network condition is met.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of a video processing method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic structural diagram of a video playing plug-in according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
It should be understood that the terms "first," "second," and "third," etc. in the claims, description, and drawings of the present disclosure are used for distinguishing between different objects and not for describing a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a video processing method according to an embodiment of the present disclosure. As shown in Fig. 1, the video processing method includes:
in step S11, a video packet of a video stream is obtained, where the video packet includes a data packet corresponding to at least one video frame in the video stream;
in step S12, sequentially detecting loss information of packets corresponding to video frames in the video stream based on the video packets;
in step S13, determining whether the target video frame satisfies a video frame removal condition according to the loss information corresponding to the currently detected target video frame;
in step S14, when the target video frame satisfies the video frame removal condition, at least one video frame corresponding to the target video frame is removed from the video stream, resulting in a processed video stream.
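Steps S11 to S14 can be sketched as a minimal processing loop; the data layout, helper names, and loss threshold below are illustrative assumptions, as the patent does not prescribe a concrete implementation:

```python
def detect_loss(packets, expected_count):
    """Step S12: loss information, here simply the number of missing packets."""
    return expected_count - len(packets)

def process_stream(packets_by_frame, expected_counts, threshold=3):
    """Sketch of steps S11-S14. packets_by_frame maps a frame id to the
    list of data packets actually received for that frame; a frame whose
    loss count reaches the (assumed) threshold satisfies the removal
    condition and is dropped from the processed stream."""
    processed = {}
    for frame_id in sorted(packets_by_frame):            # S12: detect loss frame by frame
        packets = packets_by_frame[frame_id]
        lost = detect_loss(packets, expected_counts[frame_id])
        if lost >= threshold:                            # S13: removal condition met?
            continue                                     # S14: remove the frame
        processed[frame_id] = packets
    return processed
```

Here a frame missing three or more packets is dropped, while frames with complete (or nearly complete) packet sets pass through unchanged.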
In one possible implementation, the video processing method may be performed by a terminal device, which may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory.
In one possible implementation, the video stream may be a video stream of a preset geographic area captured by an image capture device (e.g., a camera or an electronic eye), and may be video data encoded using H.264 or H.265 video coding technology. It should be understood that an associated video encoder may be provided in the image capture device to encode the captured original video stream to obtain an encoded video stream. The preset geographic area may be, for example, a residential community, a school, a street, and the like, and the embodiment of the present disclosure is not limited thereto.
The image capturing device and the terminal device may be connected through a network, and the embodiment of the disclosure is not limited thereto.
It should be understood that network transmission of data between devices may be accomplished in the form of messages. In a possible implementation manner, any known message generation manner may be adopted to convert the encoded video stream into a video message, so as to transmit the video stream acquired by the image acquisition device to the terminal device for decoding and playing. It should be understood that a packet corresponding to at least one video frame in a video stream may be included in a video message, and any video frame may correspond to a plurality of packets.
In a possible implementation manner, the video stream may be transmitted using the Real-Time Streaming Protocol (RTSP), that is, the video packets of the video stream may be transmitted to the terminal device over RTSP. Of course, other data transmission protocols may also be used, and the embodiment of the present disclosure is not limited thereto.
In a possible implementation manner, the obtained video packets may be cached in a cache space, so as to perform detection of loss information of data packets corresponding to video frames in a video stream based on a video packet sequence.
A data packet in a video message may carry a sequence identifier, which identifies the video frame to which the data packet corresponds and the order of the data packets, so as to facilitate decoding and playing the video frames in order. In one possible implementation, in step S12, the loss information of the data packets may include the number of lost data packets or the sequence identifiers of the lost data packets. The lost data packets are the data packets missing from those corresponding to a video frame, and the number of lost data packets can be determined from the sequence identifiers of the lost data packets.
For example, assume that any video frame converted to a video message in the image capture device may correspond to a data packet identified in the sequence as 0-150. If the target video frame is detected to correspond to the data packets with the sequence identifications of 0-40 and 50-150, the data packets with the sequence identifications of 41-49 are lost, the number of the lost data packets is 9, and the sequence identifications of the lost data packets are 41-49.
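The example above can be reproduced with a small sketch that derives both pieces of loss information, the lost sequence identifiers and the longest run of consecutive losses, from the identifiers actually received (the data layout is an assumption):

```python
def loss_info(received_ids, first_id, last_id):
    """Return the lost sequence identifiers for one video frame and the
    longest run of consecutively lost data packets."""
    lost = sorted(set(range(first_id, last_id + 1)) - set(received_ids))
    longest = run = 0
    previous = None
    for seq in lost:
        # extend the run if this loss directly follows the previous one
        run = run + 1 if previous == seq - 1 else 1
        longest = max(longest, run)
        previous = seq
    return lost, longest
```

For packets 0-40 and 50-150 out of 0-150, this yields the nine lost identifiers 41-49, all consecutive.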
It should be understood that any known packet loss detection method may be used to detect the loss information of the data packets corresponding to the video frames in the video stream based on the video packet sequence, and the embodiment of the present disclosure is not limited thereto.
In a possible implementation manner, in step S13, the video frame removal condition may be set according to actual requirements. For example, the condition may be that the number of lost data packets is greater than or equal to a preset value (e.g., 3), that is, the target video frame has lost data packets (whether lost consecutively or intermittently) and their number reaches the preset value; alternatively, the condition may be that the number of consecutively lost data packets is greater than or equal to a preset value (e.g., 3). The embodiment of the present disclosure is not limited thereto.
In a possible implementation manner, in step S14, the target video frame satisfies the video frame removal condition, for example, the number of lost packets corresponding to the target video frame is greater than or equal to a preset value, or the number of consecutive lost packets in the packets corresponding to the target video frame is greater than or equal to a preset value, which is not limited by the embodiment of the present disclosure.
In one possible implementation, in step S14, removing at least one video frame corresponding to the target video frame from the video stream may include: removing the target video frame; or, removing the group of pictures where the target video frame is located. The frame group comprises a key frame and at least one non-key frame, and the target video frame can be the key frame in the frame group or the non-key frame.
It should be understood that the video stream is transmitted in the form of video packets, and the video packets may be buffered in the buffer space. Removing at least one video frame corresponding to the target video frame from the video stream may in fact be removing the data packets corresponding to that at least one video frame from the buffer space, for example, the data packets corresponding to the target video frame, or the data packets corresponding to the group of pictures where the target video frame is located; accordingly, the processed video stream may be understood as the video packets remaining in the buffer space after the relevant data packets are removed.
In one possible implementation, the processed video stream may be decoded and played. The processed video stream may be decoded by any known video decoding technique, and the decoded video stream may be played by a known video playing technique, which is not limited in this embodiment of the disclosure.
As described above, the video packets of the video stream may be buffered in the buffer space. It should be understood that the video packets (i.e., the processed video stream) in the buffer space may be uniformly fetched and sent to the relevant video decoder for decoding, so as to obtain the decoded video stream.
It should be understood that, although the video stream from which video frames have been removed may stutter during playing, the data packets corresponding to the remaining video frames are complete; a video frame decoded and displayed based on complete data packets is clear, whereas playing a video frame that is missing data packets may cause visible screen corruption (screen splash).
According to the embodiment of the disclosure, when it is determined that the target video frame meets the video frame removal condition based on the loss information of the data packet of the target video frame, the corresponding video frame in the video stream is removed, so that when the processed video stream is played, the video picture is clear, and the video playing requirement of displaying the clear picture in a scene with a poor network condition is met.
As described above, the determining whether the target video frame satisfies the video frame removal condition in step S13 includes:
and under the condition that the number of lost data packets corresponding to the target video frame is greater than or equal to a first number threshold, determining that the target video frame satisfies the video frame removal condition. By this method, whether the target video frame satisfies the video frame removal condition can be determined conveniently.
The first number threshold may be set according to actual requirements, for example, to 3, 6, and the like, which is not limited in this disclosure.
As described above, the number of lost data packets may be determined according to the sequence identifiers of the data packets in the video message; the lost packets may be continuously lost or intermittently lost, and the embodiment of the present disclosure is not limited thereto.
It should be understood that, in the case that the number of lost data packets corresponding to the target video frame is smaller than the first number threshold, it is determined that the target video frame does not satisfy the video frame removal condition; the target video frame may then be retained, and detection may continue with the next video frame of the video stream.
As described above, the loss information of the data packet may include the sequence identifier of the lost data packet, and in one possible implementation, in step S13, the determining whether the target video frame satisfies the video frame removal condition according to the loss information corresponding to the currently detected target video frame includes:
determining the number of data packets continuously lost by the target video frame according to the sequence identification of the lost data packets corresponding to the target video frame;
and determining that the target video frame meets the video frame removal condition under the condition that the number of data packets continuously lost by the target video frame is greater than or equal to a second number threshold.
The number of continuously lost data packets is the length of a run of lost packets whose sequence identifiers are consecutive; for example, the sequence identifiers 41, 42, 43 are consecutive, while the sequence identifiers 41, 43, 46 are not.
As described above, the data packet corresponding to the target video frame has a sequence identifier, and based on the sequence identifier of the data packet corresponding to the target video frame, the sequence identifier of the lost data packet can be determined; and then, according to the sequence identification of the lost data packets, the number of the continuously lost data packets can be determined. For example, if the sequence identifiers of the packets corresponding to the currently detected target video frame include 0-39,41-50, and 60-100, it can be determined that the sequence identifiers of the missing packets are 40, 51-59, and the number of consecutive missing packets is 9.
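The run-length computation described above can be sketched as follows (an illustrative helper, assuming the lost sequence identifiers are plain integers):

```python
def max_consecutive_lost(lost_seq_ids):
    """Return the length of the longest run of consecutive sequence
    identifiers among the lost data packets."""
    ids = sorted(lost_seq_ids)
    if not ids:
        return 0
    best = run = 1
    for prev, cur in zip(ids, ids[1:]):
        # extend the run if the identifiers are adjacent, else restart
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best

# Example from the text: lost identifiers 40 and 51-59 give a longest run of 9.
```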
The second number threshold may be set according to actual requirements, for example, to 3, 6, and the like, which is not limited in this disclosure. It should be understood that the second number threshold may be the same as or different from the first number threshold.
It should be understood that, in the case that the number of data packets continuously lost by the target video frame is less than the second number threshold, and it is determined that the target video frame does not satisfy the video frame removal condition, the target video frame may be retained, and the detection of the next video frame of the video stream may be continued.
In the embodiment of the present disclosure, the number of consecutive lost data packets can be determined based on the sequence identifier of the lost data packet, and whether the target video frame meets the video frame removal condition can be effectively determined based on the number of consecutive lost data packets.
In one possible implementation manner, in step S14, removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream, including:
under the condition that the target video frame is a key frame, removing a first picture group where the target video frame is located from the video stream to obtain a processed video stream, wherein the first picture group comprises the target video frame and a non-key frame of which the time sequence is behind the target video frame; or, under the condition that the target video frame is a non-key frame, removing the target video frame from the video stream to obtain the processed video stream.
The key frame is an intra-frame predicted frame (I frame); the non-key frames may include forward inter-frame predicted frames (P frames) and bidirectional inter-frame predicted frames (B frames). A Group of Pictures (GOP) may include a key frame and at least one non-key frame, and is composed of a series of I, P, and B frames in a fixed pattern. I-frame coding reduces spatial-domain redundancy, while P frames and B frames reduce temporal-domain redundancy.
It should be understood that in the encoded video stream, the key frame is an independent frame with complete information, and the non-key frame contains the difference information with the previous frame and/or the next frame. Decoding of non-key frames typically relies on information in the key frames. If the key frame has more lost data packets, the picture quality of the key frame is poor, and the picture quality of the decoded non-key frame is also poor on the basis of the key frame with poor picture quality.
Therefore, in the case that the target video frame is a key frame, the first group of pictures where the target video frame is located can be removed from the video stream, that is, the target video frame and the non-key frames between the target video frame and the next key frame are removed. The next key frame is a key frame whose capture timing is subsequent to the target video frame.
Correspondingly, if the key frame in a picture group does not satisfy the video frame removal condition, that is, the key frame does not lose a data packet or does not lose too many data packets, the picture quality of the key frame is better, and on this basis, removing any non-key frame in the picture group has less influence on the picture quality of other non-key frames in the whole picture group. Therefore, when the target video frame is a non-key frame, the target video frame can be removed from the video stream, that is, the target video frame is removed from the group of pictures in which the target video frame is located.
It should be understood that more than one group of pictures can be removed from a video stream, and that more than one non-key frame can be removed from a group of pictures. The number of removed groups of pictures or removed target video frames in the processed video stream may be more than one.
As described above, the video stream is transmitted in the form of video packets, and the video packets may be buffered in the buffer space. Removing the target video frame from the video stream, which may be actually removing the data packet corresponding to the target video frame from the buffer space; removing the first group of pictures where the target video frame is located from the video stream, which may be removing a data packet corresponding to the first group of pictures from the buffer space; accordingly, the processed video stream may be understood as a video packet with the relevant data packet removed from the buffer space.
In the embodiment of the present disclosure, corresponding video frame removal operations can be performed on target video frames of a key frame and a non-key frame, respectively, so that the picture quality of the decoded and played video frame is higher in definition.
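A minimal sketch of this branching removal logic (the frame representation as dicts with a `type` field is hypothetical; a real implementation would operate on buffered data packets rather than frame objects):

```python
def frames_to_remove(frames, target_index):
    """Given a list of frames (dicts with 'type' of 'I', 'P' or 'B'),
    return the indices to remove when the frame at target_index
    satisfies the video frame removal condition."""
    if frames[target_index]["type"] == "I":
        # Key frame: drop the whole GOP, i.e. the key frame and every
        # non-key frame up to (not including) the next key frame.
        end = target_index + 1
        while end < len(frames) and frames[end]["type"] != "I":
            end += 1
        return list(range(target_index, end))
    # Non-key frame: drop only the target frame itself.
    return [target_index]
```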
Considering that, if a plurality of consecutive non-key frames in a group of pictures are removed from the video stream, the picture quality obtained by decoding the non-key frames that follow the removed frames is usually not high. In order to meet the video playing requirement of displaying a clear picture, in a possible implementation manner, in step S14, removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream further comprises:
under the condition that the target video frame is a non-key frame, judging whether N non-key frames with time sequences in front of the target video frame are removed from the video stream or not, wherein N is a positive integer;
and in the case that the N previous continuous non-key frames are removed from the video stream, removing the target video frame from the video stream, and removing the non-key frame which is positioned behind the target video frame in the second picture group in which the target video frame is positioned.
In a possible implementation manner, the value of N may be set according to actual requirements, for example, to 2 or 3, and the embodiment of the present disclosure is not limited thereto. The N previous consecutive non-key frames may be understood as N non-key frames that are consecutive in capture timing and whose capture timing precedes the target video frame.
It should be understood that N consecutive non-key frames whose timing is before the target video frame are removed from the video stream, meaning that the N consecutive non-key frames whose timing is before the target video frame all satisfy the video frame removal condition and the data packets corresponding to the N consecutive non-key frames whose timing is before the target video frame have been removed from the buffer space.
In fact, removing these frames means removing from the buffer space the data packets corresponding to the target video frame, together with the data packets corresponding to the non-key frames in the second group of pictures whose timing is after the target video frame.
For example, assuming that N is 2, the second group of pictures in which the target video frame is located includes 1 key frame and 24 non-key frames, the currently detected target video frame is the 12 th non-key frame, the target video frame has satisfied the video frame removal condition, and the 10 th and 11 th non-key frames have been removed from the video stream, then the 12 th to 24 th non-key frames are removed from the second group of pictures, that is, the target video frame (the 12 th non-key frame) is removed from the video stream, and the non-key frames (the 13 th to 24 th non-key frames) whose timing sequence is after the target video frame in the second group of pictures in which the target video frame is located are removed.
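The example above can be sketched as follows (illustrative; GOP positions are 0-based with position 0 being the key frame, and the set of already-removed positions is assumed to be tracked elsewhere):

```python
def frames_to_remove_with_history(gop_size, target_pos, removed_positions, n=2):
    """Within one GOP of gop_size frames (position 0 is the key frame),
    decide which positions to remove when the non-key frame at
    target_pos satisfies the removal condition. If the N non-key
    frames immediately before it were already removed, also drop
    every frame after it in the GOP."""
    prior = [target_pos - i for i in range(1, n + 1)]
    if all(p in removed_positions for p in prior):
        # N consecutive predecessors already removed: remove the target
        # and everything after it in this GOP.
        return list(range(target_pos, gop_size))
    return [target_pos]
```

With the text's example (N=2, a GOP of 1 key frame plus 24 non-key frames, target at position 12, positions 10 and 11 already removed), the function returns positions 12 through 24.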
In the embodiment of the present disclosure, the corresponding video frame removal operation can be performed on the video frames that continuously satisfy the video frame removal condition, so that the picture quality of the decoded and played video frames is higher in definition.
In one possible implementation manner, in step S13, the determining whether the target video frame meets the video frame removal condition according to the loss information corresponding to the currently detected target video frame includes:
determining that the target video frame satisfies a video frame removal condition under the condition that the loss information indicates that the target video frame is lost;
in step S14, removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream, including:
and removing a third picture group where the target video frame is located from the video stream to obtain the processed video stream, wherein the third picture group comprises a key frame corresponding to the target video frame and a non-key frame of which the time sequence is behind the key frame corresponding to the target video frame.
In some cases, an entire video frame may be lost, that is, all data packets corresponding to that video frame are missing from the video packet. In this case, none of the data packets of the lost video frame is detected, and the loss information may indicate that all data packets of the target video frame are lost, i.e., that the target video frame is lost. It should be understood that loss information containing the number of lost packets or the sequence identifiers of lost packets means that some data packets of the target video frame were detected and others were lost; loss information indicating that the target video frame is lost means that no data packet of the target video frame was detected at all.
Considering that the loss of all data packets of an entire video frame means the network condition may be very poor, regardless of whether the target video frame is a key frame or a non-key frame, the picture quality of the group of pictures where it is located is usually not high. Removing the third group of pictures where the target video frame is located from the video stream may cause the playing to stutter, but the played video picture can meet the definition requirement.
Wherein, the key frame corresponding to the target video frame can be understood as the key frame in the picture group where the target video frame is located; the non-key frames whose time sequence is after the key frame corresponding to the target video frame can be understood as all non-key frames in the picture group where the target video frame is located.
As described above, the video stream is transmitted in the form of video packets, and the video packets may be buffered in the buffer space. Removing the third group of pictures where the target video frame is located from the video stream, which may be removing the data packet corresponding to the third group of pictures from the buffer space; accordingly, the processed video stream may be understood as a video packet with the relevant data packet removed from the buffer space.
In the embodiment of the present disclosure, when the entire video frame is lost, the third group of pictures where the target video frame with all lost data packets is located is removed, so that the played video picture can be a picture meeting the definition requirement.
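A sketch of removing the enclosing group of pictures when an entire frame is lost (frames are again hypothetical dicts with a `type` field; the function simply walks back to the nearest key frame and forward to the next one):

```python
def gop_to_remove(frames, target_index):
    """Return the indices of the whole GOP containing target_index:
    from the key frame at or before it up to (not including) the
    next key frame."""
    start = target_index
    while start > 0 and frames[start]["type"] != "I":
        start -= 1
    end = target_index + 1
    while end < len(frames) and frames[end]["type"] != "I":
        end += 1
    return list(range(start, end))
```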
As described above, the video packets of the video frames may be buffered in the buffer space. In a possible implementation manner, acquiring a video packet of a video stream includes: and caching the video message of the video stream in the cache space.
In one possible implementation, the method may further include:
sequentially detecting the out-of-order quantity of data packets corresponding to video frames in a video stream based on the video message;
and adjusting the capacity of the cache space according to the loss quantity and/or the out-of-order quantity of the data packets detected in the preset time period to obtain the adjusted cache space, wherein the loss information of the data packets comprises the loss quantity of the data packets.
As described above, the loss information of the data packet may include the number of losses of the data packet, and may also include the sequence identification of the lost data packet. According to the embodiment of the disclosure, the capacity of the buffer space can be adjusted according to the loss quantity of the data packets. It should be understood that the number of lost packets may be consecutive or may be intermittent.
In one possible implementation, the buffer space may buffer the video packets in a queue-based manner, and the buffer space may be referred to as a buffer queue. The video message is cached in the cache space, and any known caching technology can be used for implementing the caching, which does not limit the embodiment of the present disclosure.
As described above, a packet in a video packet has a sequence identifier, and the sequence identifier may indicate a video frame corresponding to the packet and an order of the video frames corresponding to the packet. Based on the sequence identification of the data packets, whether the data packets corresponding to the video frames are out of order or not and the out-of-order quantity of the data packets can be detected; or detecting whether the video frames corresponding to the data packets are out of order or not and the out-of-order number of the video frames, wherein it should be understood that the out-of-order number of the video frames is actually the out-of-order number of all the data packets corresponding to the video frames.
For example, the data packets corresponding to the video frame a are the sequence identifiers a0-a100, the data packets corresponding to the video frame B are the sequence identifiers B0-B100, the data packets corresponding to the video frame C are the sequence identifiers C0-C100, the capture timing of the video frame a is before the video frame B, and the capture timing of the video frame C is after the video frame B.
Normally, the data packets in the video message can be arranged as A0-A100, B0-B100, and C0-C100; if the data packets in the obtained video message are arranged as B0-B100, A0, A3, A1, A2-A100, and C0-C100, it can be determined that the video frame B is out of order (i.e., all the data packets of video frame B are out of order), and the data packet A3 of video frame A is out of order.
To facilitate determining the out-of-order number, it may be determined based on the number of out-of-order video frames, and/or based on the number of out-of-order data packets within video frames, which is not limited by the disclosed embodiments. For example, in the above example, video frame B is out of order and data packet A3 in video frame A is out of order; the out-of-order number may be determined as 2 based on both, or as 1 based on the out-of-order video frame B alone.
It should be understood that the above out-of-order counting method is one implementation manner provided by the embodiment of the present disclosure; in fact, a person skilled in the art may determine the counting method according to actual requirements. For example, the out-of-order number may also be counted as 101 for the out-of-order video frame B, i.e., one for each of the data packets B0-B100, which is not limited by the embodiment of the present disclosure.
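One illustrative counting convention (an assumption, since the disclosure leaves the method open) flags every data packet that arrives with a smaller sequence identifier than the highest identifier seen so far:

```python
def count_out_of_order(seq_ids):
    """Count packets that arrive after a packet with a larger sequence
    identifier, i.e. packets that are out of order relative to the
    highest identifier seen so far."""
    count = 0
    highest = None
    for sid in seq_ids:
        if highest is not None and sid < highest:
            count += 1          # arrived late: out of order
        else:
            highest = sid       # in order: advance the high-water mark
    return count
```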
In a possible implementation manner, the preset time period may be set according to an actual requirement, for example, may be set to 5 seconds, and the embodiment of the present disclosure is not limited thereto. The number of lost packets may be determined by referring to the manner disclosed in the embodiments of the present disclosure, and is not described herein again.
It should be understood that in some scenarios with poor network conditions, in order to meet the video playing requirement of displaying a clear picture, processing such as removing the data packets corresponding to video frames and sorting out-of-order data packets is generally performed in the buffer space. In these cases, it is generally desirable that the buffer space can buffer more video messages in order to facilitate the processing.
Accordingly, in some scenarios with better network conditions, such processing may not be needed; in these circumstances a low video playing delay is generally expected, or there is a low-delay video playing requirement, and it may then be desirable for the buffer space to buffer fewer video messages.
The capacity of the buffer space is understood to be the amount of data that can be buffered in the buffer space. It should be understood that the larger the capacity, the larger the amount of cacheable data, and the longer the playback delay of the video; conversely, the smaller the capacity, the smaller the amount of cacheable, and the smaller the playback delay of the video.
It should be understood that, among the data packets detected within the preset time period, there may be a lost data packet or an out-of-order data packet; there may also be lost packets and out of order packets. And according to the number of lost data packets and/or the number of disorder data packets detected in the preset time period, the current network condition can be known.
The loss number and/or the disorder number are/is large, which means that the network condition is poor, and at the moment, the capacity of a cache space can be increased so as to remove data packets corresponding to video frames, sort out the disorder data packets and the like; the loss quantity and/or the disorder quantity are less, which means that the network condition is better, and at the moment, the capacity of the cache space can be reduced, so that the video playing effect with low time delay is achieved.
It should be understood that the adjusted buffer space is used for buffering video messages of the video stream, and may buffer a currently acquired video message, or buffer a video message to be acquired.
In the related art, the capacity of the buffer space is usually fixed, so that the video playing requirement under different network conditions cannot be met. In the embodiment of the disclosure, the capacity of the buffer queue can be adjusted according to different network conditions, and the buffer queue with variable capacity is realized, so that not only is the video playing quality convenient to optimize in time, but also the low-delay playing effect can be realized.
In a possible implementation manner, adjusting the capacity of the buffer space according to the number of lost packets and/or the number of out-of-order packets detected within a preset time period to obtain an adjusted buffer space includes:
under the condition that the loss number and/or the disorder number exceed a preset threshold, increasing the capacity of the cache space according to a preset amplification factor to obtain an adjusted cache space; or, under the condition that the loss number and/or the disorder number do not exceed the preset threshold, reducing the capacity of the cache space according to the preset reduction multiple to obtain the adjusted cache space.
In a possible implementation manner, the preset threshold may be determined according to historical experience, the calculation manner of the loss number, the calculation manner of the out-of-order number, and the like, and the embodiment of the present disclosure is not limited thereto. The preset amplification factor and the preset reduction factor may be set according to actual requirements, for example, doubling the capacity of the current buffer space, or reducing it to one half, and the embodiment of the present disclosure is not limited thereto.
If the loss number and/or the out-of-order number exceed the preset threshold, the network condition may be considered poor, and the capacity of the buffer space may be increased according to the preset amplification factor. It should be understood that the buffer space may not be increased indefinitely; if the network condition remains poor for a long time, the capacity is no longer increased once it reaches a specified upper limit. The capacity of the buffer space may be increased once or multiple times, which may be set according to actual requirements, and the embodiment of the present disclosure is not limited thereto.
Accordingly, if the loss number and/or the out-of-order number do not exceed the preset threshold, the network condition may be considered good, and the capacity of the buffer space may be reduced according to the preset reduction factor. It should be understood that the buffer space may not be reduced indefinitely; if the network condition remains good for a long time, the capacity is no longer reduced once it reaches a specified lower limit. The capacity of the buffer space may be reduced once or multiple times, which may be set according to actual requirements, and the embodiment of the present disclosure is not limited thereto.
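The adjustment policy can be sketched as follows (all thresholds, factors, and capacity bounds are illustrative defaults, since the disclosure leaves their concrete values to actual requirements; combining the loss and out-of-order counts into a single sum is likewise an assumption):

```python
def adjust_capacity(capacity, lost, out_of_order,
                    threshold=10, grow=2.0, shrink=0.5,
                    min_cap=64 * 1024, max_cap=16 * 1024 * 1024):
    """Adjust buffer capacity (bytes) once per observation window based
    on the lost / out-of-order packet counts in that window."""
    if lost + out_of_order > threshold:
        # Poor network: grow the buffer, clamped to an upper limit.
        return min(int(capacity * grow), max_cap)
    # Good network: shrink the buffer for low playing delay,
    # clamped to a lower limit.
    return max(int(capacity * shrink), min_cap)
```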
In the embodiment of the present disclosure, the capacity of the buffer space can be correspondingly adjusted based on the preset threshold, the preset magnification factor, and the preset reduction factor, so that the video playing requirements under different network conditions can be effectively met.
As described above, based on the video packets, the out-of-order number of the packets corresponding to the video frames in the video stream can be sequentially detected, that is, there may be some out-of-order packets in the packets corresponding to the currently detected target video frame, or the packets corresponding to the target video frame are all out-of-order packets. In one possible implementation, the method further includes:
and under the condition that out-of-order data packets exist among the data packets corresponding to the target video frame, sorting the out-of-order data packets according to the sequence identifiers of the data packets corresponding to the target video frame to obtain the processed video stream. By this method, the order of the video frames in the processed video stream is correct, or the order of the data packets in the video message is correct, so that the played picture content is correct and a better playing effect is obtained.
The out-of-order data packets are present, which can be understood as that partial out-of-order data packets are present in the data packets corresponding to the target video frame, or the data packets corresponding to the target video frame are all out-of-order data packets.
It should be understood that the sequence identifier of a data packet may indicate the order of the video frame corresponding to the data packet as well as the order of the data packet itself; according to the sequence identifiers of the data packets corresponding to the target video frame, the out-of-order data packets can be sorted to obtain a video stream with the correct arrangement order. For example, following the above example, if the data packets in the video message are arranged as B0-B100, A0, A3, A1, A2-A100, and C0-C100, then after the out-of-order data packets are sorted, a video message with the data packets arranged as A0-A100, B0-B100, and C0-C100 can be obtained.
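A minimal reordering sketch (data packets are represented here as hypothetical (frame, sequence) tuples; a real implementation would reorder RTP-style packets in the buffer queue):

```python
def reorder_packets(packets):
    """Sort buffered packets back into capture order using their
    (frame identifier, sequence identifier) pair as the sort key."""
    return sorted(packets, key=lambda p: (p[0], p[1]))

# Out-of-order arrival as in the text's example (B before A, A3 early):
arrived = [("B", 0), ("A", 0), ("A", 3), ("A", 1), ("A", 2), ("C", 0)]
```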
As described above, the video stream is transmitted in the form of video packets, and the video packets may be buffered in the buffer space. The out-of-order data packets can be sequenced in the buffer space, and the processed video stream is also the sequenced video message in the buffer space.
It should be understood that, the above-mentioned removing of at least one video frame corresponding to the target video frame and the sorting of the out-of-order data packets may be performed simultaneously, or may be performed according to a set sequence, and the embodiment of the present disclosure is not limited thereto. The processed video stream may include a video stream obtained by removing at least one video frame corresponding to the target video frame and/or sorting out-of-order data packets.
In one possible implementation, the method further includes:
analyzing the coding rate of the video stream and the coding format of the video stream based on the video message; and sending the processed video stream to a video decoder corresponding to the coding format according to the coding code rate to obtain and play the decoded video stream.
It should be understood that the video message may include related information indicating an encoding rate, a resolution, a frame rate, an encoding format, and the like. In a possible implementation manner, any known video parsing manner may be adopted to parse the video packet in the buffer space to obtain the coding rate and the coding format of the video stream, which is not limited in this embodiment of the disclosure.
In a possible implementation manner, the resolution of the video stream may also be analyzed first, and then the coding rate of the video stream is obtained according to the correspondence between the resolution and the coding rate. The embodiment of the present disclosure does not limit the determination method of the coding rate.
It should be understood that the encoding format of the video stream is related to the video encoding technology, for example, the encoding formats corresponding to the encoded video streams may be different by using the H264 and H265 video encoding technologies. Based on the different encoding formats, corresponding video decoders may be determined, thereby enabling video decoding for video streams of different encoding formats.
It is considered that in some scenarios where network fluctuation is large, network congestion may occur. The network congestion phenomenon may be understood as that the data volume of a data packet transmitted at a certain time sharply increases, and the normal data transmission volume is restored after a period of time. In order to alleviate the network congestion phenomenon, in the embodiment of the present disclosure, the obtained video packet may be cached in a cache space, and the data packet in the cache space is uniformly sent to a video decoder for decoding according to the coding rate.
It should be understood that the coding rate may be understood as the number of bits transmitted per second. Sending the processed video stream to the video decoder corresponding to the encoding format at the coding rate may be understood as taking a certain number of bits' worth of data packets out of the buffer space every second and sending them to the video decoder corresponding to the encoding format, so as to obtain the decoded video stream.
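The paced draining of the buffer described above may be sketched as follows. The per-second budget loop, the packet representation as byte strings, and the function name are illustrative assumptions, not the disclosed implementation:

```python
from collections import deque

def paced_drain(buffer: deque, bits_per_second: int, seconds: int) -> list:
    """Each 'second', take packets from the buffer until the coding-rate
    budget is spent, smoothing out bursts caused by network congestion."""
    sent = []
    for _ in range(seconds):
        budget = bits_per_second
        # Dequeue whole packets while the budget covers the next packet.
        while buffer and budget >= len(buffer[0]) * 8:
            pkt = buffer.popleft()
            budget -= len(pkt) * 8
            sent.append(pkt)  # in the plug-in, this would go to the decoder
    return sent
```

With five 100-byte packets buffered and a rate of 1600 bits per second, two packets are drained each second, so one packet remains after two seconds.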
In one possible implementation, playing the decoded video stream may include: and rendering pictures based on the decoded video stream, and sending the rendered video pictures to a display card of the terminal equipment for playing. It should be understood that any known video playing technology may be used to implement playing of the decoded video stream, and the embodiments of the present disclosure are not limited thereto.
In the embodiment of the disclosure, the processed video stream can be effectively played based on the coding rate and the coding format.
Fig. 2 shows a schematic structural diagram of a video playing plug-in according to an embodiment of the present disclosure. As shown in fig. 2, the video playing plug-in may include:
the stream fetching component 101 is configured to obtain video packets, cache the obtained video packets in a cache space, remove video frames and sort out-of-order data packets based on the video packets, and send the data packets corresponding to the processed video stream in the cache space to the decoding component 102;
the decoding component 102 is configured to decode a data packet corresponding to the processed video stream according to the encoding format to obtain a decoded video stream, and send the decoded video stream to the rendering component 103;
and the rendering component 103 is configured to perform picture rendering according to the decoded video stream, and send a rendered video picture to a display card for playing.
In a possible implementation manner, the workflow of the video playing plug-in may include: acquiring video packets of a video stream from image acquisition equipment supporting RTSP through the stream fetching component 101, performing corresponding processing according to the video playing method described above, and sending the processed video stream to the decoding component 102; selecting different video decoders through the decoding component 102 according to the encoding format, and sending the decoded video stream to the rendering component 103; and performing picture rendering based on the decoded video stream through the rendering component 103 to obtain a rendered video picture, and sending the rendered video picture to a display card for playing.
In a possible implementation, the video stream to be played may be transmitted using the real-time streaming protocol RTSP and encoded using H264 or H265 video coding techniques.
In one possible implementation, the decoding component 102 may invoke a hardware decoder of the terminal device for decoding, which is not limited in the embodiment of the present disclosure.
In one possible implementation, the decoding component 102 may also be packaged with various software video decoders for decoding video streams in different encoding formats.
In a possible implementation manner, the stream fetching component 101 may be packaged into a source filter (Source Filter) under the DirectShow framework, based on DirectShow (a development kit for streaming media processing) technology.
In a possible implementation manner, the source filter may further include live555 (an open-source streaming media protocol processing auxiliary module) for acquiring video packets transmitted based on RTSP.
In one possible implementation, the software development kit (SDK) provided by DirectShow may also be used to simplify the development of the above source filter, as well as the development of the video playing plug-in.
In one possible implementation, the rendering component 103 may employ a rendering adapter coded based on GDI (Graphics Device Interface), configured to send the decoded video stream to an interface rendering thread for rendering.
In a possible implementation manner, a video player packaging library developed based on DirectShow may be adopted to package the three components to obtain a video player.
In one possible implementation, the video playing plug-in used in the IE browser may be packaged by using a libPlayer-based library (libPlayer being a multimedia abstraction-layer interface) together with OCX (Object Linking and Embedding (OLE) Control Extension) technology, or with D3D (Direct3D, a display program interface developed by Microsoft to improve the display performance of three-dimensional games on the Windows operating system).
According to the video playing plug-in the embodiment of the present disclosure, the video playing plug-in can be installed in an IE browser, and play the video stream transmitted by RTSP in real time through a display interface of the IE browser. The method can be applied to security monitoring scenes and intelligent edge series products.
According to the embodiment of the disclosure, a flexibly installable, low-delay, jitter-adaptive video playing plug-in for IE that supports RTSP, H264 and H265 can be realized. After the video playing plug-in is integrated in the IE browser, video streams transmitted over eight video transmission channels can be previewed simultaneously.
According to the embodiment of the disclosure, the video playing effect is better and low delay is realized; under poor network conditions, network congestion can be adaptively handled; H265 code stream playing can be supported; and GDI and/or D3D rendering can be supported.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; for brevity, details are not described again in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a video processing apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any video processing method provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the methods section are not repeated.
Fig. 3 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure, which, as shown in fig. 3, includes:
an obtaining module 101, configured to obtain a video packet of a video stream, where the video packet includes a data packet corresponding to at least one video frame in the video stream;
a detection module 102, configured to sequentially detect, based on the video packet, loss information of data packets corresponding to video frames in the video stream;
the determining module 103 is configured to determine whether a target video frame meets a video frame removal condition according to loss information corresponding to the currently detected target video frame;
a processing module 104, configured to remove at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream when the target video frame meets a video frame removal condition.
In a possible implementation manner, the loss information of the data packet includes a number of lost data packets, where the determining module 103 includes: the first determining submodule is used for determining that the target video frame meets a video frame removing condition under the condition that the loss number of the data packets corresponding to the target video frame is greater than or equal to a first number threshold value.
In a possible implementation manner, the loss information of the data packet includes a sequence identifier of the lost data packet, where the determining module 103 includes: the data determining submodule is used for determining the number of data packets continuously lost by the target video frame according to the sequence identification of the lost data packets corresponding to the target video frame; and the second determining submodule is used for determining that the target video frame meets a video frame removal condition under the condition that the number of the data packets continuously lost by the target video frame is greater than or equal to a second number threshold.
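The two removal conditions above — a total-loss threshold applied to the number of lost data packets, and a consecutive-loss threshold derived from sequence identifiers — may be sketched as follows. The threshold values, function names, and calling convention are hypothetical, chosen only to illustrate the checks:

```python
def max_consecutive_losses(received_seq, first_seq, last_seq):
    """Longest run of consecutive missing sequence identifiers
    among the data packets expected for one video frame."""
    received = set(received_seq)
    longest = run = 0
    for seq in range(first_seq, last_seq + 1):
        if seq in received:
            run = 0
        else:
            run += 1
            longest = max(longest, run)
    return longest

def satisfies_removal(received_seq, first_seq, last_seq,
                      first_threshold=3, second_threshold=2):
    """True if the target frame meets either video frame removal
    condition (hypothetical thresholds: 3 total / 2 consecutive)."""
    expected = last_seq - first_seq + 1
    lost = expected - len(set(received_seq))
    if lost >= first_threshold:  # first condition: total losses
        return True
    # second condition: consecutive losses
    return max_consecutive_losses(received_seq, first_seq, last_seq) >= second_threshold
```

For a frame spanning sequence identifiers 10-14 of which only 10 and 13 arrived, three packets are lost (11, 12, 14), so the frame satisfies the removal condition; a frame missing only one isolated packet does not.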
In one possible implementation manner, the processing module 104 includes: a first removing submodule, configured to remove, from the video stream, a first picture group in which the target video frame is located when the target video frame is a key frame, to obtain a processed video stream, where the first picture group includes the target video frame and a non-key frame whose timing sequence is after the target video frame; or, the second removing submodule is configured to remove the target video frame from the video stream to obtain a processed video stream when the target video frame is a non-key frame.
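The basic removal rules of the first and second removing submodules may be sketched as follows (the refinement involving previously removed consecutive non-key frames is omitted here). The representation of frames as (identifier, is-key-frame) pairs is an illustrative assumption:

```python
def remove_frames(frames, target_index):
    """frames: list of (frame_id, is_key) pairs in timing order.
    Returns the surviving frames after applying the removal rules
    to the frame at target_index."""
    _, is_key = frames[target_index]
    if is_key:
        # Key frame: drop the whole group of pictures, i.e. the target
        # frame and every non-key frame after it, up to (not including)
        # the next key frame.
        end = target_index + 1
        while end < len(frames) and not frames[end][1]:
            end += 1
        return frames[:target_index] + frames[end:]
    # Non-key frame: drop only the target frame itself.
    return frames[:target_index] + frames[target_index + 1:]
```

Removing a key frame thus discards its entire group of pictures, since the non-key frames after it cannot be decoded without it, while removing a non-key frame leaves the rest of the group intact.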
In a possible implementation manner, the processing module 104 further includes: the judgment sub-module is used for judging whether N non-key frames with time sequences being continuous before the target video frame are removed from the video stream or not under the condition that the target video frame is a non-key frame, wherein N is a positive integer; a third removing sub-module, configured to remove the target video frame from the video stream and remove a non-key frame that is chronologically subsequent to the target video frame in a second group of pictures in which the target video frame is located, if the previous consecutive N non-key frames have been removed from the video stream.
In a possible implementation manner, the determining module 103 includes: a third determining sub-module, configured to determine that the target video frame meets a video frame removal condition if the loss information indicates that the target video frame is lost; wherein the processing module 104 includes: and the fourth removing submodule is used for removing a third picture group where the target video frame is located from the video stream to obtain a processed video stream, wherein the third picture group comprises a key frame corresponding to the target video frame and a non-key frame behind the key frame corresponding to the target video frame in time sequence.
In a possible implementation manner, acquiring a video packet of a video stream includes: caching the video message of the video stream in a cache space; the device further comprises: the disorder detection module is used for sequentially detecting the disorder quantity of data packets corresponding to the video frames in the video stream based on the video message; and the capacity adjusting module is used for adjusting the capacity of the cache space according to the loss quantity and/or the out-of-order quantity of the data packets detected in the preset time period to obtain the adjusted cache space, wherein the loss information of the data packets comprises the loss quantity of the data packets.
In one possible implementation, the capacity adjustment module includes: the first adjusting submodule is used for increasing the capacity of the cache space according to a preset amplification factor under the condition that the loss quantity and/or the disorder quantity exceed a preset threshold value, so that the adjusted cache space is obtained; or, the second adjusting submodule is configured to reduce the capacity of the cache space according to a preset reduction multiple under the condition that the loss number and/or the disorder number do not exceed the preset threshold, so as to obtain an adjusted cache space.
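The capacity adjustment performed by the first and second adjusting submodules may be sketched as follows. The preset threshold, the amplification and reduction multiples, and the clamping bounds are all hypothetical values chosen for illustration:

```python
def adjust_capacity(capacity, lost, out_of_order, threshold=10,
                    grow=2.0, shrink=0.5, floor=64, ceiling=4096):
    """Increase the cache capacity when the loss and/or out-of-order
    counts observed in the preset time period exceed the threshold;
    otherwise reduce it. The result is clamped to [floor, ceiling]."""
    if lost > threshold or out_of_order > threshold:
        capacity = int(capacity * grow)    # first adjusting submodule
    else:
        capacity = int(capacity * shrink)  # second adjusting submodule
    return max(floor, min(ceiling, capacity))
```

Growing the cache under heavy loss or reordering gives late packets more time to arrive, while shrinking it in good conditions keeps playback delay low.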
In one possible implementation, the apparatus further includes: and the sequencing module is used for sequencing the out-of-order data packets according to the sequence identification of the data packets corresponding to the target video frame under the condition that the out-of-order data packets exist in the data packets corresponding to the target video frame, so as to obtain the processed video stream.
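The reordering performed by the sequencing module may be sketched as a sort on the packets' sequence identifiers; the representation of packets as (sequence identifier, payload) pairs is an illustrative assumption:

```python
def reorder_packets(packets):
    """packets: list of (sequence_id, payload) pairs, possibly out of
    order. Returns the payloads sorted by sequence identifier."""
    return [payload for _, payload in sorted(packets, key=lambda p: p[0])]
```

For example, packets received in the order 2, 1, 3 are restored to the order 1, 2, 3 before being handed to the decoder.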
In one possible implementation, the apparatus further includes: the analysis module is used for analyzing the coding rate of the video stream and the coding format of the video stream based on the video message; and the decoding module is used for sending the processed video stream to a video decoder corresponding to the coding format according to the coding code rate to obtain and play the decoded video stream.
According to the embodiment of the disclosure, when it is determined that the target video frame meets the video frame removal condition based on the loss information of the data packet of the target video frame, the corresponding video frame in the video stream is removed, so that when the processed video stream is played, the video picture is clear, and the video playing requirement of displaying the clear picture in a scene with a poor network condition is met.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code, which when run on a device, a processor in the device executes instructions for implementing the video processing method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the video processing method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or other such terminal.
Referring to fig. 4, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components (such as the display and keypad of the electronic device 800); the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A video processing method, comprising:
acquiring a video message of a video stream, wherein the video message comprises a data packet corresponding to at least one video frame in the video stream;
based on the video message, sequentially detecting loss information of data packets corresponding to video frames in the video stream;
judging whether the target video frame meets a video frame removal condition or not according to loss information corresponding to the currently detected target video frame;
and under the condition that the target video frame meets a video frame removal condition, removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream.
2. The method of claim 1, wherein the loss information of the data packets comprises a number of losses of the data packets,
the determining, according to loss information corresponding to a currently detected target video frame, whether the target video frame meets a video frame removal condition includes:
and under the condition that the loss number of the data packets corresponding to the target video frame is greater than or equal to a first number threshold value, determining that the target video frame meets a video frame removal condition.
3. The method according to claim 1 or 2, wherein the loss information of the data packets comprises a sequence identification of lost data packets,
the determining, according to loss information corresponding to a currently detected target video frame, whether the target video frame meets a video frame removal condition includes:
determining the number of data packets which are continuously lost by the target video frame according to the sequence identification of the lost data packets corresponding to the target video frame;
determining that the target video frame satisfies a video frame removal condition if the number of data packets continuously lost by the target video frame is greater than or equal to a second number threshold.
4. The method according to any one of claims 1-3, wherein the removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream comprises:
when the target video frame is a key frame, removing a first group of pictures in which the target video frame is located from the video stream to obtain the processed video stream, wherein the first group of pictures comprises the target video frame and the non-key frames that temporally follow the target video frame; or
when the target video frame is a non-key frame, removing the target video frame from the video stream to obtain the processed video stream.
5. The method according to any one of claims 1-4, wherein the removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream further comprises:
when the target video frame is a non-key frame, determining whether the N temporally consecutive non-key frames immediately preceding the target video frame have been removed from the video stream, wherein N is a positive integer; and
when the preceding N consecutive non-key frames have been removed from the video stream, removing the target video frame from the video stream, and removing the non-key frames that temporally follow the target video frame in the second group of pictures in which the target video frame is located.
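Claims 4 and 5 describe a removal policy keyed to frame type. One way to sketch it, assuming each group of pictures (GOP) is an ordered list starting with its key frame, with `n_threshold` as an illustrative stand-in for N:

```python
def frames_to_remove(gop, target_idx, is_key_frame, prior_removed=0, n_threshold=3):
    """Select which frames to drop from the GOP containing the damaged frame.

    - Key frame damaged: every later frame in the GOP depends on it, so
      drop the target and everything after it (claim 4).
    - Non-key frame damaged and the N frames right before it were already
      dropped: the remainder of the GOP is unusable too (claim 5).
    - Otherwise, drop only the damaged non-key frame (claim 4).
    """
    if is_key_frame or prior_removed >= n_threshold:
        return gop[target_idx:]
    return [gop[target_idx]]
```

So a damaged key frame takes its whole GOP with it, while an isolated damaged P-frame costs only itself; this bounds the visual artifacts fed to the decoder without re-requesting data.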
6. The method according to claim 1, wherein the determining, according to the loss information corresponding to the currently detected target video frame, whether the target video frame satisfies a video frame removal condition comprises:
determining that the target video frame satisfies the video frame removal condition when the loss information indicates that the target video frame is entirely lost;
and wherein the removing at least one video frame corresponding to the target video frame from the video stream to obtain a processed video stream comprises:
removing a third group of pictures in which the target video frame is located from the video stream to obtain the processed video stream, wherein the third group of pictures comprises the key frame corresponding to the target video frame and the non-key frames that temporally follow that key frame.
7. The method of claim 1, wherein the acquiring a video message of a video stream comprises: caching the video message of the video stream in a cache space; and
the method further comprises:
sequentially detecting, based on the video message, the number of out-of-order data packets corresponding to the video frames in the video stream; and
adjusting the capacity of the cache space according to the number of lost data packets and/or the number of out-of-order data packets detected within a preset time period, to obtain an adjusted cache space, wherein the loss information of the data packets comprises the number of lost data packets.
8. The method according to claim 7, wherein the adjusting the capacity of the cache space according to the number of lost data packets and/or the number of out-of-order data packets detected within a preset time period, to obtain an adjusted cache space, comprises:
increasing the capacity of the cache space by a preset amplification factor when the number of lost packets and/or the number of out-of-order packets exceeds a preset threshold, to obtain the adjusted cache space; or
decreasing the capacity of the cache space by a preset reduction factor when the number of lost packets and/or the number of out-of-order packets does not exceed the preset threshold, to obtain the adjusted cache space.
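Claim 8's adaptive cache sizing can be sketched as follows (the growth and shrink factors and the capacity floor are illustrative choices, not values from the claim):

```python
def adjust_cache_capacity(capacity, lost, out_of_order, threshold,
                          grow=2, shrink=2, floor=1):
    """Grow the cache when loss or reordering in the last window exceeds
    the threshold; otherwise shrink it, but never below `floor`."""
    if lost > threshold or out_of_order > threshold:
        return capacity * grow
    return max(floor, capacity // shrink)
```

A larger cache absorbs more jitter at the cost of latency, so shrinking it again when the network recovers keeps end-to-end delay low.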
9. The method according to any one of claims 1-8, further comprising:
when out-of-order data packets exist among the data packets corresponding to the target video frame, sorting the out-of-order data packets according to the sequence identifiers of the data packets corresponding to the target video frame, to obtain the processed video stream.
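The reordering of claim 9 is a sort on the packets' sequence identifiers. If the identifier is a 16-bit sequence number (as in RTP), the sort key should be taken modulo 2^16 relative to the frame's first packet so the order survives wraparound past 65535 (an assumption about the transport, not stated in the claim):

```python
def reorder_packets(packets, base_seq):
    """Sort a frame's packets back into transmission order by their
    16-bit sequence number, handling wraparound past 65535."""
    return sorted(packets, key=lambda p: (p["seq"] - base_seq) & 0xFFFF)
```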
10. The method according to any one of claims 1-9, further comprising:
parsing, based on the video message, the coding rate of the video stream and the coding format of the video stream; and
sending the processed video stream, at the coding rate, to a video decoder corresponding to the coding format, to obtain and play the decoded video stream.
11. A video processing apparatus, comprising:
an acquisition module configured to acquire a video message of a video stream, wherein the video message comprises a data packet corresponding to at least one video frame in the video stream;
a packet loss detection module configured to sequentially detect, based on the video message, loss information of the data packets corresponding to the video frames in the video stream;
a judgment module configured to determine, according to the loss information corresponding to a currently detected target video frame, whether the target video frame satisfies a video frame removal condition; and
a processing module configured to remove, when the target video frame satisfies the video frame removal condition, at least one video frame corresponding to the target video frame from the video stream, to obtain a processed video stream.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202110390296.2A 2021-04-12 2021-04-12 Video processing method and device, electronic equipment and storage medium Pending CN113099272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110390296.2A CN113099272A (en) 2021-04-12 2021-04-12 Video processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113099272A true CN113099272A (en) 2021-07-09

Family

ID=76677162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110390296.2A Pending CN113099272A (en) 2021-04-12 2021-04-12 Video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113099272A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080109865A1 (en) * 2006-11-03 2008-05-08 Apple Computer, Inc. Dynamic adjustments of video streams
CN103533451A (en) * 2013-09-30 2014-01-22 广州华多网络科技有限公司 Method and system for regulating jitter buffer
US20150095509A1 (en) * 2013-09-30 2015-04-02 Verizon Patent And Licensing Inc. Adaptive buffers for media players
US20150319212A1 (en) * 2014-05-02 2015-11-05 Imagination Technologies Limited Media Controller
US20160345219A1 (en) * 2015-05-21 2016-11-24 At&T Mobility Ii Llc Facilitation of adaptive dejitter buffer between mobile devices
CN106303693A (en) * 2015-05-25 2017-01-04 北京视联动力国际信息技术有限公司 A kind of method and device of video data decoding
CN107483976A (en) * 2017-09-26 2017-12-15 武汉斗鱼网络科技有限公司 Live management-control method, device and electronic equipment
WO2018090573A1 (en) * 2016-11-18 2018-05-24 深圳市中兴微电子技术有限公司 Buffer space management method and device, electronic apparatus, and storage medium
CN109246486A (en) * 2018-10-16 2019-01-18 视联动力信息技术股份有限公司 A kind of framing method and device
CN110012363A (en) * 2019-04-18 2019-07-12 浙江工业大学 A kind of video chat system based on Session Initiation Protocol

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113903297A (en) * 2021-12-07 2022-01-07 深圳金采科技有限公司 Display control method and system of LED display screen
CN114422866A (en) * 2022-01-17 2022-04-29 深圳Tcl新技术有限公司 Video processing method and device, electronic equipment and storage medium
CN114422866B (en) * 2022-01-17 2023-07-25 深圳Tcl新技术有限公司 Video processing method and device, electronic equipment and storage medium
CN116055803A (en) * 2022-07-29 2023-05-02 荣耀终端有限公司 Video playing method and system and electronic equipment
CN116055803B (en) * 2022-07-29 2024-04-02 荣耀终端有限公司 Video playing method and system and electronic equipment

Similar Documents

Publication Publication Date Title
CN113099272A (en) Video processing method and device, electronic equipment and storage medium
KR101526081B1 (en) System and method for controllably viewing digital video streams captured by surveillance cameras
CN108063773B (en) Application service access method and device based on mobile edge computing
US20090096927A1 (en) System and method for video coding using variable compression and object motion tracking
CN108924491B (en) Video stream processing method and device, electronic equipment and storage medium
CN113141514B (en) Media stream transmission method, system, device, equipment and storage medium
CN106998485B (en) Video live broadcasting method and device
WO2022111198A1 (en) Video processing method and apparatus, terminal device and storage medium
EP3185480B1 (en) Method and apparatus for processing network jitter, and terminal device
CN113727185B (en) Video frame playing method and system
JP2017503399A (en) Handling of video frames damaged by camera movement
WO2021196994A1 (en) Encoding method and apparatus, terminal, and storage medium
CN110996122B (en) Video frame transmission method, device, computer equipment and storage medium
CN111343503B (en) Video transcoding method and device, electronic equipment and storage medium
CN112203126B (en) Screen projection method, screen projection device and storage medium
CN113259729B (en) Data switching method, server, system and storage medium
CN111277864B (en) Encoding method and device of live data, streaming system and electronic equipment
CN112449239B (en) Video playing method and device and electronic equipment
US9830946B2 (en) Source data adaptation and rendering
CN112954348B (en) Video encoding method and device, electronic equipment and storage medium
CN115460436B (en) Video processing method, storage medium and electronic device
CN112437279B (en) Video analysis method and device
CN112995780B (en) Network state evaluation method, device, equipment and storage medium
CN113115074B (en) Video jamming processing method and device
CN112995649B (en) Network terminal and network terminal evaluating method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination