CN112887754B - Video data processing method, device, equipment and medium based on real-time network

Info

Publication number
CN112887754B
Authority
CN
China
Prior art keywords
frame
time
frames
network
real
Prior art date
Legal status
Active
Application number
CN202110464876.1A
Other languages
Chinese (zh)
Other versions
CN112887754A (en)
Inventor
陈辉
杜沛力
张智
熊章
雷奇文
艾伟
胡国湖
Current Assignee
Wuhan Xingxun Intelligent Technology Co ltd
Original Assignee
Wuhan Xingxun Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Xingxun Intelligent Technology Co ltd
Priority to CN202110464876.1A
Publication of CN112887754A
Application granted
Publication of CN112887754B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/70Media network packetisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80Responding to QoS
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64723Monitoring of network processes or resources, e.g. monitoring of network load
    • H04N21/64738Monitoring network characteristics, e.g. bandwidth, congestion level
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention belongs to the technical field of video data processing and solves the technical problem of a poor user experience caused by video stuttering or screen splash when network quality varies during a video session. It provides a video data processing method, apparatus, device and medium based on a real-time network: the network bandwidth is detected in real time and real-time bandwidth information of the network is output; a video real-time data transmission mode matched with the real-time bandwidth information is selected to transmit the video data; the video data comprise image frames for buffering and image frames for transmission. The invention also includes an apparatus, a device and a medium for performing the above method. With this method, when the network is poor the transmitted images can still be decoded normally without reducing the image quality, which eliminates the mosaic phenomenon caused by a poor network, and when the network is good the bandwidth resources are fully utilized; the smoothness of the video session is ensured and the user's video experience is improved.

Description

Video data processing method, device, equipment and medium based on real-time network
Technical Field
The present invention relates to the field of video data processing technologies, and in particular, to a method, an apparatus, a device, and a medium for processing video data based on a real-time network.
Background
With the development of video technology, mobile terminals are widely used to view in real time the pictures captured by a camera terminal, so as to follow the activity of a target object in the camera's monitoring area. This is of great value for the care of the elderly and of infants, as it improves care efficiency. The specific process of viewing the camera terminal's live picture on the mobile terminal is as follows: the camera terminal buffers the captured pictures as image frames to form a video stream. A complete video stream comprises a number of I frames and a number of P frames, where an I frame is a key frame and a P frame is a basic frame, and each I frame is followed by several P frames. Taking a video frame rate of 20 frames/second as an example, one second contains 1 I frame and 19 P frames. The camera terminal uploads the frame images in buffering order, and the mobile terminal downloads and decodes them in upload order. During decoding, the I frame is decoded first, and a P frame can only be decoded successfully after its I frame has been decoded successfully. If decoding fails or is too slow, stuttering or screen splash appears on the mobile terminal. The cause of these phenomena is that a poor network provides insufficient bandwidth, so decoding the same image frame takes longer. The existing remedy for stuttering and screen splash is therefore to lower the code rate during the video session: lowering the code rate reduces the quality of each image frame, which reduces the bandwidth needed to decode each frame, ensures that each frame is decoded normally, and eliminates stuttering and screen splash, but at the cost of image quality.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video data processing method, apparatus, device and medium based on a real-time network, so as to solve the technical problem in the prior art that network quality differences cause stuttering or screen splash during a video session, resulting in a poor user experience.
The technical scheme adopted by the invention is as follows:
the invention provides a video data processing method based on a real-time network, which comprises the following steps:
detecting the network bandwidth in real time and outputting real-time bandwidth information of the network;
selecting a video real-time data transmission mode matched with the real-time bandwidth information according to the real-time bandwidth information;
transmitting video data according to the video real-time data transmission mode;
wherein the video data comprises image frames for buffering and image frames for transmission.
Preferably, when the video transmission code rate and the video frame rate are not changed, the selecting, according to the real-time bandwidth information, a video real-time data transmission mode matched with the real-time bandwidth information includes:
determining the coding interval time of the I frame according to the real-time bandwidth information;
outputting the number of P frames between adjacent I frames according to the real-time bandwidth information and the coding interval time of the I frames;
generating the transmission mode according to the number of the P frames and the coding interval time of the I frames and by combining the real-time bandwidth information;
wherein, the I frame is a key frame of the video, and the P frame is a common frame of the video.
Preferably, the generating the transmission mode according to the number of P frames and the coding interval time of the I frame and by combining the real-time bandwidth information includes:
a first transmission mode, representing smooth network transmission;
a second transmission mode, representing normal network transmission;
a third transmission mode, representing congested network transmission;
wherein:
the first transmission mode is as follows: the I frame and the P frames corresponding to the I frame are both transmitted, and the interval between two adjacent I frames is at most one unit of time;
the second transmission mode is as follows: the I frame and the P frames corresponding to the I frame are both transmitted, and at least two units of time elapse between two adjacent I frames;
the third transmission mode is as follows: only I frames are transmitted, the P frames corresponding to each I frame are not transmitted, and at least two units of time elapse between two adjacent I frames.
Preferably, if the transmission mode is the second transmission mode and the network bandwidth in the transmission process is lower than the network bandwidth of the second transmission mode, the second transmission mode further includes:
acquiring first time information corresponding to the network bandwidth lower than the normal network bandwidth;
acquiring second time information corresponding to the network bandwidth recovered to the normal network bandwidth;
according to the first time information, suspending uploading of a P frame after the current I frame corresponding to the first time information;
and starting to sequentially upload the P frames after the current I frame corresponding to the second time information according to the second time information.
Preferably, the current I frame corresponding to the first time information is recorded as a first I frame, and the current I frame corresponding to the second time information is recorded as a second I frame;
if the first I frame and the second I frame are different frames, not uploading all P frames behind the P frame corresponding to the first time information behind the first I frame;
the non-uploaded P frame is a P frame corresponding to the first I frame;
if the first I frame and the second I frame are the same frame, continuing to upload the rest P frames corresponding to the first I frame from the P frame corresponding to the first time information.
Preferably, if the first I frame and the second I frame are different frames, not uploading all P frames after the P frame corresponding to the first time information after the first I frame includes:
acquiring all image frames from the first time information to the second time information;
analyzing the picture of each image frame;
if each frame image is a non-still picture, inserting all P frames behind the P frame corresponding to the first time information behind the second I frame;
and if each frame image is a still picture, discarding all P frames which belong to the first I frame after the first time information.
Preferably, if each frame image is a non-still picture, all P frames following the P frame corresponding to the first time information inserted after the second I frame include:
acquiring all P frame images which belong to the first I frame after the first time information and a first P frame image which belongs to the second I frame;
analyzing the picture of each image frame, and inserting, after the second I frame, P frames corresponding to the first time information;
wherein the inserted P frames are all P frames whose pictures differ from that of the first P frame following the second I frame, and whose pictures also differ from one another.
The invention also provides a video data processing device based on the real-time network, which comprises:
a network detection module, configured to detect the network bandwidth in real time and output real-time bandwidth information of the network;
a video processing module, configured to select, according to the real-time bandwidth information, a video real-time data transmission mode matched with the real-time bandwidth information;
a data transmission module, configured to transmit video data according to the video real-time data transmission mode;
wherein the video data comprises image frames for buffering and image frames for transmission.
The present invention also provides an electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the method of any of the above.
The invention also provides a medium having stored thereon computer program instructions which, when executed by a processor, implement the method of any of the above.
In conclusion, the beneficial effects of the invention are as follows:
the invention provides a video data processing method, a device, equipment and a medium based on a real-time network, which are used for detecting the network in real time during video to obtain real-time bandwidth information of the network, and matching a transmission mode corresponding to the broadband information aiming at the real-time broadband information during video, wherein the interval time of key frames of the video is short when the network is smooth and real-time, the interval time of the key frames of the video is long when the network is normal, the interval time of the key frames of the video is long when the network is poor, and only the key frames are transmitted; the smoothness of the video process is ensured, and the video experience effect of a user is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below. Those skilled in the art can obtain other drawings from these drawings without creative effort, and all such drawings fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a video data processing method based on a real-time network according to embodiment 1 of the present invention;
fig. 2 is a schematic flow chart illustrating acquisition of a transmission mode in embodiment 1 of the present invention;
fig. 3 is a schematic flow chart illustrating a data transmission method when I frames are different before and after network recovery in embodiment 1 of the present invention;
fig. 4 is a schematic flowchart of analyzing untransmitted image frames according to embodiment 1 of the present invention;
fig. 5 is a flowchart illustrating a process of inserting a P frame not transmitted to a previous I frame into a subsequent I frame according to embodiment 1 of the present invention;
fig. 6 is a flowchart illustrating a data transmission method when I frames are the same before and after network recovery in embodiment 1 of the present invention;
fig. 7 is a schematic structural diagram of a video apparatus based on a real-time network according to embodiment 2 of the present invention;
fig. 8 is a schematic structural diagram of an electronic device in embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. It is noted that relational terms such as "first" and "second" are used herein solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. In the description of the present invention, terms such as "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation, and therefore are not to be construed as limiting the present invention. The terms "comprises", "comprising" and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element. In case of conflict, the embodiments of the present invention and the individual features of the embodiments may be combined with each other within the scope of the present invention.
Implementation mode one
Example 1
Referring to fig. 1, fig. 1 is a schematic flowchart of a video data processing method based on a real-time network according to embodiment 1 of the present invention; the method comprises the following steps:
s10: detecting the network bandwidth in real time and outputting real-time bandwidth information of the network;
in one embodiment, the real-time bandwidth information includes network smoothness, network normality, and network congestion.
Specifically, a real-time picture of the monitoring area covered by a camera is viewed through the App on a mobile phone. The camera buffers and uploads each image frame of the captured real-time picture, and the mobile terminal downloads and decodes the uploaded image frames, so that the monitoring picture of the camera terminal's monitoring area can be viewed remotely in real time. While the real-time video is being viewed, the bandwidth of the network is detected in real time to obtain real-time bandwidth information, which includes at least one of the following: smooth network, normal network, congested network, no network, and the like.
S11: selecting a video real-time data transmission mode matched with the real-time bandwidth information according to the real-time bandwidth information;
specifically, when the network is smooth, the interval time of key frames (I frames) is shortened, video image data is transmitted at a normal code rate and a normal frame rate, and the loss of image data can be prevented from being reduced when the decoding of the image frames fails; when the network is normal, the interval time of the I frames is prolonged, so that more common frames (P frames) are transmitted between the adjacent I frames, and the network bandwidth resources are inclined to the P frames in transmission and decoding; the video quality is mainly embodied by P frames, so that the transmission and decoding of the P frames are ensured, when a network is congested, the interval time of the I frames is prolonged, the camera terminal only uploads the cached I frames but not the P frames, the limited network bandwidth is used for transmitting the key frame I frames, the video can be ensured to be normally carried out, and the image quality of the video is not reduced.
In an embodiment, referring to fig. 2, when the bitrate of video transmission and the frame rate of the video are not changed, the S11 includes:
s111: determining the coding interval time of the I frame according to the real-time bandwidth information;
specifically, data of an I frame is larger than a P frame, the P frame needs to be decoded successfully after the corresponding I frame is decoded, and under the condition that the code rate of the transmitted video is fixed, the encoding interval time of the I frame is determined according to the real-time bandwidth information, for example: the smoothness of the network can shorten the coding interval time of the I frame, can prevent that when the decoding of the image frame fails, the lost image data is reduced, the coding interval time of the I frame can be prolonged due to network congestion and/or normal network, more network resources for transmitting the I frame can be released, and when the network is normal, the interval time of the I frame is prolonged, so that more common frames (P frames) are transmitted between the adjacent I frames, and the network bandwidth resources are transmitted to the P frames and are decoded obliquely; the video quality is mainly embodied by P frames, so that the transmission and decoding of the P frames are ensured, when a network is congested, the interval time of the I frames is prolonged, the camera terminal only uploads the cached I frames but not the P frames, the limited network bandwidth is used for transmitting the key frame I frames, the video can be ensured to be normally carried out, and the image quality of the video is not reduced.
It should be noted that the I frame is a full-frame intra-coded frame: it carries the full-frame image information after JPEG compression coding, and a complete image can be reconstructed as soon as the I frame is decoded. Because an I frame carries a large amount of information, the App receives all I frames whether the network is smooth, normal or congested, which guarantees the integrity of the video information shown to the user. A P frame is a forward-predicted frame: it is inter-coded with the preceding I frame and/or preceding P frame as reference and represents only some detail of the image. Because P-frame data is far smaller than I-frame data, transmitting P frames places a lower demand on network bandwidth. When the network is normal (a typical number of devices is using the network), increasing the number of P frames per I frame reduces the number of I frames and releases network resources; the additional P frames convey image detail, so the overall code rate of the image is reduced while the experience during the video session is preserved. When the network is congested, only the larger I frames are sent, so that the video can continue and the data remains complete; the P frames are not sent, which avoids allocating bandwidth to P-frame data and eliminates the mosaic phenomenon at the App.
Meanwhile, at the App, the segment of video consisting of an I frame and all the P frames following it can only be played after that I frame has been decoded successfully.
S112: outputting the number of P frames between adjacent I frames according to the real-time bandwidth information and the coding interval time of the I frames;
specifically, after the encoding time of an I frame is determined according to the frame rate of the video, the number of P frames existing between adjacent I frames can be determined; then, the transmission quantity of P frames between adjacent I frames is determined by combining the real-time broadband information, and the transmission quantity at least comprises one of the following: full transmission, partial transmission, or no transmission; such as: the frame rate is 20 frames/second, the interval between adjacent I frames is 1 second, the 1 st frame is an I frame, the later 19 frames are P frames, namely 19P frames are arranged between adjacent I frames; the frame rate is 20 frames/second, the interval between adjacent I frames is 4 seconds, the 1 st frame is an I frame, the later 79 frames are P frames, namely 79P frames are arranged between adjacent I frames; when the network is smooth, 19P frames are selected from adjacent I frames at intervals, and all P frames are transmitted, so that smooth, coherent and clear video can be ensured when the network is smooth, and meanwhile, the image data is less lost after the decoding of abnormal image frames fails; when the network is normal, 79P frames are selected from adjacent I frames at intervals, and all P frames are transmitted, so that smooth video and clear and coherent video pictures can be ensured when the network is normal; when the network is congested, 79P frames are selected between adjacent I frames, and the P frames are not uploaded or are partially uploaded; the video fluency and the video picture quality can be ensured when the network is congested.
S113: generating the transmission mode according to the number of the P frames and the coding interval time of the I frames and by combining the real-time bandwidth information;
wherein, the I frame is a key frame of the video, and the P frame is a common frame of the video.
Specifically, according to the buffering mechanism of the video data, each video stream segment includes one I frame and a number of P frames, and the subsequent P frames can only be played after the I frame has been decoded.
It should be noted that each video segment includes exactly one I frame, located at the first frame of the stream; in other words, the I frame is the first picture of the segment.
In one embodiment, the S113 includes:
a first transmission mode, representing smooth network transmission;
a second transmission mode, representing normal network transmission;
a third transmission mode, representing congested network transmission;
wherein:
the first transmission mode is as follows: the I frame and the P frames corresponding to the I frame are both transmitted, and the interval between two adjacent I frames is at most one unit of time;
the second transmission mode is as follows: the I frame and the P frames corresponding to the I frame are both transmitted, and at least two units of time elapse between two adjacent I frames;
the third transmission mode is as follows: only I frames are transmitted, the P frames corresponding to each I frame are not transmitted, and at least two units of time elapse between two adjacent I frames.
Specifically, the network is classified as smooth, normal or congested according to the network bandwidth information. At a frame rate of 20 frames/second with an interval of 1 second between adjacent I frames, each unit of time contains one I frame as its first frame followed by 19 P frames, i.e. there are 19 P frames between adjacent I frames. When the network is smooth, those 19 P frames are all transmitted, which keeps the video smooth, coherent and clear and limits the image data lost if an abnormal image frame fails to decode. When the network is normal, several consecutive units of time contain only one I frame, namely the first frame of the first unit of time, and every other frame is a P frame; if the interval between adjacent I frames is 4 seconds, there are 79 P frames between adjacent I frames, and the I frame and all 79 P frames are transmitted, which keeps the video smooth and its pictures clear and coherent. When the network is congested, several consecutive units of time likewise contain only one I frame, the first frame of the first unit of time, with every other frame a P frame; if the interval between adjacent I frames is 4 seconds, there are 79 P frames between adjacent I frames, and the P frames are not uploaded or are only partially uploaded, which preserves both the fluency and the picture quality of the video under congestion. Taking the unit of time as 1 second and the frame rate as 20 frames/second, the camera's local caching mechanism in the first transmission mode is to cache one I frame and then 19 consecutive P frames; in the second and third transmission modes it is to cache one I frame and then (n×20 + 19) P frames, where n is a positive integer greater than 1. In the second transmission mode, the buffered I frame and the (n×20 + 19) P frames are uploaded in cache order and then downloaded and decoded by the mobile App. In the third transmission mode, the buffered I frame is uploaded, downloaded and decoded by the mobile App, while the (n×20 + 19) P frames are not uploaded and are discarded or temporarily stored; the next I frame and its subsequent P frames are then buffered, and this process repeats so that the real-time picture can be viewed remotely.
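The sketch below restates the caching and upload policy just described under the same assumptions (unit of time 1 s, frame rate 20 frames/s); the function names and the use of plain strings for the mode are illustrative choices, not taken from the patent.

```python
def p_frames_cached_per_group(mode: str, frame_rate: int = 20, n: int = 2) -> int:
    """P frames cached after each I frame: 19 in the first mode, n*20 + 19 in the
    second and third modes (the description takes n as a positive integer > 1)."""
    if mode == "first":
        return frame_rate - 1
    return n * frame_rate + (frame_rate - 1)

def frames_to_upload(mode: str, cached_group: list) -> list:
    """cached_group is one buffered group [I, P, P, ...] in cache order.
    The first and second modes upload the whole group; the third mode uploads
    only the leading I frame and leaves the P frames in the local cache."""
    return cached_group[:1] if mode == "third" else list(cached_group)
```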
In an embodiment, referring to fig. 3, if the transmission mode is the second transmission mode and the network bandwidth in the transmission process is lower than the network bandwidth of the second transmission mode, the second transmission mode further includes:
s30: acquiring first time information corresponding to the network bandwidth lower than the normal network bandwidth;
s40: acquiring second time information corresponding to the network bandwidth recovered to the normal network bandwidth;
s50: according to the first time information, suspending uploading of a P frame after the current I frame corresponding to the first time information;
specifically, in the video process, the network bandwidth is detected in real time, when the network speed of the network is lower than a network bandwidth threshold value set by the normal network speed, the corresponding time is recorded as first time information, at this time, image frames in the real-time video generated by the camera terminal are judged, if the image frames are I frames, the image frames are cached and then uploaded, and if the image frames are P frames, the image frames are cached only in the local part of the camera, and the P frames are not uploaded any more.
S60: according to the second time information, starting to sequentially upload the P frames after the current I frame corresponding to the second time information;
the camera terminal uploads the locally cached image frames to a third-party processing mechanism, which may be a sending mechanism corresponding to the internet and/or a local area network.
Specifically, in the video process, the network bandwidth is detected in real time, when the network speed of the network is restored to the network bandwidth threshold of the normal network from the network bandwidth threshold set lower than the normal network speed, the corresponding time is recorded as second time information, the I frame of the second time information is obtained, and the P frames after the I frame are sequentially uploaded.
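A minimal sketch of this pause/resume bookkeeping, under the assumption that bandwidth samples arrive periodically and that a single threshold marks the normal network speed; the class and method names are illustrative.

```python
import time

class PFrameUploadGate:
    """Records the first time information (bandwidth drops below the normal
    threshold) and the second time information (bandwidth recovers), and tells
    the camera side whether a frame may be uploaded right now."""

    def __init__(self, normal_bandwidth_kbps: float):
        self.threshold = normal_bandwidth_kbps
        self.paused = False
        self.first_time = None   # first time information
        self.second_time = None  # second time information

    def on_bandwidth_sample(self, bandwidth_kbps: float) -> None:
        if bandwidth_kbps < self.threshold and not self.paused:
            self.paused = True
            self.first_time = time.time()    # suspend P-frame uploads from here
        elif bandwidth_kbps >= self.threshold and self.paused:
            self.paused = False
            self.second_time = time.time()   # resume P-frame uploads from here

    def may_upload(self, frame_type: str) -> bool:
        """I frames are always cached and uploaded; P frames are only cached
        locally while uploads are paused."""
        return frame_type == "I" or not self.paused
```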
In an embodiment, the current I frame corresponding to the first time information is recorded as a first I frame, and the current I frame corresponding to the second time information is recorded as a second I frame;
if the first I frame and the second I frame are different frames, not uploading all P frames behind the P frame corresponding to the first time information behind the first I frame;
the non-uploaded P frame is a P frame corresponding to the first I frame;
if the first I frame and the second I frame are the same frame, continuing to upload the rest P frames corresponding to the first I frame from the P frame corresponding to the first time information.
Specifically, during video data transmission, when the network bandwidth falls below the normal network bandwidth, that time is recorded as the first time information and the I frame current at that moment is recorded as the first I frame; when the network bandwidth recovers to the normal network bandwidth, that time is recorded as the second time information and the I frame current at that moment is recorded as the second I frame. If the two I frames are different, uploading continues after the second I frame and the P frames belonging to the first I frame are not uploaded. If the two I frames are the same I frame, uploading of the P frames belonging to the first I frame continues, starting from the P frame corresponding to the first time information that is cached locally at the camera.
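The sketch below captures this resume decision; the identifiers are illustrative, and the case where the I frame has changed defers to the still/non-still analysis described next (the skipped P frames are either discarded or re-inserted after the second I frame).

```python
def p_frames_to_resume(first_i_frame_id: int, second_i_frame_id: int,
                       skipped_p_frames: list) -> list:
    """skipped_p_frames holds the P frames of the first I frame that were cached
    but not uploaded between the first and second time information, in cache
    order.  Same I frame: continue uploading them from the paused position.
    Different I frame: do not upload them here (they are handled separately by
    the still/non-still picture analysis)."""
    if first_i_frame_id == second_i_frame_id:
        return list(skipped_p_frames)
    return []
```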
In an embodiment, referring to fig. 4, if the first I frame and the second I frame are different frames, not uploading all P frames after the P frame corresponding to the first time information after the first I frame includes:
s01: acquiring all image frames from the first time information to the second time information;
specifically, all image frames cached at the camera terminal in the network congestion period are acquired, and the image frames comprise a P frame or a P frame and an I frame;
s02: analyzing the picture of each image frame;
specifically, at the camera terminal, the image detection model preset in the camera is utilized to analyze the picture of each frame of image, and whether each frame of image is consistent or not is judged, namely whether personnel activities exist or not is judged; the human activity includes at least one of: increase and decrease of people, movement of limbs, etc.
S03: if each frame image is a non-still picture, inserting all P frames behind the P frame corresponding to the first time information behind the second I frame;
specifically, when all the frame images are found to be the same frame, it is considered that there is no change in the monitored frame in the period, and all the P frames from the first time information to the second I frame are inserted into the second I frame, specifically, between the second I frame and the first P frame of the second I frame.
In one embodiment, referring to fig. 5, the S03 includes:
s031: acquiring all P frame images which belong to the first I frame after the first time information and a first P frame image which belongs to the second I frame;
s032: analyzing the image picture of each frame of image, and inserting a P frame corresponding to the first time information after the second I frame;
wherein the inserted P frames are all P frames whose pictures differ from that of the first P frame following the second I frame, and whose pictures also differ from one another.
Specifically, the picture of each P frame between the first time information and the second I frame is compared with the picture of the first P frame of the second I frame. All frames that differ from that first P frame, and whose pictures also differ from one another, are screened out and marked as moving-picture image frames; these moving-picture image frames are inserted, in cache order, between the second I frame and the first P frame of the second I frame. This reduces and prevents the loss of key pictures and preserves the video quality.
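A sketch of this screening step, under the same assumption as above that a simple mean absolute difference of decoded pictures stands in for the unspecified picture comparison; the names and threshold are illustrative.

```python
import numpy as np

def moving_picture_frames(skipped_p_frames: list, first_p_after_second_i,
                          diff_threshold: float = 2.0) -> list:
    """Select, in cache order, the skipped P frames (decoded pictures) that differ
    from the first P frame after the second I frame and from each other; the
    selected frames are the ones inserted between the second I frame and that
    first P frame."""
    selected = []
    references = [np.asarray(first_p_after_second_i, dtype=np.int16)]
    for frame in skipped_p_frames:
        picture = np.asarray(frame, dtype=np.int16)
        if all(np.mean(np.abs(picture - ref)) >= diff_threshold for ref in references):
            selected.append(frame)
            references.append(picture)   # keep selected pictures mutually distinct
    return selected
```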
S04: and if each frame image is a still picture, discarding all P frames which belong to the first I frame after the first time information.
S12: transmitting video data according to the video real-time data transmission mode;
wherein the video data comprises image frames for buffering and image frames for transmission.
In one embodiment, referring to fig. 6, the S10 includes:
s101: acquiring the number of image frames cached by a camera terminal and the number of image frames downloaded by a mobile terminal in real time;
s102: and obtaining the real-time network bandwidth information according to the relation between the number of the image frames downloaded by the mobile terminal and the number of the image frames cached by the camera terminal.
Specifically, when the network is stable, the number of I frames and P frames in each video segment is the same, so the position of the image frame currently being downloaded and decoded by the App can be compared with the position of the image frame currently being uploaded by the camera terminal. From the one-to-one correspondence that holds in a stable transmission state, it can be determined whether the network has changed at that moment, yielding the real-time network bandwidth information.
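A minimal sketch of this comparison: the gap between the camera's upload position and the App's download position is read as a sign of the network state. The lag thresholds are illustrative assumptions, not values from the patent.

```python
def estimate_network_state(camera_upload_index: int, app_download_index: int,
                           smooth_lag: int = 1, congested_lag: int = 20) -> str:
    """Infer the real-time network state from how far the frame index being
    downloaded and decoded by the App lags behind the frame index being
    uploaded by the camera terminal."""
    lag = camera_upload_index - app_download_index
    if lag <= smooth_lag:
        return "smooth"
    if lag <= congested_lag:
        return "normal"
    return "congested"
```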
With the real-time-network-based video data processing method of this embodiment, the network is detected in real time during the video session to obtain its real-time bandwidth information, and a transmission mode corresponding to that bandwidth information is matched: when the network is smooth, the interval between video key frames is short; when the network is normal, the interval between video key frames is long; when the network is poor, the interval between key frames is long and only key frames are transmitted. In this way, when the network is poor the transmitted images can still be decoded normally without reducing the image quality, which eliminates the mosaic phenomenon caused by a poor network, and when the network is good the bandwidth resources are fully utilized; the smoothness of the video session is ensured and the user's video experience is improved.
Example 2
Referring to fig. 7, fig. 7 is a schematic structural diagram of a video data processing apparatus based on a real-time network in embodiment 2 of the present invention, and embodiment 2 is a video data processing method based on a real-time network in embodiment 1, and further provides a video data processing apparatus based on a real-time network, where the apparatus includes:
a network detection module, configured to detect the network bandwidth in real time and output real-time bandwidth information of the network;
a video processing module, configured to select, according to the real-time bandwidth information, a video real-time data transmission mode matched with the real-time bandwidth information;
a data transmission module, configured to transmit video data according to the video real-time data transmission mode;
wherein the video data comprises image frames for buffering and image frames for transmission.
With the real-time-network-based video data processing device of this embodiment, the network is detected in real time during the video session to obtain its real-time bandwidth information, and a transmission mode corresponding to that bandwidth information is matched: when the network is smooth, the interval between video key frames is short; when the network is normal, the interval between video key frames is long; when the network is poor, the interval between key frames is long and only key frames are transmitted. In this way, when the network is poor the transmitted images can still be decoded normally without reducing the image quality, which eliminates the mosaic phenomenon caused by a poor network, and when the network is good the bandwidth resources are fully utilized; the smoothness of the video session is ensured and the user's video experience is improved.
In one embodiment, the real-time bandwidth information includes network smoothness, network normality, and network congestion.
In an embodiment, when the video transmission rate and the frame rate of the video are not changed, the video processing module includes:
an image frame encoding unit: determining the coding interval time of the I frame according to the real-time bandwidth information;
number of image frames unit: outputting the number of P frames between adjacent I frames according to the real-time bandwidth information and the coding interval time of the I frames;
an image frame transmission unit: generating the transmission mode according to the number of the P frames and the coding interval time of the I frames and by combining the real-time bandwidth information;
wherein, the I frame is a key frame of the video, and the P frame is a common frame of the video.
In one embodiment, the image frame transmission unit includes:
a first transmission mode, representing smooth network transmission;
a second transmission mode, representing normal network transmission;
a third transmission mode, representing congested network transmission;
wherein:
the first transmission mode is as follows: the I frame and the P frames corresponding to the I frame are both transmitted, and the interval between two adjacent I frames is at most one unit of time;
the second transmission mode is as follows: the I frame and the P frames corresponding to the I frame are both transmitted, and at least two units of time elapse between two adjacent I frames;
the third transmission mode is as follows: only I frames are transmitted, the P frames corresponding to each I frame are not transmitted, and at least two units of time elapse between two adjacent I frames.
In an embodiment, if the transmission mode is the second transmission mode and the network bandwidth in the transmission process is lower than the network bandwidth of the second transmission mode, the second transmission mode further includes:
a first time unit: acquiring first time information corresponding to the network bandwidth lower than the normal network bandwidth;
a second time unit: acquiring second time information corresponding to the network bandwidth recovered to the normal network bandwidth;
a first time image frame unit: according to the first time information, suspending uploading of a P frame after the current I frame corresponding to the first time information;
a second temporal image frame unit: according to the second time information, starting to sequentially upload the P frames after the current I frame corresponding to the second time information;
the camera terminal uploads the locally cached image frames to a third-party processing mechanism, which may be a sending mechanism corresponding to the internet and/or a local area network.
In an embodiment, the current I frame corresponding to the first time information is recorded as a first I frame, and the current I frame corresponding to the second time information is recorded as a second I frame;
if the first I frame and the second I frame are different frames, not uploading all P frames behind the P frame corresponding to the first time information behind the first I frame;
the non-uploaded P frame is a P frame corresponding to the first I frame;
if the first I frame and the second I frame are the same frame, continuing to upload the rest P frames corresponding to the first I frame from the P frame corresponding to the first time information.
In an embodiment, if the first I frame and the second I frame are different frames, not uploading all P frames after the P frame corresponding to the first time information after the first I frame includes:
an image frame acquisition unit: acquiring all image frames from the first time information to the second time information;
an image picture analysis unit: analyzing the picture of each image frame;
image frame screening unit: if each frame image is a non-still picture, inserting all P frames behind the P frame corresponding to the first time information behind the second I frame;
in one embodiment, the image frame screening unit includes:
an image frame classification acquisition unit: acquiring all P frame images which belong to the first I frame after the first time information and a first P frame image which belongs to the second I frame;
an image frame insertion unit, configured to analyze the picture of each frame and insert, after the second I frame, the P frames corresponding to the first time information;
wherein the inserted P frames are all P frames whose pictures differ from that of the first P frame following the second I frame, and whose pictures also differ from one another.
An image frame discarding unit: and if each frame image is a still picture, discarding all P frames which belong to the first I frame after the first time information.
Wherein the video data comprises image frames for buffering and image frames for transmission.
In one embodiment, the network detection unit comprises:
a data acquisition unit: acquiring the number of image frames cached by a camera terminal and the number of image frames downloaded by a mobile terminal in real time;
a data analysis unit: and obtaining the real-time network bandwidth information according to the relation between the number of the image frames downloaded by the mobile terminal and the number of the image frames cached by the camera terminal.
With the real-time-network-based video data processing device of this embodiment, the network is detected in real time during the video session to obtain its real-time bandwidth information, and a transmission mode corresponding to that bandwidth information is matched: when the network is smooth, the interval between video key frames is short; when the network is normal, the interval between video key frames is long; when the network is poor, the interval between key frames is long and only key frames are transmitted. In this way, when the network is poor the transmitted images can still be decoded normally without reducing the image quality, which eliminates the mosaic phenomenon caused by a poor network, and when the network is good the bandwidth resources are fully utilized; the smoothness of the video session is ensured and the user's video experience is improved.
Example 3:
the present invention provides an electronic device and medium, as shown in fig. 8, comprising at least one processor, at least one memory, and computer program instructions stored in the memory.
Specifically, the processor may include a central processing unit (CPU), or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present invention. The electronic device includes at least one of the following: a camera, a mobile device with a camera, or a wearable device with a camera.
The memory may include mass storage for data or instructions. By way of example and not limitation, the memory may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is non-volatile solid-state memory. In a particular embodiment, the memory includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor reads and executes the computer program instructions stored in the memory to implement any one of the video data processing methods based on the real-time network in the above embodiment modes.
In one example, the electronic device may also include a communication interface and a bus. The processor, the memory and the communication interface are connected through a bus and complete mutual communication.
The communication interface is mainly used for realizing communication among modules, devices, units and/or equipment in the embodiment of the invention.
A bus comprises hardware, software, or both that couple the components of the electronic device to one another. By way of example and not limitation, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low pin count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. A bus may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated.
In summary, embodiments of the present invention provide a method, an apparatus, a device and a medium for processing video data based on a real-time network. The network is detected in real time during a video session to obtain its real-time bandwidth information, and a transmission mode corresponding to that bandwidth information is matched: when the network is smooth, the interval between video key frames is short; when the network is normal, the interval between video key frames is long; when the network is poor, the interval between key frames is long and only key frames are transmitted. With this method, when the network is poor the transmitted images can still be decoded normally without reducing the image quality, which eliminates the mosaic phenomenon caused by a poor network, and when the network is good the bandwidth resources are fully utilized; the smoothness of the video session is ensured and the user's video experience is improved.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A video data processing method based on a real-time network, the method comprising:
detecting the network bandwidth in real time and outputting real-time bandwidth information of the network;
selecting a video real-time data transmission mode matched with the real-time bandwidth information according to the real-time bandwidth information;
transmitting video data according to the video real-time data transmission mode;
when the video transmission code rate and the video frame rate are not changed, selecting a video real-time data transmission mode matched with the real-time bandwidth information according to the real-time bandwidth information comprises the following steps:
determining the coding interval time of the I frame according to the real-time bandwidth information;
outputting the number of P frames between adjacent I frames according to the real-time bandwidth information and the coding interval time of the I frames;
generating the transmission mode according to the number of the P frames and the coding interval time of the I frames and by combining the real-time bandwidth information;
wherein the I frame is a key frame of the video, the P frame is an ordinary frame of the video, and the coding interval time comprises: when the network is smooth, the interval between adjacent coded I frames is at most one unit time, and when the network is congested or normal, the interval between adjacent coded I frames is at least two unit times; when the network is smooth or normal, both the I frame and the P frames corresponding to the I frame are transmitted, and when the network is congested, only the I frame is transmitted and the P frames corresponding to the I frame are not transmitted.
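As a non-limiting illustration of the coding interval logic in claim 1 (not part of the claims), the sketch below derives the I-frame coding interval, the number of P frames between adjacent I frames, and whether P frames are sent at all, assuming a fixed frame rate and assuming that every non-I frame in a group of pictures is a P frame; FRAME_RATE, the mode labels, and the GopConfig structure are hypothetical.

```python
from dataclasses import dataclass

FRAME_RATE = 25  # hypothetical number of frames per unit time (e.g. per second)

@dataclass
class GopConfig:
    i_frame_interval: int   # unit times between adjacent coded I frames
    p_frames_per_gop: int   # number of P frames between adjacent I frames
    send_p_frames: bool     # whether P frames are transmitted at all

def build_gop_config(mode: str) -> GopConfig:
    """mode is one of the hypothetical labels 'smooth', 'normal', 'congested'."""
    interval = 1 if mode == "smooth" else 2   # at most one unit time when smooth, at least two otherwise
    p_count = interval * FRAME_RATE - 1       # remaining frames of the group of pictures are P frames
    send_p = mode != "congested"              # when congested, only I frames are transmitted
    return GopConfig(interval, p_count, send_p)
```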
2. The video data processing method based on the real-time network according to claim 1, wherein the transmission mode corresponding to a smooth network is denoted as a first transmission mode, the transmission mode corresponding to a normal network is denoted as a second transmission mode, and the transmission mode corresponding to a congested network is denoted as a third transmission mode; if the current transmission mode is the second transmission mode and the network bandwidth during transmission falls below the bandwidth of the second transmission mode, the second transmission mode further comprises:
acquiring first time information corresponding to the moment when the network bandwidth falls below the normal network bandwidth;
acquiring second time information corresponding to the moment when the network bandwidth recovers to the normal network bandwidth;
suspending, according to the first time information, the uploading of the P frames following the current I frame corresponding to the first time information;
and starting, according to the second time information, to sequentially upload the P frames following the current I frame corresponding to the second time information.
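A non-limiting sketch of the suspend/resume behaviour in claim 2 (not part of the claims) might track the first and second time information in a small uploader object, as below; the class name, method names, and frame-type strings are hypothetical.

```python
class PFrameUploader:
    def __init__(self):
        self.paused = False
        self.first_time = None   # first time information: bandwidth fell below the normal level
        self.second_time = None  # second time information: bandwidth recovered to the normal level

    def on_bandwidth_drop(self, now: float):
        self.first_time = now
        self.paused = True       # suspend uploading P frames after the current I frame

    def on_bandwidth_recovered(self, now: float):
        self.second_time = now
        self.paused = False      # resume uploading P frames sequentially

    def should_upload(self, frame_type: str) -> bool:
        if frame_type == "I":
            return True          # I frames are always uploaded
        return not self.paused   # P frames are uploaded only while not paused
```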
3. The real-time network-based video data processing method of claim 2, wherein the current I frame corresponding to the first time information is denoted as a first I frame, and the current I frame corresponding to the second time information is denoted as a second I frame;
if the first I frame and the second I frame are different frames, not uploading any of the P frames, after the first I frame, that follow the P frame corresponding to the first time information;
wherein the P frames that are not uploaded are P frames corresponding to the first I frame;
if the first I frame and the second I frame are the same frame, continuing to upload the remaining P frames corresponding to the first I frame, starting from the P frame corresponding to the first time information.
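As a non-limiting illustration of claim 3 (not part of the claims), the held-back P frames might be handled according to whether the first and second I frames identify the same group of pictures; the function and parameter names below are hypothetical, and the different-frame branch defers to the still/non-still handling of claims 4 and 5.

```python
def handle_recovery(held_p_frames: list, first_i_gop: int, second_i_gop: int) -> list:
    """Return the P frames whose upload continues once the bandwidth has recovered."""
    if first_i_gop == second_i_gop:
        # Same I frame: continue uploading the remaining P frames of that I frame,
        # starting from the P frame corresponding to the first time information.
        return held_p_frames
    # Different I frames: the P frames of the first I frame are not uploaded as-is;
    # claims 4 and 5 decide whether they are discarded or re-inserted later.
    return []
```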
4. The method of claim 3, wherein, if the first I frame and the second I frame are different frames, the step of not uploading the P frames after the P frame corresponding to the first time information, after the first I frame, comprises:
acquiring all image frames from the first time information to the second time information;
analyzing the picture of each image frame;
if each image frame is a non-still picture, inserting, after the second I frame, all P frames that follow the P frame corresponding to the first time information;
and if each image frame is a still picture, discarding all P frames that belong to the first I frame and follow the first time information.
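A non-limiting sketch of the still/non-still decision in claim 4 (not part of the claims) could compare consecutive decoded frames with a simple pixel-difference test, as below; the use of NumPy arrays, the STILL_THRESHOLD value, and the helper names are hypothetical.

```python
import numpy as np

STILL_THRESHOLD = 2.0  # hypothetical mean absolute per-pixel difference threshold

def is_still_sequence(frames) -> bool:
    """True if consecutive decoded frames (NumPy arrays) barely differ from each other."""
    return all(
        float(np.mean(np.abs(frames[i].astype(np.int16) - frames[i + 1].astype(np.int16)))) < STILL_THRESHOLD
        for i in range(len(frames) - 1)
    )

def dispose_held_p_frames(frames_between, held_p_frames):
    """Return the P frames to insert after the second I frame, or [] to discard them."""
    if is_still_sequence(frames_between):
        return []           # still picture: discard the P frames belonging to the first I frame
    return held_p_frames    # non-still picture: re-insert them after the second I frame
```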
5. The method of claim 4, wherein, if each image frame is a non-still picture, inserting, after the second I frame, all P frames that follow the P frame corresponding to the first time information comprises:
acquiring all P frame images that belong to the first I frame and follow the first time information, and the first P frame image that belongs to the second I frame;
analyzing the picture of each of these frame images, and inserting, after the second I frame, the P frames corresponding to the first time information;
wherein the P frames to be inserted are all P frames whose pictures differ from the picture of the first P frame following the second I frame, and whose pictures also differ from one another.
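As a non-limiting illustration of claim 5 (not part of the claims), the held P frames might be filtered so that only frames whose pictures differ both from the first P frame after the second I frame and from one another are inserted; the NumPy-based difference test and the DIFF_THRESHOLD value are hypothetical.

```python
import numpy as np

DIFF_THRESHOLD = 2.0  # hypothetical mean absolute per-pixel difference threshold

def pictures_differ(a, b) -> bool:
    """True if two decoded frames (NumPy arrays) show visibly different pictures."""
    return float(np.mean(np.abs(a.astype(np.int16) - b.astype(np.int16)))) >= DIFF_THRESHOLD

def select_p_frames_to_insert(held_p_frames, first_p_after_second_i):
    """Keep only held P frames that differ from the reference P frame and from each other."""
    selected = []
    for frame in held_p_frames:
        if not pictures_differ(frame, first_p_after_second_i):
            continue   # same picture as the first P frame after the second I frame
        if any(not pictures_differ(frame, kept) for kept in selected):
            continue   # duplicate of a frame already selected for insertion
        selected.append(frame)
    return selected
```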
6. A video data processing apparatus based on a real-time network, comprising:
a network detection module, configured to detect the network bandwidth in real time and output the real-time bandwidth information of the network;
a video processing module, configured to select a video real-time data transmission mode matched with the real-time bandwidth information according to the real-time bandwidth information;
a data transmission module, configured to transmit video data according to the video real-time data transmission mode;
when the video transmission code rate and the video frame rate are not changed, selecting a video real-time data transmission mode matched with the real-time bandwidth information according to the real-time bandwidth information comprises the following steps:
determining the coding interval time of the I frame according to the real-time bandwidth information;
outputting the number of P frames between adjacent I frames according to the real-time bandwidth information and the coding interval time of the I frames;
generating the transmission mode according to the number of the P frames and the coding interval time of the I frames and by combining the real-time bandwidth information;
wherein the I frame is a key frame of the video, the P frame is an ordinary frame of the video, and the coding interval time comprises: when the network is smooth, the interval between adjacent coded I frames is at most one unit time, and when the network is congested or normal, the interval between adjacent coded I frames is at least two unit times; when the network is smooth or normal, both the I frame and the P frames corresponding to the I frame are transmitted, and when the network is congested, only the I frame is transmitted and the P frames corresponding to the I frame are not transmitted.
7. An electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the method of any of claims 1-5.
8. A computer-readable medium having stored thereon computer program instructions, which, when executed by a processor, implement the method of any one of claims 1-5.
CN202110464876.1A 2021-04-28 2021-04-28 Video data processing method, device, equipment and medium based on real-time network Active CN112887754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110464876.1A CN112887754B (en) 2021-04-28 2021-04-28 Video data processing method, device, equipment and medium based on real-time network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110464876.1A CN112887754B (en) 2021-04-28 2021-04-28 Video data processing method, device, equipment and medium based on real-time network

Publications (2)

Publication Number Publication Date
CN112887754A CN112887754A (en) 2021-06-01
CN112887754B CN112887754B (en) 2021-08-13

Family

ID=76040687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110464876.1A Active CN112887754B (en) 2021-04-28 2021-04-28 Video data processing method, device, equipment and medium based on real-time network

Country Status (1)

Country Link
CN (1) CN112887754B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113873235A (en) * 2021-06-11 2021-12-31 宁波星巡智能科技有限公司 Adaptive network video transmission method, apparatus, device and medium
CN113489989B (en) * 2021-06-30 2023-08-11 宁波星巡智能科技有限公司 Video data transmission method, device, equipment and medium during battery camera awakening
CN115278376B (en) * 2022-05-25 2024-03-22 西安万像电子科技有限公司 Audio and video data transmission method and device
CN116708753B (en) * 2022-12-19 2024-04-12 荣耀终端有限公司 Method, device and storage medium for determining preview blocking reason
CN116634089B (en) * 2023-07-24 2023-11-03 苏州浪潮智能科技有限公司 Video transmission method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778426A (en) * 2010-01-21 2010-07-14 深圳市同洲电子股份有限公司 Method and equipment for video data stream transmission in mobile wireless network
CN101909210A (en) * 2009-12-17 2010-12-08 新奥特(北京)视频技术有限公司 Network streaming media server and low-bandwidth high-quality solution thereof
CN101924924A (en) * 2010-07-28 2010-12-22 厦门雅迅网络股份有限公司 Adaptive transmission method and system for wireless remote video monitoring

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527834A (en) * 2009-03-26 2009-09-09 浙江大华技术股份有限公司 Wireless narrowband network video transmission method
CN102196249B (en) * 2011-05-17 2012-12-05 浙江宇视科技有限公司 Monitoring data playback method, EC (Encoder) and video management server
US8904024B2 (en) * 2012-08-03 2014-12-02 Ittiam Systems (P) Ltd System and method for low delay fast update for video streaming
US11089373B2 (en) * 2016-12-29 2021-08-10 Sling Media Pvt Ltd Seek with thumbnail generation and display during placeshifting session

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101909210A (en) * 2009-12-17 2010-12-08 新奥特(北京)视频技术有限公司 Network streaming media server and low-bandwidth high-quality solution thereof
CN101778426A (en) * 2010-01-21 2010-07-14 深圳市同洲电子股份有限公司 Method and equipment for video data stream transmission in mobile wireless network
CN101924924A (en) * 2010-07-28 2010-12-22 厦门雅迅网络股份有限公司 Adaptive transmission method and system for wireless remote video monitoring

Also Published As

Publication number Publication date
CN112887754A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN112887754B (en) Video data processing method, device, equipment and medium based on real-time network
Fouladi et al. Salsify: Low-latency network video through tighter integration between a video codec and a transport protocol
US11785148B2 (en) Data transmission control method, information sending end and receiving end and aerial vehicle image transmission system
US10009630B2 (en) System and method for encoding video content using virtual intra-frames
CN110784740A (en) Video processing method, device, server and readable storage medium
CN108347645B (en) Method and device for decoding and displaying video frame
CN108347580B (en) Method for processing video frame data and electronic equipment
CN111263192A (en) Video processing method and related equipment
CN110519640B (en) Video processing method, encoder, CDN server, decoder, device, and medium
CN111131817A (en) Screen sharing method, device, storage medium and screen sharing system
CN112601072A (en) Method and device for evaluating video service quality
CN112423140A (en) Video playing method and device, electronic equipment and storage medium
CN107800989B (en) Video display method and system based on dynamic frame rate detection and network video recorder
CN103916620A (en) Method and device for video call and mobile terminal
CN113794903A (en) Video image processing method and device and server
CN111541514B (en) Message transmission method and device
CN113099272A (en) Video processing method and device, electronic equipment and storage medium
CN109862400B (en) Streaming media transmission method, device and system
CN105898358B (en) The sending method and device of video data
CN113259660B (en) Video compression transmission method, device, equipment and medium based on dynamic coding frame
CN108282674B (en) Video transmission method, terminal and system
JP2002027462A (en) Moving picture receiver and video output device
CN115134582A (en) Video quality evaluation method and device
CN115277497B (en) Transmission delay time measurement method, device, electronic equipment and storage medium
US20230077785A1 (en) Communication control system and communication control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant