CN115643366A - Video transmission method, device, equipment and computer program product - Google Patents

Video transmission method, device, equipment and computer program product

Info

Publication number
CN115643366A
CN115643366A
Authority
CN
China
Prior art keywords
information
rate
collision
frame
video
Legal status
Pending
Application number
CN202110816052.6A
Other languages
Chinese (zh)
Inventor
李训文
楼坚
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Zhejiang Co Ltd
Priority to CN202110816052.6A
Publication of CN115643366A

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video transmission method, apparatus, device and computer program product, wherein the video transmission method comprises the following steps: when it is detected that multiple cameras transmit video simultaneously in a single cell, acquiring rate information of the video stream sent by the single cell; if it is detected, based on the rate information, that key-frame (I-frame) collision behavior exists, performing collision analysis based on the rate information to obtain the identification information and collision time information of the cameras in which the I-frame collision occurs; and performing I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information. The invention avoids I-frame collisions in video transmission, thereby reducing the peak rate of the video stream and in turn reducing the network transmission bandwidth requirement.

Description

Video transmission method, device, equipment and computer program product
Technical Field
The present invention relates to the field of video technologies, and in particular, to a video transmission method, apparatus, device, and computer program product.
Background
With the rapid development of communication network technology, multiple cameras are often deployed in one area to meet the needs of industry customers, that is, several cameras are deployed within a single cell or under a single base station, for example in a residential community, and the video backhaul addresses of these cameras are usually the same. As the number of cameras under a cell or base station grows and multiple cameras transmit video simultaneously, the density of I-frames (intra-coded pictures) carried by the network increases and the probability of I-frame collision rises sharply, so the peak rate of the video stream becomes high, causing increased video delay, stuttering, image corruption and similar problems. Therefore, how to avoid I-frame collisions and thereby reduce the peak rate of multi-camera video streams is a problem that urgently needs to be solved.
Disclosure of Invention
The invention mainly aims to provide a video transmission method, apparatus, device and computer program product, with the purpose of avoiding I-frame collisions in video transmission so as to reduce the peak rate of the video stream and in turn reduce the network transmission bandwidth requirement.
In order to achieve the above object, the present invention provides a video transmission method, including the steps of:
when it is detected that multiple cameras transmit video simultaneously in a single cell, acquiring rate information of the video stream sent by the single cell;
if it is detected, based on the rate information, that key-frame (I-frame) collision behavior exists, performing collision analysis based on the rate information to obtain the identification information and collision time information of the cameras in which the I-frame collision occurs;
and performing I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
Optionally, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, where the statistical period includes a plurality of measurement periods and the measurement periods correspond one to one to the measurement rates, and after the step of acquiring the rate information of the video stream sent by the single cell when it is detected that multiple cameras transmit video simultaneously in the single cell, the method further includes:
dividing each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
if at least one of the rate ratios is greater than a preset threshold, determining that I-frame collision behavior exists;
and if the rate ratios are all less than or equal to the preset threshold, determining that no I-frame collision behavior exists.
Optionally, the rate information includes a peak rate and a mean rate within a statistical period, and the step of performing collision analysis based on the rate information to obtain the identification information of the cameras in which the I-frame collision occurs includes:
determining the instant at which the peak rate occurs, and acquiring the bandwidth rates of the multiple cameras at that instant;
comparing each bandwidth rate with the mean rate, determining the cameras whose bandwidth rate is greater than the mean rate, and acquiring the camera identification information of those cameras, which are the cameras in whose video transmission the I-frame collision occurs.
Optionally, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond one to one to the measurement rates, and the step of performing collision analysis based on the rate information to obtain the collision time information of the I-frame collision includes:
dividing each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
comparing each rate ratio with a preset threshold, and determining the measurement instants at which the rate ratio is greater than the preset threshold;
and obtaining the collision time information of the I-frame collision based on those measurement instants.
Optionally, the video transmission method is applied to a multi-access edge computing (MEC) platform, and the step of performing I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information includes:
sending the camera identification information and the collision time information to a video platform, so that the video platform performs I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
Optionally, the video transmission method further includes:
acquiring cell information of the single cell;
and sending the cell information to the video platform, so that the video platform determines the access positions of the multiple cameras based on the cell information.
Optionally, before the step of acquiring the rate information of the video stream sent by the single cell when it is detected that multiple cameras transmit video simultaneously in the single cell, the method further includes:
determining video services with the same backhaul address, and acquiring the camera tag information and cell information of the cameras performing those video services;
and determining the corresponding single cell based on the cell information, and determining, based on the camera tag information, whether multiple cameras transmit video simultaneously in the single cell.
Further, to achieve the above object, the present invention also provides a video transmission apparatus including:
an acquisition module, configured to acquire rate information of the video stream sent by a single cell when it is detected that multiple cameras transmit video simultaneously in the single cell;
an analysis module, configured to perform collision analysis based on the rate information if key-frame (I-frame) collision behavior is detected based on the rate information, to obtain the identification information and collision time information of the cameras in which the I-frame collision occurs;
and an adjustment module, configured to perform I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
Further, to achieve the above object, the present invention also provides a video transmission apparatus including: a memory, a processor and a video transmission program stored on the memory and executable on the processor, the video transmission program, when executed by the processor, implementing the steps of the video transmission method as described above.
Furthermore, to achieve the above object, the present invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the video transmission method as described above.
Furthermore, to achieve the above object, the present invention also provides a computer readable storage medium having stored thereon a video transmission program which, when executed by a processor, implements the steps of the video transmission method as described above.
The invention provides a video transmission method, apparatus, device and computer program product: when it is detected that multiple cameras transmit video simultaneously in a single cell, rate information of the video stream sent by the single cell is acquired; if I-frame collision behavior is detected based on the rate information, collision analysis is performed based on the rate information to obtain the camera identification information and collision time information of the I-frame collision; and I-frame timing adjustment is performed on the multiple cameras based on the camera identification information and the collision time information. In this manner, the video stream is analyzed to detect whether I-frame collision behavior exists among the multiple cameras, and if it exists, I-frame timing adjustment is performed on the cameras corresponding to the camera identification information to avoid I-frame collisions in video transmission, thereby reducing the peak rate of the video stream and in turn reducing the network transmission bandwidth requirement.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a video transmission method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a video stream according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of rate calculations according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a second embodiment of a video transmission method according to the present invention;
fig. 6 is a flowchart illustrating a video transmission method according to a third embodiment of the invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: when it is detected that multiple cameras transmit video simultaneously in a single cell, rate information of the video stream sent by the single cell is acquired; if I-frame collision behavior is detected based on the rate information, collision analysis is performed based on the rate information to obtain the camera identification information and collision time information of the I-frame collision; and I-frame timing adjustment is performed on the multiple cameras based on the camera identification information and the collision time information. In this manner, the video stream is analyzed to detect whether I-frame collision behavior exists among the multiple cameras, and if it exists, I-frame timing adjustment is performed on the cameras corresponding to the camera identification information to avoid I-frame collisions in video transmission, thereby reducing the peak rate of the video stream and in turn reducing the network transmission bandwidth requirement.
It should be noted that reducing the network transmission bandwidth requirement lowers an operator's network construction and optimization costs, increases the video service carrying capacity per unit of wireless network bandwidth, reduces the dedicated-line bandwidth a customer needs to purchase, improves customer experience, and promotes the healthier development of industry video services.
In addition, it should be noted that, after I-frame collision avoidance is performed on the multiple cameras within each cell, I-frame collisions between cells are also avoided, since the video backhaul addresses of different cells are usually different. On this basis, even when one base station contains several cells, I-frame collision avoidance can be performed for the base station as a whole.
The technical terms related to the embodiment of the invention are as follows:
video stream, consisting of a series of encoded frames, the frame types of which include: i frame, P frame, B frame, wherein I frame is a key frame, i.e. a frame of picture is completely reserved. The P frame and the B frame are compressed based on the I frame, and only the picture difference data of the previous frame and the next frame exist. A group of pictures GOP of a video stream comprises an I frame and a plurality of P frames, wherein the data quantity of the I frame and the P frame possibly differs by tens of times, so that the I frame is transmitted by a plurality of cameras simultaneously, namely the I frame collision can cause the superposition of the transmission bandwidth requirements.
An I-frame, also called an intra-coded picture (ICP), is a key frame: an independent frame carrying all of its own information, which can be decoded without reference to other pictures and can be simply understood as a static picture. The first frame in a video sequence is always an I-frame, and each group of pictures (GOP) of a video stream starts with an I-frame and extends until the next I-frame.
A P-frame, also called a predictive-coded picture (PCP), is a picture generated by motion-compensated prediction with reference to the nearest preceding I-picture or P-picture.
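To make the bandwidth superposition concrete, the following minimal sketch uses assumed frame sizes and camera counts that are not taken from the patent; it only illustrates how a cell's mean rate and its instantaneous rate diverge when the I-frames of all cameras fall into the same measurement period.

```python
# Illustrative sketch (assumed numbers): why aligned I-frames inflate the
# instantaneous bandwidth demand of a cell while the mean rate stays modest.

I_FRAME_BYTES = 60_000      # assumed I-frame size
P_FRAME_BYTES = 3_000       # assumed P-frame size (tens of times smaller)
GOP_FRAMES = 50             # one I-frame followed by 49 P-frames
FRAME_INTERVAL_S = 0.02     # 50 frames/second
NUM_CAMERAS = 16            # assumed number of cameras sharing one cell

# Average per-camera rate over a full GOP (bytes/second).
gop_bytes = I_FRAME_BYTES + (GOP_FRAMES - 1) * P_FRAME_BYTES
mean_rate_per_camera = gop_bytes / (GOP_FRAMES * FRAME_INTERVAL_S)

# Cell-level mean rate vs. the instantaneous rate in the 20 ms slot where
# every camera happens to send its I-frame at the same time (a collision).
cell_mean_rate = NUM_CAMERAS * mean_rate_per_camera
collision_peak_rate = NUM_CAMERAS * I_FRAME_BYTES / FRAME_INTERVAL_S

print(f"cell mean rate:      {cell_mean_rate / 1e6:.1f} MB/s")
print(f"collision peak rate: {collision_peak_rate / 1e6:.1f} MB/s")
```

With these assumed numbers the colliding slot demands roughly an order of magnitude more bandwidth than the cell's average, which is the peak superposition the description refers to.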
MEC (multi-access edge computing) provides connectivity and computing services at the edge of the 5G network, bringing ultra-low latency, ultra-large bandwidth and very high security, and making it feasible to run industry video services over a mobile network. MEC is an architecture based on 5G evolution that deeply integrates the mobile access network with Internet services, sinking computing and processing capabilities to the edge closest to where the service is performed.
A cell, also called a cellular cell, refers to the area covered by a base station, or by part of a base station (a sector antenna), in a cellular mobile communication system, within which a mobile terminal can reliably communicate with the base station over a radio channel.
DPI (deep packet inspection) adds application protocol identification, packet content inspection and deep decoding of application-layer data on top of conventional IP packet inspection (inspection and analysis of the packet elements carried between OSI layers L2 and L4). By capturing the raw packets of network traffic, DPI can apply three broad classes of detection: feature-value detection based on application data, identification detection based on application-layer protocols, and detection based on behavior patterns. Depending on the detection method, the data carried in the communication packets is unpacked and analyzed one by one, digging out the fine-grained data changes hidden in the macroscopic traffic.
The IMSI (international mobile subscriber identity) is used to distinguish different subscribers in a cellular network; it is never duplicated across cellular networks and is therefore unique.
In the existing related scheme, I-frame avoidance is realized by a mechanism on the camera side: when a camera is powered on, or when the video monitoring platform pulls a video stream from a camera, the camera randomly generates the generation time of its I-frames, which avoids all cameras transmitting I-frames at exactly the same moment. However, for industrial production and similar scenarios, the uplink bandwidth remains a bottleneck compared with a fixed network when multiple cameras are densely deployed, and this method cannot adjust the I-frame timing of a camera that exhibits I-frame collision behavior during video transmission.
In the prior art, avoidance of video I-frame collisions relies only on random access at the terminal side to reduce their probability, and several problems remain in a 5G network. The main problems involve two aspects: first, I-frame collision is a high-frequency problem; second, I-frame collisions greatly increase the peak transmission bandwidth of the network, that is, once the number of cameras reaches a certain level the average bandwidth is basically stable, but I-frame collisions cause the peak bandwidth to grow linearly.
Referring to fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal in the embodiment of the present invention is a video transmission device, and the video transmission device may be a terminal device having a processing function, such as a PC (personal computer), a microcomputer, a notebook computer, and a server.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU (Central Processing Unit), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a video transmission program.
In the terminal shown in fig. 1, the processor 1001 may be configured to call a video transmission program stored in the memory 1005 and perform the following operations:
when it is detected that multiple cameras transmit video simultaneously in a single cell, acquiring rate information of the video stream sent by the single cell;
if it is detected, based on the rate information, that key-frame (I-frame) collision behavior exists, performing collision analysis based on the rate information to obtain the identification information and collision time information of the cameras in which the I-frame collision occurs;
and performing I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
Further, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond one to one to the measurement rates, and the processor 1001 may be configured to call the video transmission program stored in the memory 1005 and further perform the following operations:
dividing each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
if at least one of the rate ratios is greater than a preset threshold, determining that I-frame collision behavior exists;
and if the rate ratios are all less than or equal to the preset threshold, determining that no I-frame collision behavior exists.
Further, the rate information includes a peak rate and a mean rate within a statistical period, and the processor 1001 may be configured to call the video transmission program stored in the memory 1005 and further perform the following operations:
determining the instant at which the peak rate occurs, and acquiring the bandwidth rates of the multiple cameras at that instant;
comparing each bandwidth rate with the mean rate, determining the cameras whose bandwidth rate is greater than the mean rate, and acquiring the camera identification information of those cameras, which are the cameras in whose video transmission the I-frame collision occurs.
Further, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond one to one to the measurement rates, and the processor 1001 may be configured to call the video transmission program stored in the memory 1005 and further perform the following operations:
dividing each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
comparing each rate ratio with a preset threshold, and determining the measurement instants at which the rate ratio is greater than the preset threshold;
and obtaining the collision time information of the I-frame collision based on those measurement instants.
Further, the processor 1001 may be configured to invoke a video transmission program stored in the memory 1005, and further perform the following operations:
and sending the camera identification information and the collision time information to a video platform, so that the video platform performs I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
Further, the processor 1001 may be configured to invoke a video transmission program stored in the memory 1005, and further perform the following operations:
acquiring cell information of the single cell;
and sending the cell information to a video platform, so that the video platform determines the access positions of the multiple cameras based on the cell information.
Further, the processor 1001 may be configured to invoke a video transmission program stored in the memory 1005, and further perform the following operations:
determining video services with the same backhaul address, and acquiring the camera tag information and cell information of the cameras performing those video services;
and determining the corresponding single cell based on the cell information, and determining, based on the camera tag information, whether multiple cameras transmit video simultaneously in the single cell.
Based on the hardware structure, various embodiments of the video transmission method of the present invention are proposed.
The invention provides a video transmission method.
Referring to fig. 2, fig. 2 is a flowchart illustrating a video transmission method according to a first embodiment of the present invention.
In this embodiment, the video transmission method includes:
step S10, when detecting that a plurality of paths of cameras transmit videos simultaneously in a single cell, acquiring rate information of a video stream sent by the single cell;
in the present embodiment, the video transmission method can be applied to a video transmission apparatus, which can be a terminal apparatus having a processing function, such as a PC (personal computer), a microcomputer, a notebook computer, a server, or the like. The video transmission method can also be applied to an MEC (Multi-access Edge Computing) platform, and the MEC platform can be a 5G Edge Computing platform so as to perform collision detection and avoidance on an I frame in a video return process of a camera under a 5G network scene through the 5G Edge Computing platform. It is also applicable to a video transmission system constituted by the MEC platform and other terminal devices (e.g., a video platform and a camera, etc.), which is subordinate to the above-described video transmission device of fig. 1. In this embodiment, an MEC platform is taken as an example of an execution subject.
In an embodiment, before the step S10, the video transmission method further includes:
acquiring the transmission traffic of the video platform, and analyzing the transmission traffic through deep packet inspection (DPI) to obtain feature data of the transmission traffic, so that I-frame collision detection and I-frame collision avoidance can subsequently be performed based on the feature data. The video platform is used for displaying, storing and analyzing videos, among other functions.
The format of the transmission traffic may be preset; specifically, the transmission traffic may be obtained through MEC platform traffic mirroring or video subscription. The feature data may include at least one of: subscriber identity, service identity, network type, access location, traffic volume, duration, rate, etc. The subscriber identity includes at least one of: number, IMSI (international mobile subscriber identity), client IP, etc. The service identity includes at least one of: URL (uniform resource locator), destination IP, destination port number, etc. The network type includes at least the radio access technology (RAT). The access location includes at least one of: TAC (tracking area code), CI (cell identity), etc. The traffic volume includes at least one of: per-flow byte count, per-application accumulated byte count, per-rate accumulated byte count, etc. The duration includes at least one of: flow start time, flow end time, flow duration. The rate includes at least one of: average uplink, average downlink, maximum uplink, maximum downlink, etc.
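As an illustration of the feature data enumerated above, the following sketch defines a hypothetical per-flow record; the class and field names are assumptions chosen for illustration, since the description only lists the categories of data extracted by DPI.

```python
# Minimal sketch of a per-flow feature record produced by DPI analysis.
from dataclasses import dataclass

@dataclass
class FlowFeatures:
    # subscriber identity
    msisdn: str            # number
    imsi: str
    client_ip: str
    # service identity
    url: str
    dest_ip: str
    dest_port: int
    # network type and access location
    rat: str               # radio access technology
    tac: str               # tracking area code
    ci: str                # cell identity
    # traffic volume, duration and rate
    flow_bytes: int
    flow_start: float      # flow start time (epoch seconds)
    flow_end: float        # flow end time
    avg_uplink_bps: float
    max_uplink_bps: float
```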
In this embodiment, when it is detected that multiple cameras transmit video simultaneously in a single cell, the rate information of the video stream sent by the single cell is acquired. The single cell is an individual cellular cell; when multiple cameras within one cell transmit video wirelessly at the same time, an I-frame collision may occur.
It should be noted that the video stream sent by the single cell is the superposition of the individual code streams of the multiple cameras. Specifically, referring to fig. 3, which is a schematic view of a video stream according to an embodiment of the present invention, the number of cameras is N, and superimposing the N camera code streams yields the peak superposition of the camera code streams in the cell; the video stream sent by the single cell is then as shown in fig. 3, and its peak rate exceeds the uplink peak of the cell, that is, an I-frame collision occurs.
The rate information includes a mean rate, measurement rates, a peak rate, and so on. Specifically, the video stream may be split into several statistical periods, where one statistical period contains several measurement periods; one statistical period may be set to 1 second and one measurement period to 20 milliseconds, in which case one statistical period contains 50 measurement periods. Of course, the measurement period and the statistical period may be set to other values according to actual needs, for example a measurement period of 20 milliseconds or less, to ensure that the I-frames of mainstream video transmission scenarios of 50 frames/second or less can be accurately identified in each statistical period. One measurement period corresponds to one measurement rate, and the measurement rate is calculated as follows:
measurement rate = number of bytes received in the measurement period / duration of the measurement period;
Specifically, referring to fig. 4, which is a schematic diagram of rate calculation according to an embodiment of the present invention, calculating the instantaneous rate by counting the number of bytes in a measurement period yields the measurement rate, where a data packet may contain many bytes, for example 1400 bytes.
In one embodiment, the mean rate may be the average rate over a statistical period, i.e., the mean of the measurement rates within the statistical period. The peak rate may be the peak rate within a statistical period, i.e., the maximum measurement rate within the statistical period.
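A minimal sketch of how the measurement rates, mean rate and peak rate of one statistical period could be computed from per-period byte counts, assuming the 1-second statistical period and 20-millisecond measurement period mentioned above; the function and constant names are illustrative, not part of the patent.

```python
# Sketch: rate statistics for one statistical period from per-period byte counts.
MEASUREMENT_PERIOD_S = 0.02   # 20 ms measurement period
PERIODS_PER_STAT = 50         # 50 x 20 ms = 1 s statistical period

def rate_info(bytes_per_period: list[int]) -> dict:
    """Compute measurement rates, mean rate and peak rate (bytes/second)."""
    assert len(bytes_per_period) == PERIODS_PER_STAT
    measurement_rates = [b / MEASUREMENT_PERIOD_S for b in bytes_per_period]
    return {
        "measurement_rates": measurement_rates,
        "mean_rate": sum(measurement_rates) / len(measurement_rates),
        "peak_rate": max(measurement_rates),
    }
```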
Further, before the step S10, the video transmission method further includes:
step A40, determining a video service of the same return address, and acquiring camera marking information and cell information for performing the video service;
firstly, determining the video service of the same return address, and acquiring the label information of a camera and cell information for performing the video service. Specifically, a return address sent by a video platform is obtained, video services needing I-frame collision detection are determined based on the return address, then video services with the same return address are respectively determined from the video services needing I-frame collision detection, and camera identification information and cell information for performing the video services are obtained.
Wherein, the return address at least comprises one of the following: a destination IP address or URL, etc. The camera tag information includes at least one of: terminal IP, number, IMSI of the camera, etc. The camera tag information may include 1 tag information or a plurality of tag information, and it can be understood that the camera tag information includes one tag information indicating that there is no multi-path camera for transmitting video simultaneously in a single cell, and the camera tag information includes a plurality of tag information indicating that there is multi-path camera for transmitting video simultaneously in a single cell. The cell information includes at least one of: TAC and CI.
It should be noted that the camera performing the video service is the currently started camera and is transmitting the video back to the camera transmitting the back address. That is to say, compared with the detection of all cameras in a cell, the detection of multiple cameras in the cell that perform the same video service further has the advantages that the probability of the I-frame collision occurring at the same return address is higher, and the probability of the I-frame collision occurring at different return addresses is lower, so that the I-frame collision behavior can be detected more accurately by the multiple cameras that are accurate to the same video service.
And A50, determining a corresponding single cell based on the cell information, and determining whether multiple cameras simultaneously transmit videos in the single cell based on the camera labeling information.
And then, determining a corresponding single cell based on the cell information, and determining whether multiple paths of cameras exist under the single cell to simultaneously transmit videos based on the camera label information. Specifically, a corresponding single cell is determined based on cell information, so as to count whether multiple cameras simultaneously transmit videos according to cell convergence, that is, the number of the cameras is determined based on the camera tag information, and whether multiple paths of cameras simultaneously transmit videos is determined based on the number of the cameras.
In an embodiment, the step of determining, based on the camera tag information, whether multiple cameras transmit video simultaneously in the single cell includes:
determining the number of cameras based on the camera tag information; if the number of cameras is greater than a preset number threshold, determining that multiple cameras transmit video simultaneously in the single cell; and if the number of cameras is less than or equal to the preset number threshold, determining that multiple cameras do not transmit video simultaneously in the single cell. The preset number threshold may be set according to actual needs, for example 1 or 2.
In another embodiment, the step of determining, based on the camera tag information, whether multiple cameras transmit video simultaneously in the single cell includes:
determining the number of cameras based on the camera tag information; if the number of cameras is greater than 1, determining that multiple cameras transmit video simultaneously in the single cell; and if the number of cameras is 1 or 0, determining that multiple cameras do not transmit video simultaneously in the single cell.
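The grouping and counting described above can be sketched as follows, assuming each flow record carries the backhaul address, the cell information (TAC/CI) and a camera tag such as the terminal IP; the field and function names are illustrative, not part of the patent.

```python
# Sketch: group active video flows by (backhaul address, cell) and flag the
# cells in which more than `camera_threshold` cameras transmit simultaneously.
from collections import defaultdict

def cells_with_concurrent_cameras(flows: list[dict], camera_threshold: int = 1) -> dict:
    cameras_per_cell = defaultdict(set)
    for flow in flows:
        key = (flow["return_address"], flow["tac"], flow["ci"])
        cameras_per_cell[key].add(flow["camera_ip"])
    return {key: cams for key, cams in cameras_per_cell.items()
            if len(cams) > camera_threshold}
```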
In some embodiments, when it is detected that multiple cameras transmit video simultaneously in a single cell, the cell information and the camera tag information are recorded, so that the cell information and the camera tag information (which may also serve as camera identification information) can be obtained directly when I-frame collision detection and avoidance are performed later. For example, a cell may be recorded by its CI number and a camera by its terminal IP address.
Step S20, if it is detected, based on the rate information, that key-frame (I-frame) collision behavior exists, performing collision analysis based on the rate information to obtain the identification information and collision time information of the cameras in which the I-frame collision occurs;
In this embodiment, if it is detected based on the rate information that key-frame (I-frame) collision behavior exists, collision analysis is performed based on the rate information to obtain the camera identification information and collision time information of the I-frame collision. Specifically, the rate information includes a peak rate, a mean rate and measurement rates; the collision time information of the I-frame collision is determined based on the measurement rates and the mean rate, and the camera identification information of the I-frame collision is determined based on the peak rate and the measurement rates. The mean rate is the average of the measurement rates, and the peak rate is the maximum measurement rate.
It should be noted that the measurement rate is calculated as follows:
measurement rate = number of bytes received in the measurement period / duration of the measurement period;
Specifically, referring to fig. 4, calculating the instantaneous rate by counting the number of bytes in a measurement period yields the measurement rate, where a data packet may contain many bytes, for example 1400 bytes.
The camera identification information includes at least one of: the camera's terminal IP, number, IMSI, etc. The collision time information includes the measurement instants at which the I-frame collision occurs.
In an embodiment, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, and the measurement periods correspond one to one to the measurement rates; after step S10, the video transmission method further includes:
Step A60, dividing each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
Step A70, if at least one of the rate ratios is greater than a preset threshold, determining that I-frame collision behavior exists;
Step A80, if the rate ratios are all less than or equal to the preset threshold, determining that no I-frame collision behavior exists.
In this embodiment, each measurement rate is divided by the mean rate to obtain a set of rate ratios, and the rate ratios are then compared with the preset threshold: if at least one rate ratio is greater than the preset threshold, it is determined that I-frame collision behavior exists; if the rate ratios are all less than or equal to the preset threshold, it is determined that no I-frame collision behavior exists. The preset threshold may be set according to actual needs, for example 1.5, and is not limited here.
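A minimal sketch of this detection rule; the function name is illustrative and the example threshold of 1.5 is used as the default.

```python
# Sketch: flag an I-frame collision if any measurement rate exceeds the mean
# rate by more than the preset threshold ratio.
def has_i_frame_collision(measurement_rates: list[float],
                          mean_rate: float,
                          threshold: float = 1.5) -> bool:
    """Return True if at least one rate ratio exceeds the threshold."""
    return any(rate / mean_rate > threshold for rate in measurement_rates)
```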
It should be noted that the video stream may be split into several statistical periods, where one statistical period contains several measurement periods; one statistical period may be set to 1 second and one measurement period to 20 milliseconds, in which case one statistical period contains 50 measurement periods. Of course, the measurement period and the statistical period may be set to other values according to actual needs, for example a measurement period of 20 milliseconds or less, to ensure that the I-frames of mainstream video transmission scenarios of 50 frames/second or less can be accurately identified in each statistical period. One measurement period corresponds to one measurement rate, and the measurement rate is calculated as follows:
measurement rate = number of bytes received in the measurement period / duration of the measurement period;
Specifically, referring to fig. 4, calculating the instantaneous rate by counting the number of bytes in a measurement period yields the measurement rate, where a data packet may contain many bytes, for example 1400 bytes.
In one embodiment, the mean rate may be the average rate over a statistical period, i.e., the mean of the measurement rates within the statistical period. The peak rate may be the peak rate within a statistical period, i.e., the maximum measurement rate within the statistical period.
In a specific embodiment, after step A70, the video transmission method further includes:
determining the measurement instants at which I-frame collision behavior occurs, and obtaining the collision time information of the I-frame collision based on those measurement instants.
In another embodiment, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, and the measurement periods correspond one to one to the measurement rates; after step S10, the video transmission method further includes:
comparing each of the measurement rates with the mean rate to obtain comparison results, and determining whether I-frame collision behavior exists based on the comparison results. It can be understood that comparing the measurement rates with the mean rate may use the comparison method described in the previous embodiment or a comparison based on another calculation; the principle is essentially the same and is not described in detail here.
Step S30, performing I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
In this embodiment, I-frame timing adjustment is performed on the multiple cameras based on the camera identification information and the collision time information. Specifically, the cameras among the multiple cameras that need I-frame timing adjustment (i.e. the cameras in which an I-frame collision occurs) are determined based on the camera identification information, and I-frame timing adjustment is then performed on those cameras based on the collision time information, i.e. the sending times of their I-frames are adjusted.
It should be noted that the adjustment unit for I-frame timing adjustment of the multiple cameras is in milliseconds; specifically, the adjustment unit may be set to 20 milliseconds, that is, the minimum adjustment period of the I-frame timing adjustment is 20 milliseconds.
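The description leaves the concrete re-timing policy to the implementation; the sketch below assumes the simplest possible policy of spreading the colliding cameras' I-frame send times evenly across a GOP in multiples of the 20-millisecond adjustment unit. The function name and the even-spread rule are assumptions, not the patent's prescribed method.

```python
# Sketch: assign each colliding camera a distinct I-frame time offset (ms),
# quantized to the 20 ms minimum adjustment unit mentioned above.
ADJUST_STEP_MS = 20

def i_frame_offsets(colliding_cameras: list[str], gop_ms: int = 1000) -> dict:
    """Spread I-frame send times of colliding cameras evenly across one GOP."""
    step = max(ADJUST_STEP_MS, gop_ms // max(len(colliding_cameras), 1))
    return {cam: (i * step) % gop_ms for i, cam in enumerate(colliding_cameras)}
```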
In an embodiment, the video transmission method is applied to a multi-access edge computing (MEC) platform, and step S30 includes:
Step A31, sending the camera identification information and the collision time information to a video platform, so that the video platform performs I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
In this embodiment, the camera identification information and the collision time information are sent to the video platform, so that the video platform performs I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information. Specifically, the MEC platform outputs the identification information of the collided cameras and the collision time information to the video platform.
The camera identification information includes at least one of: the camera's terminal IP, number, IMSI, port information, etc. The collision time information includes the measurement instants at which the I-frame collision occurs.
It should be noted that the step of the video platform performing I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information includes:
the video platform performs I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information. Specifically, the cameras among the multiple cameras that need I-frame timing adjustment (i.e. the cameras in which an I-frame collision occurs) are determined based on the camera identification information, and I-frame timing adjustment is then performed on those cameras based on the collision time information, i.e. the sending times of their I-frames are adjusted.
Further, the video transmission method further includes:
step A90, obtaining the cell information of the single cell;
in this embodiment, the cell information of the single cell is acquired. Specifically, based on the recorded cell information, the cell information of the single cell is acquired from the database in which the cell information is recorded.
Wherein the cell information at least comprises one of the following: TAC, CI, etc.
Step A100, sending the cell information to a video platform, so that the video platform determines the access positions of the multiple paths of cameras based on the cell information.
In this embodiment, the cell information is sent to the video platform, so that the video platform determines the access positions of the multiple paths of cameras based on the cell information. Specifically, the MEC platform outputs the collided camera identification information, the collided collision time information and the cell information to the video platform, so that the video platform determines the access position of the multiple paths of cameras, and the I-frame time sequence adjustment is performed on the multiple paths of cameras based on the camera identification information and the collision time information.
It can be understood that the cell information is sent to the video platform, so that the video platform can be quickly positioned to multiple paths of cameras needing I frame time sequence adjustment.
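As an illustration of the information passed from the MEC platform to the video platform (camera identification, collision times and cell information), the following sketch builds a hypothetical notification payload; the field names and the JSON format are assumptions, since the description does not specify a message format.

```python
# Sketch: assemble the MEC-to-video-platform collision report.
import json

def build_collision_report(camera_ids: list[str], collision_times: list[float],
                           tac: str, ci: str) -> str:
    return json.dumps({
        "cameras": camera_ids,               # terminal IP / number / IMSI / port
        "collision_times": collision_times,  # measurement instants (seconds)
        "cell": {"tac": tac, "ci": ci},      # access location of the cameras
    })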
The embodiment of the invention provides a video transmission method: when it is detected that multiple cameras transmit video simultaneously in a single cell, rate information of the video stream sent by the single cell is acquired; if I-frame collision behavior is detected based on the rate information, collision analysis is performed based on the rate information to obtain the camera identification information and collision time information of the I-frame collision; and I-frame timing adjustment is performed on the multiple cameras based on the camera identification information and the collision time information. In this manner, the video stream is analyzed to detect whether I-frame collision behavior exists among the multiple cameras, and if it exists, I-frame timing adjustment is performed on the cameras corresponding to the camera identification information to avoid I-frame collisions in video transmission, thereby reducing the peak rate of the video stream and in turn reducing the network transmission bandwidth requirement.
Further, based on the above first embodiment, a second embodiment of the video transmission method of the present invention is proposed.
Referring to fig. 5, fig. 5 is a flowchart illustrating a video transmission method according to a second embodiment of the present invention.
In this embodiment, the rate information includes a peak rate and a mean rate within a statistical period, and, in step S20, performing collision analysis based on the rate information to obtain the camera identification information of the I-frame collision includes:
Step S21, determining the instant at which the peak rate occurs, and acquiring the bandwidth rates of the multiple cameras at that instant;
In this embodiment, the instant at which the peak rate occurs is determined, and the bandwidth rates of the multiple cameras at that instant are acquired. The bandwidth rate is the original rate of each individual camera, i.e. the rate before the camera code streams are superimposed.
Step S22, comparing each bandwidth rate with the mean rate, determining the cameras whose bandwidth rate is greater than the mean rate, and acquiring the camera identification information of those cameras, which are the cameras in whose video transmission the I-frame collision occurs.
In this embodiment, the bandwidth rates are compared with the mean rate, the cameras whose bandwidth rate is greater than the mean rate are determined, and the camera identification information of those cameras is acquired; these are the cameras in whose video transmission the I-frame collision occurs. Specifically, based on the rate behavior of the multiple cameras, a camera whose individual bandwidth rate at the instant the peak rate occurs is greater than the statistical mean over the full set of cameras (the per-camera mean, i.e. the mean rate) is identified as a collided camera, and its camera identification information is obtained so that it can be output to the video platform.
The camera identification information includes at least one of: the camera's terminal IP, number, IMSI, port information, etc.
In this embodiment, the instant at which the peak rate occurs is determined and the bandwidth rates of the multiple cameras at that instant are acquired; each bandwidth rate is compared with the mean rate, the cameras whose bandwidth rate is greater than the mean rate are determined, and their camera identification information is acquired; these are the cameras in whose video transmission the I-frame collision occurs. In this manner, the instant at which the peak rate occurs is determined first to locate the moment of the I-frame collision, and the bandwidth rates of the multiple cameras are then compared with the mean rate to identify the cameras in which the I-frame collision occurs, so the collided cameras are identified accurately, which improves the accuracy of the subsequent I-frame timing adjustment and of the I-frame collision avoidance.
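A minimal sketch of steps S21 and S22, assuming the per-camera bandwidth rates at the peak instant are already available as a mapping from camera identifier to rate; the names are illustrative.

```python
# Sketch: at the instant the cell peak rate occurs, flag every camera whose
# own bandwidth rate exceeds the per-camera mean rate as a collided camera.
def colliding_cameras(per_camera_rates_at_peak: dict[str, float],
                      mean_rate_per_camera: float) -> list[str]:
    """Return identifiers of cameras that sent an I-frame at the peak instant."""
    return [cam for cam, rate in per_camera_rates_at_peak.items()
            if rate > mean_rate_per_camera]
```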
Further, based on the first embodiment described above, a third embodiment of the video transmission method of the present invention is proposed.
Referring to fig. 6, fig. 6 is a flowchart illustrating a video transmission method according to a third embodiment of the present invention.
In this embodiment, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond one to one to the measurement rates, and, in step S20, performing collision analysis based on the rate information to obtain the collision time information of the I-frame collision includes:
Step S23, dividing each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
Step S24, comparing each rate ratio with a preset threshold, and determining the measurement instants at which the rate ratio is greater than the preset threshold;
Step S25, obtaining the collision time information of the I-frame collision based on those measurement instants.
In this embodiment, each measurement rate is first divided by the mean rate to obtain a set of rate ratios; the rate ratios are then compared with the preset threshold to determine the measurement instants at which the rate ratio is greater than the preset threshold; finally, the collision time information of the I-frame collision is obtained based on those measurement instants. The preset threshold may be set according to actual needs, for example 1.5, and is not limited here. The collision time information may contain one or more instants, i.e. the time of a single I-frame collision or the times of several I-frame collisions.
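A minimal sketch of steps S23 to S25, assuming the 20-millisecond measurement period and the example threshold of 1.5; the returned values are the start times of the measurement periods in which the collision occurred, expressed relative to the statistical period. The function and parameter names are illustrative.

```python
# Sketch: collect the measurement instants whose rate ratio exceeds the
# preset threshold; these instants form the collision time information.
def collision_times(measurement_rates: list[float], mean_rate: float,
                    period_s: float = 0.02, threshold: float = 1.5) -> list[float]:
    """Return start times (s) of the measurement periods with an I-frame collision."""
    return [i * period_s for i, rate in enumerate(measurement_rates)
            if rate / mean_rate > threshold]
```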
It should be noted that the video stream may be split into several statistical periods, where one statistical period contains several measurement periods; one statistical period may be set to 1 second and one measurement period to 20 milliseconds, in which case one statistical period contains 50 measurement periods. Of course, the measurement period and the statistical period may be set to other values according to actual needs, for example a measurement period of 20 milliseconds or less, to ensure that the I-frames of mainstream video transmission scenarios of 50 frames/second or less can be accurately identified in each statistical period. One measurement period corresponds to one measurement rate, and the measurement rate is calculated as follows:
measurement rate = number of bytes received in the measurement period / duration of the measurement period;
Specifically, referring to fig. 4, calculating the instantaneous rate by counting the number of bytes in a measurement period yields the measurement rate, where a data packet may contain many bytes, for example 1400 bytes.
In one embodiment, the mean rate may be the average rate over a statistical period, i.e., the mean of the measurement rates within the statistical period. The peak rate may be the peak rate within a statistical period, i.e., the maximum measurement rate within the statistical period.
In this embodiment, each measurement rate is divided by the mean rate to obtain a set of rate ratios; the rate ratios are compared with the preset threshold, and the measurement instants at which the rate ratio is greater than the preset threshold are determined; the collision time information of the I-frame collision is obtained based on those measurement instants. In this manner, the measurement rates are compared with the mean rate to determine the moments at which I-frame collisions occur, which improves the accuracy of the subsequent I-frame timing adjustment and of the I-frame collision avoidance.
The invention also provides a video transmission device.
In this embodiment, the video transmission apparatus includes:
an acquisition module, configured to acquire rate information of the video stream sent by a single cell when it is detected that multiple cameras transmit video simultaneously in the single cell;
an analysis module, configured to perform collision analysis based on the rate information if key-frame (I-frame) collision behavior is detected based on the rate information, to obtain the identification information and collision time information of the cameras in which the I-frame collision occurs;
and an adjustment module, configured to perform I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information.
Further, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond one to one to the measurement rates, and the video transmission apparatus further includes:
a division module, configured to divide each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
a collision determination module, configured to determine that I-frame collision behavior exists if at least one of the rate ratios is greater than a preset threshold;
the collision determination module being further configured to determine that no I-frame collision behavior exists if the rate ratios are all less than or equal to the preset threshold.
Further, the rate information includes a peak rate and a mean rate within a statistical period, and the analysis module includes:
a rate acquisition unit, configured to determine the instant at which the peak rate occurs and acquire the bandwidth rates of the multiple cameras at that instant;
and a rate comparison unit, configured to compare each bandwidth rate with the mean rate, determine the cameras whose bandwidth rate is greater than the mean rate, and acquire the camera identification information of those cameras, which are the cameras in whose video transmission the I-frame collision occurs.
Further, the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond one to one to the measurement rates, and the analysis module includes:
a division unit, configured to divide each of the measurement rates by the mean rate to obtain a plurality of rate ratios;
a time determination unit, configured to compare each rate ratio with a preset threshold and determine the measurement instants at which the rate ratio is greater than the preset threshold;
and a time acquisition unit, configured to obtain the collision time information of the I-frame collision based on those measurement instants.
Further, the video transmission method is applied to an edge computing (MEC) platform, and the adjusting module includes:
the information sending unit is used for sending the camera identification information and the collision time information to a video platform, so that the video platform performs I-frame time sequence adjustment on the multiple cameras based on the camera identification information and the collision time information.
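The embodiment does not prescribe how the video platform spreads the I frames once it receives the notification; one plausible policy, shown below as a hedged sketch, is to stagger the I-frame start offsets of the colliding cameras evenly across one GOP interval (the GOP length of 2.0 s and the function name are assumptions):

```python
def stagger_i_frame_offsets(colliding_cameras, gop_seconds=2.0):
    """Assign each colliding camera an evenly spaced I-frame start offset
    within one GOP interval so that their key frames no longer coincide.

    colliding_cameras: list of camera identifiers reported by the detection
    service. gop_seconds: illustrative GOP length, not a value from the patent.
    """
    n = len(colliding_cameras)
    # Offset for the i-th camera is i * GOP / n seconds, rounded to milliseconds.
    return {cam_id: round(index * gop_seconds / n, 3)
            for index, cam_id in enumerate(colliding_cameras)}
```

Any policy that separates the I-frame instants of the reported cameras by more than the I-frame transmission time would serve the same purpose; the even stagger is simply the easiest to state.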
Further, the adjustment module includes:
the information obtaining unit is used for obtaining the cell information of the single cell;
and the information sending unit is further used for sending the cell information to the video platform, so that the video platform determines the access positions of the multiple cameras based on the cell information.
Further, the video transmission apparatus further includes:
the information acquisition module is used for determining the video services that share the same return address, and acquiring the camera label information and the cell information of the cameras performing those video services;
and the camera determining module is used for determining the corresponding single cell based on the cell information, and determining, based on the camera label information, whether multiple cameras transmit video simultaneously in the single cell.
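For illustration, a minimal sketch of this grouping step, assuming the active streams are described by records carrying a camera identifier, a cell identifier and a return address (the field names are assumptions):

```python
from collections import defaultdict

def cells_with_concurrent_cameras(camera_records):
    """Group active streams by (return_address, cell_id) and report the groups
    in which more than one camera is transmitting at the same time.

    camera_records: iterable of dicts with 'camera_id', 'cell_id' and
    'return_address' keys (field names are illustrative).
    """
    groups = defaultdict(list)
    for rec in camera_records:
        groups[(rec['return_address'], rec['cell_id'])].append(rec['camera_id'])
    # Only cells with at least two concurrent cameras need collision detection.
    return {key: cams for key, cams in groups.items() if len(cams) > 1}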
The functions of the modules in the video transmission apparatus correspond to the steps in the embodiments of the video transmission method, and their functions and implementation processes are not described in detail here.
The present invention also provides a computer-readable storage medium having stored thereon a video transmission program which, when executed by a processor, implements the steps of the video transmission method as described in any of the above embodiments.
The specific embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the video transmission method, and is not repeated herein.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the video transmission method according to any one of the embodiments above.
The specific embodiment of the computer program product of the present invention is substantially the same as the embodiments of the video transmission method, and is not described herein again.
In this solution, an edge computing (MEC) platform is deployed, and the video I-frame collision detection service, the video platform and the cameras are deployed based on the MEC platform. The video I-frame collision detection service is deployed on the core network side of the MEC platform, so that I-frame collision behavior is analyzed and identified using the traffic provided by the edge computing platform, and a call service is provided to the video platform through an Application Programming Interface (API).
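As a hedged sketch of that API interaction, assuming an HTTP/JSON interface (the endpoint path, payload fields and use of the requests library are assumptions; the embodiment only states that a call service is provided through an API):

```python
import requests

def notify_video_platform(api_base, cell_info, camera_ids, collision_times):
    """Push the collision analysis result to the video platform over HTTP.

    The endpoint path and payload fields below are illustrative; the patent
    only states that the MEC-side detection service exposes its result to the
    video platform through an API.
    """
    payload = {
        "cell": cell_info,
        "colliding_cameras": camera_ids,
        "collision_times": collision_times,
    }
    resp = requests.post(f"{api_base}/i-frame-collision/notify",
                         json=payload, timeout=5)
    resp.raise_for_status()
    return resp.status_code
```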
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or system that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, or the part thereof that contributes over the prior art, may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A video transmission method, characterized in that it comprises the steps of:
when detecting that multiple cameras simultaneously transmit videos in a single cell, acquiring rate information of a video stream sent by the single cell;
if it is detected, based on the rate information, that key-frame (I-frame) collision behavior exists, performing collision analysis based on the rate information to obtain identification information and collision time information of a camera in which the I-frame collision occurs;
and performing I-frame time sequence adjustment on the multiple paths of cameras based on the camera identification information and the collision time information.
2. The video transmission method according to claim 1, wherein the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond to the measurement rates one to one, and after the step of acquiring the rate information of the video stream transmitted by the single cell when detecting that multiple cameras transmit video simultaneously in the single cell, the method further includes:
dividing the plurality of measurement rates by the average rate to obtain a plurality of rate ratios;
if at least one of the rate ratios is larger than a preset threshold value, judging that I-frame collision behavior exists;
and if the rate ratios are all smaller than or equal to a preset threshold, judging that I-frame collision does not exist.
3. The video transmission method according to claim 1, wherein the rate information includes a peak rate and a mean rate in a statistical period, and the step of performing collision analysis based on the rate information to obtain camera identification information of the occurrence of the I-frame collision includes:
determining the generation time of the peak rate, and acquiring a plurality of bandwidth rates of the multiple cameras at the generation time;
comparing the bandwidth rates with the average rate respectively, determining a plurality of cameras with the bandwidth rates larger than the average rate, and acquiring the camera identification information of the cameras, wherein the cameras are cameras with I-frame collision in video transmission.
4. The video transmission method according to claim 1, wherein the rate information includes a mean rate and a plurality of measurement rates within a statistical period, the statistical period includes a plurality of measurement periods, the measurement periods correspond to the measurement rates one to one, and the step of performing collision analysis based on the rate information to obtain collision time information of the I-frame collision includes:
dividing the plurality of measurement rates by the average rate to obtain a plurality of rate ratios;
comparing the rate ratios with a preset threshold respectively, and determining a plurality of measuring moments at which the rate ratios are greater than the preset threshold;
and obtaining collision time information of the I frame collision based on the plurality of measuring moments.
5. The video transmission method according to claim 1, wherein the video transmission method is applied to an edge computing MEC platform, and the step of performing I-frame timing adjustment on the multiple cameras based on the camera identification information and the collision time information comprises:
and sending the camera identification information and the collision time information to a video platform so that the video platform can carry out I frame time sequence adjustment on the multiple paths of cameras based on the camera identification information and the collision time information.
6. The video transmission method according to claim 5, wherein the video transmission method further comprises:
acquiring cell information of the single cell;
and sending the cell information to a video platform so that the video platform can determine the access positions of the multiple paths of cameras based on the cell information.
7. The video transmission method according to any one of claims 1 to 6, wherein before the step of acquiring the rate information of the video stream transmitted by the single cell when detecting that multiple cameras simultaneously transmit video in the single cell, the method further comprises:
determining video services of the same return address, and acquiring camera label information and cell information for performing the video services;
and determining a corresponding single cell based on the cell information, and determining whether multiple cameras transmit video simultaneously in the single cell based on the camera label information.
8. A video transmission apparatus, characterized in that the video transmission apparatus comprises:
the acquisition module is used for acquiring the rate information of the video stream sent by the single cell when it is detected that multiple cameras transmit video simultaneously under the single cell;
the analysis module is used for performing collision analysis based on the rate information, when key-frame (I-frame) collision behavior is detected based on the rate information, to obtain the identification information and the collision time information of the camera in which the I-frame collision occurs;
and the adjusting module is used for performing I-frame time sequence adjustment on the multiple cameras based on the camera identification information and the collision time information.
9. A video transmission device, characterized in that the video transmission device comprises: memory, processor and a video transmission program stored on the memory and executable on the processor, the video transmission program when executed by the processor implementing the steps of the video transmission method according to any one of claims 1 to 7.
10. A computer program product, characterized in that it comprises a computer program which, when being executed by a processor, carries out the steps of the video transmission method according to any one of claims 1 to 7.
CN202110816052.6A 2021-07-19 2021-07-19 Video transmission method, device, equipment and computer program product Pending CN115643366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110816052.6A CN115643366A (en) 2021-07-19 2021-07-19 Video transmission method, device, equipment and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110816052.6A CN115643366A (en) 2021-07-19 2021-07-19 Video transmission method, device, equipment and computer program product

Publications (1)

Publication Number Publication Date
CN115643366A true CN115643366A (en) 2023-01-24

Family

ID=84939432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110816052.6A Pending CN115643366A (en) 2021-07-19 2021-07-19 Video transmission method, device, equipment and computer program product

Country Status (1)

Country Link
CN (1) CN115643366A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination