CN116016458B - Method and device for realizing audio and video interaction of webpage end based on webrtc

Info

Publication number: CN116016458B (granted patent)
Other publications: CN116016458A (application)
Application number: CN202310310836.0A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: audio, video, video set, interaction, webpage
Inventors: 刘校锋; 汪新朝; 秦梓林
Applicant and current assignee: Sichuan Shutong Information Technology Co., Ltd.
Legal status: Active (granted)

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to the technical field of audio and video transmission, and in particular to a method and device for realizing audio and video interaction at the webpage end based on webrtc. The method comprises the following steps: receiving an interaction instruction for audio and video, and determining the number of webpage ends participating in the interaction according to the interaction instruction; if the number of webpage ends is greater than or equal to a preset webpage end threshold value, starting an interaction server, receiving a client audio and video set generated by the starting point webpage end through webrtc, sending the client audio and video set to the channel where the interaction server is located for transmission, calculating the cache packet loss rate, and adjusting the transmission strategy of the starting point server for the next transmission according to the cache packet loss rate; if the number of webpage ends is less than the webpage end threshold value, determining that a direct transmission mode is adopted, receiving, in the direct transmission mode, a direct-transmission audio and video set generated by the starting point webpage end through webrtc, and performing a code rate fixing operation on the direct-transmission audio and video set to obtain a code rate audio and video set. The invention can improve the interaction quality of audio and video interaction.

Description

Method and device for realizing audio and video interaction of webpage end based on webrtc
Technical Field
The invention relates to the technical field of audio and video transmission, and in particular to a method and device for realizing audio and video interaction at the webpage end based on webrtc.
Background
Webrtc is a technology for realizing real-time communication directly at the webpage end without downloading software. Most audio and video communication software is currently developed on the basis of webrtc, and webrtc can realize point-to-point communication sessions between webpage ends, so the technology plays an extremely important role.
The currently common webrtc-based methods for realizing audio and video interaction at the webpage end mainly depend on an interaction server: because the webpage end has weak processing capability for audio and video (including encoding, decoding, uploading, downloading and the like), the audio and video interaction needs to be assisted by the interaction server.
Although realizing audio and video interaction under webrtc based on an interaction server is feasible, such schemes do not consider the actual processing capability of each webpage end for the audio and video, such as the cache packet loss rate and the encoding and compression processing performed by each webpage end, so the interaction quality of the audio and video interaction still needs to be improved.
Disclosure of Invention
The invention provides a method and device for realizing audio and video interaction at the webpage end based on webrtc, with the main aim of improving the interaction quality of audio and video interaction.
In order to achieve the above purpose, the method for realizing audio and video interaction at the webpage end based on webrtc provided by the invention comprises the following steps:
Receiving an interaction instruction of audio and video, and determining the number of webpage ends participating in interaction according to the interaction instruction, wherein each webpage end can log in webrtc, the webpage end generating the audio and video is called a starting webpage end, and the webpage end receiving the audio and video is called a finishing webpage end;
if the number of the webpage ends is greater than or equal to a preset webpage end threshold value, starting an interaction server, wherein the interaction server is used for transmitting audio and video information between the webpage ends;
receiving a client audio-video set generated by a starting point webpage end through webrtc, wherein the client audio-video set is to be transmitted to a final point webpage end, and performing compression processing on the client audio-video set to obtain a compressed audio-video set;
transmitting the compressed audio and video set to a channel where the interactive server is located until the compressed audio and video set is transmitted to a terminal webpage end, and calculating a cache packet loss rate of the compressed audio and video set;
adjusting a transmission strategy of next transmission of the starting point server according to the buffer packet loss rate until the audio/video interaction is completed;
if the number of the webpage ends is smaller than the webpage end threshold value, an interaction server is not started, and it is determined that the audio and video information between the webpage ends adopts a direct transmission mode;
in a direct transmission mode, receiving a direct transmission audio-video set generated by a starting point webpage end through webrtc, and executing code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set;
Directly transmitting the code rate audio and video set to the terminal webpage end to complete audio and video interaction;
wherein, when the compressed audio and video set is sent to the channel where the interaction server is located for transmission, the method further comprises the following steps:
obtaining average time consumption and average flow consumption value of adding redundant data packet, coding and compressing each time;
calculating to obtain the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value;
wherein the calculating the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value comprises the following steps:
and calculating the flow consumption rate value of each time of adding redundant data packets, encoding and compressing according to the average time consumption and the average flow consumption value, wherein the calculation method is as follows:
v_l = F_l / t_l
wherein v_l represents the flow consumption rate value of performing the operation of adding redundant data packets, encoding or compressing, t_l represents the average time taken to perform that operation, and F_l represents the average traffic consumption value when performing that operation;
averaging the flow consumption rate values of adding redundant data packets, encoding and compressing to obtain the flow consumption rate value of the compressed audio-video set;
The calculating the cache packet loss rate of the compressed audio and video set comprises the following steps:
calculating the cache packet loss rate according to the following formula:
P = (1/N) · Σ η(v) · f(τ)
wherein P represents the buffer packet loss rate when the compressed audio and video set is transmitted to the terminal webpage end for buffering, N represents the total packet number of the compressed audio and video set, η(v) represents the packet loss of each compressed audio and video set transmitted to the terminal webpage end given the flow consumption rate value v of the compressed audio and video set, f(τ) represents a random probability function, where a Gaussian probability function can be used, and τ represents the signal strength value between the interaction server and the terminal webpage end each time the compressed audio and video set is transmitted;
the adjusting the transmission strategy of the next transmission of the starting point server according to the buffer packet loss rate comprises the following steps:
and when the starting point server transmits next time, constructing a flow consumption rate value of the compressed audio and video set according to the following calculation:
v_s = v · (1 - δ · P)
wherein v_s is the flow consumption rate value provided for constructing the compressed audio and video set at the next transmission of the starting point server, v is the flow consumption rate value provided for constructing the compressed audio and video set at the previous transmission, P is the buffer packet loss rate corresponding to v, and δ is a preset weight factor.
Optionally, the performing compression processing on the audio-video set of the user to obtain a compressed audio-video set includes:
acquiring the data volume of a user audio-video set, and executing segmentation operation on the user audio-video set according to the data volume to obtain a plurality of audio-video diversity;
adding redundant data packets to each audio and video diversity, and executing coding operation after the redundant data packets are added successfully to obtain a plurality of audio and video coding sets;
performing compression operation on each audio-video coding set based on a compression algorithm to obtain compressed audio-video diversity;
and recombining each compressed audio-video diversity according to the sequence of the audio-video diversity to obtain the compressed audio-video set.
Optionally, the executing a slicing operation on the audio and video set of the user according to the data volume to obtain a plurality of audio and video diversity includes:
the number of audio-video diversity obtained after the slicing operation is performed is calculated according to the following formula:
Figure SMS_4
wherein Q represents the number of the audio-video diversity, ω represents a weight factor for calculating the number of the audio-video diversity, p represents the data amount of the audio-video set of the user, and μ and σ are both adjustment factors.
Optionally, the performing a code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set includes:
Acquiring software and hardware parameters of all webpage ends participating in audio and video interaction;
extracting weakest software and hardware parameters from the software and hardware parameters of all the webpage ends;
inputting the weakest software and hardware parameters into a code rate classifier to execute code rate classification and obtain a code rate classification result, wherein the code rate classification result comprises three levels A, B and C, the level A code rate value being the highest and the level C code rate value being the lowest;
and according to the code rate classification result, performing code rate fixing operation on the direct transmission audio-video set during uploading to obtain the code rate audio-video set.
Optionally, the code rate classifier includes an SVM, a random forest, or XGBOOST.
In order to solve the above problems, the present invention further provides an audio/video interaction device for implementing a web page based on webrtc, where the device includes:
the webpage end determining module is used for receiving interaction instructions of the audio and video, determining the number of webpage ends participating in interaction according to the interaction instructions, wherein each webpage end can log in webrtc, the webpage end generating the audio and video is called a starting webpage end, and the webpage end receiving the audio and video is called an ending webpage end;
the audio/video compression module is used for starting an interaction server if the number of the webpage ends is greater than or equal to a preset webpage end threshold value, wherein the interaction server is used for transmitting audio/video information between the webpage ends, receiving a client audio/video set generated by a starting webpage end through webrtc, wherein the client audio/video set is to be transmitted to a final webpage end, and performing compression processing on the client audio/video set to obtain a compressed audio/video set;
The buffer packet loss rate calculation module is used for performing transmission on a channel where the compressed audio and video set is sent to the interaction server until the compressed audio and video set is transmitted to the terminal webpage end, and calculating the buffer packet loss rate of the compressed audio and video set;
the direct transmission mode determining module is used for adjusting a transmission strategy of the next transmission of the starting point server according to the cache packet loss rate until the audio and video interaction is completed, if the number of the webpage ends is smaller than the webpage end threshold value, the interaction server is not started, and it is determined that the audio and video information between the webpage ends adopts a direct transmission mode;
the code rate fixing module is used for receiving a direct transmission audio and video set generated by the starting point webpage end through webrtc in a direct transmission mode, performing code rate fixing operation on the direct transmission audio and video set to obtain a code rate audio and video set, and directly transmitting the code rate audio and video set to the ending point webpage end to complete audio and video interaction;
wherein, when the compressed audio and video set is sent to the channel where the interaction server is located for transmission, the method further comprises the following steps:
obtaining average time consumption and average flow consumption value of adding redundant data packet, coding and compressing each time;
calculating to obtain the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value;
wherein the calculating the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value comprises the following steps:
and calculating the flow consumption rate value of each time of adding redundant data packets, encoding and compressing according to the average time consumption and the average flow consumption value, wherein the calculation method is as follows:
v_l = F_l / t_l
wherein v_l represents the flow consumption rate value of performing the operation of adding redundant data packets, encoding or compressing, t_l represents the average time taken to perform that operation, and F_l represents the average traffic consumption value when performing that operation;
averaging the flow consumption rate values of adding redundant data packets, encoding and compressing to obtain the flow consumption rate value of the compressed audio-video set;
the calculating the cache packet loss rate of the compressed audio and video set comprises the following steps:
calculating the cache packet loss rate according to the following formula:
P = (1/N) · Σ η(v) · f(τ)
wherein P represents the buffer packet loss rate when the compressed audio and video set is transmitted to the terminal webpage end for buffering, N represents the total packet number of the compressed audio and video set, η(v) represents the packet loss of each compressed audio and video set transmitted to the terminal webpage end given the flow consumption rate value v of the compressed audio and video set, f(τ) represents a random probability function, where a Gaussian probability function can be used, and τ represents the signal strength value between the interaction server and the terminal webpage end each time the compressed audio and video set is transmitted;
The adjusting the transmission strategy of the next transmission of the starting point server according to the buffer packet loss rate comprises the following steps:
and when the starting point server transmits next time, constructing a flow consumption rate value of the compressed audio and video set according to the following calculation:
v_s = v · (1 - δ · P)
wherein v_s is the flow consumption rate value provided for constructing the compressed audio and video set at the next transmission of the starting point server, v is the flow consumption rate value provided for constructing the compressed audio and video set at the previous transmission, P is the buffer packet loss rate corresponding to v, and δ is a preset weight factor.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
a memory storing at least one instruction; and
a processor that executes the instructions stored in the memory to implement the above method for realizing audio and video interaction at the webpage end based on webrtc.
In order to solve the above problems, the present invention further provides a computer readable storage medium, in which at least one instruction is stored, the at least one instruction being executed by a processor in an electronic device to implement the above method for realizing audio and video interaction at the webpage end based on webrtc.
In order to solve the problems described in the background art, an interaction instruction for audio and video is first received, and the number of webpage ends participating in the interaction is determined according to the interaction instruction, wherein each webpage end can log in to webrtc, the webpage end generating the audio and video is called the starting point webpage end, and the webpage end receiving the audio and video is called the ending point webpage end. Further, the embodiment of the invention does not directly use an interaction server to respond to the interaction instruction; the interaction server is started only if the number of webpage ends is greater than or equal to a preset webpage end threshold value. In that case, the client audio and video set generated by the starting point webpage end through webrtc is received and compressed to obtain a compressed audio and video set, and the compressed audio and video set is sent to the channel where the interaction server is located for transmission until it reaches the ending point webpage end. It should be emphasized that, when the transmission of the compressed audio and video set is completed, the buffer packet loss rate of the compressed audio and video set is also calculated; the buffer packet loss rate reflects the quality of the current transmission, and the transmission strategy for the next transmission is adjusted according to it, which improves the audio and video transmission quality. Further, if the number of webpage ends is smaller than the webpage end threshold value, the number of people participating in the audio and video conference is not large, so the number and scale of the audio and video to be uploaded and downloaded by each webpage end are relatively small; therefore, in order to save resources, the interaction server is not started, and it is determined that the audio and video information between the webpage ends adopts the direct transmission form. Further, in the direct transmission form, a direct-transmission audio and video set generated by the starting point webpage end through webrtc is received, and a code rate fixing operation is performed on it to obtain the code rate audio and video set. It should be explained that the code rate fixing operation is intended to prevent accidents during the transmission of the direct-transmission audio and video set caused by unstable signals or weak software and hardware capability of the webpage ends, so it is understandable that the code rate is generally fixed to a relatively small value. Therefore, the method, device, electronic equipment and computer readable storage medium for realizing audio and video interaction at the webpage end based on webrtc provided by the invention can improve the interaction quality of audio and video interaction.
Drawings
FIG. 1 is a flow chart of a webrtc-based method for realizing audio and video interaction at the webpage end according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of a webrtc-based device for realizing audio and video interaction at the webpage end according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device for implementing the webrtc-based webpage-end audio and video interaction method according to an embodiment of the present invention.
In the figure, 1-an electronic device; 10-a processor; 11-memory; 12-bus; 100-realizing an audio-video interaction device of a webpage end based on webrtc; 101-a webpage end determining module; 102-an audio and video compression module; 103-a cache packet loss rate calculation module; 104, a direct transmission mode determining module; 105-code rate fixing module.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides a method for realizing audio and video interaction at the webpage end based on webrtc. The execution subject of the webrtc-based webpage-end audio/video interaction method includes at least one of a server side, a terminal and the like that can be configured to execute the method provided by the embodiment of the application. In other words, the method for realizing audio and video interaction at the webpage end based on webrtc can be executed by software or hardware installed in a terminal device or a server device, and the software can be a blockchain platform. The server side includes but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Referring to fig. 1, a flow chart of a webrtc-based method for realizing audio and video interaction at the webpage end according to an embodiment of the present invention is shown. In this embodiment, the method for realizing audio and video interaction at the webpage end based on webrtc includes:
s1, receiving an interaction instruction of an audio and video, and determining the number of web page ends participating in interaction according to the interaction instruction, wherein each web page end can log in webrtc, the web page end generating the audio and video is called a starting point web page end, and the web page end receiving the audio and video is called an ending point web page end.
In the embodiment of the invention, the interaction instruction for the audio and video can be sent by a user at one of the webpage ends. Illustratively, suppose Xiao Zhang, a company manager, wants to hold a multiparty conference with colleagues who are out on assignment, and therefore initiates an audio and video interaction instruction.
In addition, webrtc is a technology for directly realizing real-time communication at the webpage end; it realizes video calls through a webpage in the browser without downloading a client, which brings great convenience to users. It should also be understood that most audio and video communication software is currently developed based on the webrtc technology, and webrtc realizes point-to-point communication sessions between browsers, so the technology plays an extremely important role.
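For illustration only, the following is a minimal sketch (not part of the patent) of how a webpage end can capture local audio and video and open a webrtc peer connection in the browser; the sendToSignaling callback stands in for whatever signaling channel the application uses, and the STUN server address is merely an example.

```typescript
// Minimal sketch: a webpage end captures local audio/video and opens a
// webrtc peer connection. sendToSignaling() is a hypothetical signaling
// helper; the patent does not prescribe a signaling mechanism.
async function startWebrtcSession(
  sendToSignaling: (msg: object) => void,
): Promise<RTCPeerConnection> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // example STUN server
  });

  // Publish the locally captured tracks so the other webpage end (or the
  // interaction server) can receive them.
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }

  // Forward ICE candidates and the SDP offer over the signaling channel.
  pc.onicecandidate = (ev) => {
    if (ev.candidate) sendToSignaling({ candidate: ev.candidate });
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignaling({ sdp: pc.localDescription });

  return pc;
}
```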
It will be appreciated that a multiparty conference involves multiple web page ends; for example, Xiao Zhang corresponds to one web page end and each participating colleague corresponds to another, so the web page end transmitting Xiao Zhang's audio and video is referred to as the start web page end, and a web page end receiving Xiao Zhang's audio and video is referred to as an end web page end.
S2, if the number of the webpage ends is larger than or equal to a preset webpage end threshold value, starting an interaction server, wherein the interaction server is used for transmitting audio and video information between the webpage ends.
It should be explained that, in order to improve the audio and video transmission quality and efficiency between the web page ends, the embodiment of the present invention first determines the number of web page ends participating in the audio and video conference. If the number of web page ends is greater than or equal to the preset web page end threshold value, the audio and video information generated by each web page end is transmitted with the interaction server acting as a bridge; if the number of web page ends is less than the web page end threshold value, the interaction server is not used, so that the audio and video information is transmitted directly between the web page ends, as sketched below.
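A minimal sketch of this mode decision is shown below; the threshold value of 4 webpage ends is purely illustrative, since the patent only specifies a preset threshold.

```typescript
// Sketch of the S2 decision: relay through an interaction server when enough
// webpage ends participate, otherwise transmit directly between webpage ends.
type TransmissionMode = "server-relay" | "direct";

function chooseTransmissionMode(webpageEndCount: number, threshold = 4): TransmissionMode {
  return webpageEndCount >= threshold ? "server-relay" : "direct";
}

// Example: a three-party conference with a threshold of 4 uses direct transmission.
const mode = chooseTransmissionMode(3); // "direct"
```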
And S3, receiving a client audio and video set generated by the starting point webpage end through webrtc, wherein the client audio and video set is transmitted to the ending point webpage end, and performing compression processing on the client audio and video set to obtain a compressed audio and video set.
Illustratively, Xiao Zhang starts assigning work immediately after initiating the multiparty conference, thus generating an audio and video set of Xiao Zhang assigning work. Further, the performing compression processing on the audio and video set of the user to obtain a compressed audio and video set includes:
acquiring the data volume of a user audio-video set, and executing segmentation operation on the user audio-video set according to the data volume to obtain a plurality of audio-video diversity;
adding redundant data packets to each audio and video diversity, and executing coding operation after the redundant data packets are added successfully to obtain a plurality of audio and video coding sets;
performing compression operation on each audio-video coding set based on a compression algorithm to obtain compressed audio-video diversity;
and recombining each compressed audio-video diversity according to the sequence of the audio-video diversity to obtain the compressed audio-video set.
It should be understood that the larger the data volume, the more slices need to be produced. For example, if Xiao Zhang only turns on standard definition mode during the multiparty conference, the audio and video set generated in one second is only about 0.1 MB, so the number of audio-video diversity obtained after segmentation is small; if Xiao Zhang turns on ultra-high definition mode, the audio and video set generated in one second may reach 1 MB, so the number of audio-video diversity obtained after segmentation is correspondingly larger. In detail, the performing a segmentation operation on the audio and video set of the user according to the data volume to obtain a plurality of audio-video diversity includes:
The number of audio-video diversity obtained after the slicing operation is performed is calculated according to the following formula:
Figure SMS_8
wherein Q represents the number of the audio-video diversity, ω represents a weight factor for calculating the number of the audio-video diversity, p represents the data amount of the audio-video set of the user, and μ and σ are both adjustment factors.
It should be explained that in the embodiment of the present invention, the values of μ and σ are 1 and 2, respectively.
In the embodiment of the present invention, adding redundant data packets, encoding and compressing are all conventional technical means, and will not be described herein.
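The compression flow described above can be sketched as follows; the segment count, the redundancy scheme, the encoder and the compressor are passed in as placeholders, since the patent does not fix concrete algorithms for these steps.

```typescript
// Sketch: split the audio/video set into segments, add redundant packets to
// each segment, encode, compress, then reassemble in the original order.
interface Segment { index: number; data: Uint8Array; }

function compressAudioVideoSet(
  raw: Uint8Array,
  segmentCount: number,
  addRedundancy: (d: Uint8Array) => Uint8Array,
  encode: (d: Uint8Array) => Uint8Array,
  compress: (d: Uint8Array) => Uint8Array,
): Uint8Array {
  // 1. Segmentation according to data volume (segmentCount is assumed given).
  const size = Math.ceil(raw.length / segmentCount);
  const segments: Segment[] = [];
  for (let i = 0; i < segmentCount; i++) {
    segments.push({ index: i, data: raw.slice(i * size, (i + 1) * size) });
  }

  // 2-3. Redundancy, encoding and compression applied per segment.
  const processed = segments.map((s) => ({
    index: s.index,
    data: compress(encode(addRedundancy(s.data))),
  }));

  // 4. Reassemble the compressed segments in the original order.
  processed.sort((a, b) => a.index - b.index);
  const total = processed.reduce((n, s) => n + s.data.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const s of processed) {
    out.set(s.data, offset);
    offset += s.data.length;
  }
  return out;
}
```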
And S4, transmitting the compressed audio and video set to a channel where the interactive server is located until the compressed audio and video set is transmitted to a terminal webpage end, and calculating the cache packet loss rate of the compressed audio and video set.
It can be understood that adding redundant data packets, encoding and compression all consume traffic, and a reasonable traffic supply can improve the efficiency of encoding, compression and other operations, thereby ensuring that the compressed audio and video set is sent to the channel where the interaction server is located in a timely manner.
On the other hand, if, after the compressed audio and video set is sent to the channel for transmission and then buffered at the destination web page end, an excessive packet loss rate occurs due to the software and hardware capability of the destination web page end, the response capability of the interaction server, or similar factors, then the start web page end should not supply excessive traffic for the operations of adding redundant data packets, encoding, compressing and the like, so that the two web page ends stay coordinated and resource waste is avoided. Therefore, in detail, when the compressed audio and video set is sent to the channel where the interaction server is located for transmission, the method further comprises:
Obtaining average time consumption and average flow consumption value of adding redundant data packet, coding and compressing each time;
and calculating the flow consumption rate value of the compressed audio and video set according to the average time consumption and the average flow consumption value.
Further, the calculating, according to the average time consumption and the average flow consumption value, the flow consumption rate value of the compressed audio-video set includes:
and calculating the flow consumption rate value of each time of adding redundant data packets, encoding and compressing according to the average time consumption and the average flow consumption value, wherein the calculation method is as follows:
v_l = F_l / t_l
wherein v_l represents the flow consumption rate value of performing the operation of adding redundant data packets, encoding or compressing, t_l represents the average time taken to perform that operation, and F_l represents the average traffic consumption value when performing that operation;
averaging the flow consumption rate values of adding redundant data packets, encoding and compressing to obtain the flow consumption rate value of the compressed audio and video set.
It can be appreciated that the flow consumption rate value of the compressed audio-video set can be obtained by summing the flow consumption rate values of adding redundant data packets, encoding and compression and dividing the sum by three.
It should be emphasized that the embodiment of the present invention only enumerates the three operations of adding redundant data packets, encoding and compressing; other embodiments may further include an audio/video quality improvement operation, a denoising operation, etc., so it is understood that the calculation of the corresponding flow consumption rate value is determined by the generation steps of the compressed audio/video set.
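A sketch of this bookkeeping is given below. Treating each operation's rate as the average traffic consumed divided by the average time taken, and the set-level value as the mean over the operations, follows the variable definitions above, but the exact division form is an assumption since the original formula is published only as an image.

```typescript
// Sketch: per-operation flow consumption rate (redundancy, encoding,
// compression) and the averaged value for the compressed audio/video set.
interface OperationStats {
  avgTimeSeconds: number;   // t_l: average time taken by the operation
  avgTrafficBytes: number;  // F_l: average traffic consumed by the operation
}

function flowConsumptionRate(ops: OperationStats[]): number {
  const rates = ops.map((o) => o.avgTrafficBytes / o.avgTimeSeconds); // v_l per operation
  return rates.reduce((a, b) => a + b, 0) / rates.length;             // mean over operations
}
```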
Further, the calculating the buffer packet loss rate of the compressed audio and video set includes:
P = (1/N) · Σ η(v) · f(τ)
wherein P represents the buffer packet loss rate when the compressed audio and video set is transmitted to the terminal webpage end for buffering, N represents the total packet number of the compressed audio and video set, η(v) represents the packet loss of each compressed audio and video set transmitted to the terminal webpage end given the flow consumption rate value v of the compressed audio and video set, f(τ) represents a random probability function, where a Gaussian probability function can be used, and τ represents the signal strength value between the interaction server and the terminal webpage end each time the compressed audio and video set is transmitted.
According to the above description, after each compressed audio and video set is transmitted to the end web page end, the buffer packet loss rate of the compressed audio and video set at the end web page end needs to be calculated, so that the transmission strategy of the start web page end for the next transmission is adjusted according to the buffer packet loss rate, where the transmission strategy mainly comprises the flow supply strategy for generating the next compressed audio and video set.
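For illustration, a simple empirical loss measurement at the end webpage end is sketched below; the patent's formula additionally conditions the loss on the flow consumption rate v and weights it by a probability function (for example Gaussian) of the per-transmission signal strength, which is omitted here.

```typescript
// Sketch: buffered packet loss rate measured as lost packets over total
// packets for one transmitted compressed audio/video set.
function bufferedPacketLossRate(totalPackets: number, receivedPackets: number): number {
  if (totalPackets === 0) return 0;
  return (totalPackets - receivedPackets) / totalPackets;
}
```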
And S5, adjusting a transmission strategy of next transmission of the starting point server according to the cache packet loss rate until the audio/video interaction is completed.
In detail, the adjusting the transmission policy of the next transmission of the origin server according to the buffered packet loss rate includes:
and when the starting point server transmits next time, constructing a flow consumption rate value of the compressed audio and video set according to the following calculation:
v_s = v · (1 - δ · P)
wherein v_s is the flow consumption rate value provided for constructing the compressed audio and video set at the next transmission of the starting point server, v is the flow consumption rate value provided for constructing the compressed audio and video set at the previous transmission, P is the buffer packet loss rate corresponding to v, and δ is a preset weight factor.
It can be understood that, because the transmission of the compressed audio and video set is continuous and may last a long time, the flow consumption rate value consumed in the current transmission is closely related to the buffer packet loss rate of the previous transmission. When the buffer packet loss rate at the end point web page end was higher last time, the flow consumption rate value provided by the start point web page end this time is correspondingly reduced, so as to reduce excessive resource consumption; when the buffer packet loss rate at the end point web page end was lower last time, it indicates that the receiving capability of the end point web page end for the compressed audio and video set is strong (reflected in decoding capability, storage capability and other aspects), and the flow consumption rate value provided by the start point web page end this time is correspondingly increased, so as to improve the interaction effect between the start point web page end and the end point web page end.
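The feedback rule can be sketched as below. Because the original formula appears only as an image, the linear update shown (supply less traffic after a lossy transmission and keep the supply essentially unchanged after a clean one) is an assumption that follows the qualitative description above.

```typescript
// Sketch: adjust the flow consumption rate supplied for the next compressed
// audio/video set according to the last buffered packet loss rate.
function nextFlowConsumptionRate(
  previousRate: number,   // v: rate supplied at the previous transmission
  packetLossRate: number, // P: buffered packet loss rate observed for it
  delta: number,          // δ: preset weight factor
): number {
  // Higher loss reduces the traffic supplied for producing the next set.
  const next = previousRate * (1 - delta * packetLossRate);
  return Math.max(next, 0);
}
```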
S6, if the number of the webpage ends is smaller than the webpage end threshold value, the interactive server is not started, and it is determined that the audio and video information between the webpage ends adopts a direct transmission mode;
It should be understood that when the number of web page ends is smaller than the web page end threshold value, the number of people participating in the audio and video conference is not large. For example, in the management conference initiated by Xiao Zhang, only 2 colleagues are dispatched outside, which makes 3 participants including Xiao Zhang, while the web page end threshold value is 4, so this is a small conference. Therefore, in order to save resources, the interaction server is not started; that is, the web page end where Xiao Zhang is located and the web page ends of the other 2 participants transmit the audio and video information they generate directly to each other, which is the direct transmission mode, without coordination by the interaction server.
S7, under the direct transmission mode, receiving a direct transmission audio-video set generated by a starting point webpage end through webrtc, and executing code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set;
however, it should be understood that the advantage of the direct transmission form is that the interactive server is not required to forward, but because each web page end needs to directly upload the audio and video to other web page ends and simultaneously receive the audio and video uploaded by other web page ends, the uplink bandwidth consumption is relatively large, and in addition, the video encoding and decoding are all at the web page ends, and the consumption of the CPU of the web page ends is relatively large. Therefore, in order to ensure the normal running of the audio-video conference, the code rate needs to be directly determined.
In detail, the performing the code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set includes:
acquiring software and hardware parameters of all webpage ends participating in audio and video interaction;
extracting weakest software and hardware parameters from the software and hardware parameters of all the webpage ends;
inputting the weakest software and hardware parameters into a code rate classifier to execute code rate classification and obtain a code rate classification result, wherein the code rate classification result comprises three levels A, B and C, the level A code rate value being the highest and the level C code rate value being the lowest;
and according to the code rate classification result, performing code rate fixing operation on the direct transmission audio-video set during uploading to obtain the code rate audio-video set.
It is understood that the software and hardware parameters include software parameters and hardware parameters, wherein the software parameters include a signal strength value, an operation bit number of an operating system, a version of the operating system, and the like; the hardware parameters include CPU power, GPU model, power, circuit board model, etc.
According to the bucket effect, in a multiparty audio and video conference, if the lagging software and hardware parameters of one webpage end cause its received or uploaded audio and video to stall, the conference effect of the whole multiparty audio and video conference is affected.
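A sketch of this code-rate fixing flow follows; the parameter fields, the scoring rule, the level thresholds and the bitrate per level are illustrative assumptions rather than values from the patent (which instead allows an SVM, random forest or XGBOOST classifier to assign the levels).

```typescript
// Sketch: take the weakest participant's software/hardware parameters,
// classify them into level A, B or C, and fix the uploading code rate.
type RateLevel = "A" | "B" | "C";

interface EndpointParams { signalStrength: number; cpuScore: number; gpuScore: number; }

const score = (p: EndpointParams) => p.signalStrength + p.cpuScore + p.gpuScore;

function weakestEndpoint(all: EndpointParams[]): EndpointParams {
  return all.reduce((weakest, p) => (score(p) < score(weakest) ? p : weakest));
}

function classifyRateLevel(p: EndpointParams): RateLevel {
  const s = score(p);
  if (s >= 240) return "A"; // strongest endpoints: highest code rate
  if (s >= 150) return "B";
  return "C";               // weakest endpoints: lowest code rate
}

function fixedBitrateKbps(level: RateLevel): number {
  return { A: 2500, B: 1200, C: 600 }[level]; // assumed example bitrates
}
```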
And S8, directly transmitting the code rate audio and video set to the terminal webpage end to complete audio and video interaction.
In order to solve the problems described in the background art, an interaction instruction for audio and video is first received, and the number of webpage ends participating in the interaction is determined according to the interaction instruction, wherein each webpage end can log in to webrtc, the webpage end generating the audio and video is called the starting point webpage end, and the webpage end receiving the audio and video is called the ending point webpage end. Further, the embodiment of the invention does not directly use an interaction server to respond to the interaction instruction; the interaction server is started only if the number of webpage ends is greater than or equal to a preset webpage end threshold value. In that case, the client audio and video set generated by the starting point webpage end through webrtc is received and compressed to obtain a compressed audio and video set, and the compressed audio and video set is sent to the channel where the interaction server is located for transmission until it reaches the ending point webpage end. It should be emphasized that, when the transmission of the compressed audio and video set is completed, the buffer packet loss rate of the compressed audio and video set is also calculated; the buffer packet loss rate reflects the quality of the current transmission, and the transmission strategy for the next transmission is adjusted according to it, which improves the audio and video transmission quality. Further, if the number of webpage ends is smaller than the webpage end threshold value, the number of people participating in the audio and video conference is not large, so the number and scale of the audio and video to be uploaded and downloaded by each webpage end are relatively small; therefore, in order to save resources, the interaction server is not started, and it is determined that the audio and video information between the webpage ends adopts the direct transmission form. Further, in the direct transmission form, a direct-transmission audio and video set generated by the starting point webpage end through webrtc is received, and a code rate fixing operation is performed on it to obtain the code rate audio and video set. It should be explained that the code rate fixing operation is intended to prevent accidents during the transmission of the direct-transmission audio and video set caused by unstable signals or weak software and hardware capability of the webpage ends, so it is understandable that the code rate is generally fixed to a relatively small value. Therefore, the method, device, electronic equipment and computer readable storage medium for realizing audio and video interaction at the webpage end based on webrtc provided by the invention can improve the interaction quality of audio and video interaction.
Fig. 2 is a functional block diagram of the webrtc-based webpage-end audio/video interaction device according to an embodiment of the present invention.
The audio/video interaction device 100 for realizing the webpage end based on webrtc can be installed in an electronic device. According to the functions implemented, the webrtc-based audio/video interaction device 100 for realizing the webpage end may include a webpage end determining module 101, an audio/video compression module 102, a buffer packet loss rate calculation module 103, a direct transmission mode determining module 104, and a code rate fixing module 105. A module of the invention, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
The web page end determining module 101 is configured to receive an interaction instruction of an audio and a video, determine the number of web page ends participating in interaction according to the interaction instruction, wherein each web page end can log in webrtc, the web page end generating the audio and the video is called a starting point web page end, and the web page end receiving the audio and the video is called an ending point web page end;
the audio/video compression module 102 is configured to start an interaction server if the number of web page ends is greater than or equal to a preset web page end threshold, where the interaction server is configured to transmit audio/video information between the web page ends, receive a client audio/video set generated by a starting web page end through webrtc, where the client audio/video set is to be transmitted to a final web page end, and perform compression processing on the client audio/video set to obtain a compressed audio/video set;
The buffer packet loss rate calculation module 103 is configured to perform transmission on a channel where the compressed audio and video set is sent to the interaction server until the compressed audio and video set is transmitted to the end point web page end, and calculate a buffer packet loss rate of the compressed audio and video set;
the direct transmission mode determining module 104 is configured to adjust a transmission policy of next transmission by the starting point server according to the buffer packet loss rate until the audio/video interaction is completed, and if the number of web page ends is less than the web page end threshold value, not start the interaction server, and determine that the audio/video information between the web page ends adopts a direct transmission form;
the code rate fixing module 105 is configured to receive a direct-transmission audio/video set generated by the origin web page end through webrtc in a direct-transmission mode, perform a code rate fixing operation on the direct-transmission audio/video set to obtain a code rate audio/video set, and directly transmit the code rate audio/video set to the end web page end to complete audio/video interaction.
In detail, the modules in the webrtc-based webpage-end audio/video interaction device 100 in the embodiment of the present invention use the same technical means as the webrtc-based webpage-end audio/video interaction method described in fig. 1 and can produce the same technical effects, which are not described herein again.
Fig. 3 is a schematic structural diagram of an electronic device for implementing the webrtc-based webpage-end audio/video interaction method according to an embodiment of the present invention.
The electronic device 1 may include a processor 10, a memory 11 and a bus 12, and may further include a computer program stored in the memory 11 and executable on the processor 10, such as the program of the webrtc-based webpage-end audio/video interaction method.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various data, such as the code of the webrtc-based webpage-end audio/video interaction method program, but also to temporarily store data that has been output or is to be output.
The processor 10 may in some embodiments be composed of integrated circuits, for example a single packaged integrated circuit, or may be composed of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the Control Unit of the electronic device; it connects the various components of the entire electronic device using various interfaces and lines, runs or executes the programs or modules stored in the memory 11 (for example, the webrtc-based webpage-end audio/video interaction method program, etc.), and invokes the data stored in the memory 11 to perform various functions of the electronic device 1 and process data.
The bus 12 may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus, or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus 12 may be divided into an address bus, a data bus, a control bus, etc. The bus 12 is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 3 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and that the scope of the patent application is not limited to this configuration.
The webrtc-based webpage-end audio/video interaction method program stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, and when run in the processor 10, can implement:
receiving an interaction instruction of audio and video, and determining the number of webpage ends participating in interaction according to the interaction instruction, wherein each webpage end can log in webrtc, the webpage end generating the audio and video is called a starting webpage end, and the webpage end receiving the audio and video is called a finishing webpage end;
If the number of the webpage ends is greater than or equal to a preset webpage end threshold value, starting an interaction server, wherein the interaction server is used for transmitting audio and video information between the webpage ends;
receiving a client audio-video set generated by a starting point webpage end through webrtc, wherein the client audio-video set is to be transmitted to a final point webpage end, and performing compression processing on the client audio-video set to obtain a compressed audio-video set;
transmitting the compressed audio and video set to a channel where the interactive server is located until the compressed audio and video set is transmitted to a terminal webpage end, and calculating a cache packet loss rate of the compressed audio and video set;
adjusting a transmission strategy of next transmission of the starting point server according to the buffer packet loss rate until the audio/video interaction is completed;
if the number of the webpage ends is smaller than the webpage end threshold value, an interaction server is not started, and it is determined that the audio and video information between the webpage ends adopts a direct transmission mode;
in a direct transmission mode, receiving a direct transmission audio-video set generated by a starting point webpage end through webrtc, and executing code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set;
and directly transmitting the code rate audio and video set to the terminal webpage end to complete audio and video interaction.
Specifically, the specific implementation method of the above instructions by the processor 10 may refer to descriptions of related steps in the corresponding embodiments of fig. 1 to 3, which are not repeated herein.
Further, the modules/units integrated in the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. The computer readable storage medium may be volatile or nonvolatile. For example, the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM).
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
receiving an interaction instruction of audio and video, and determining the number of webpage ends participating in interaction according to the interaction instruction, wherein each webpage end can log in webrtc, the webpage end generating the audio and video is called a starting webpage end, and the webpage end receiving the audio and video is called a finishing webpage end;
If the number of the webpage ends is greater than or equal to a preset webpage end threshold value, starting an interaction server, wherein the interaction server is used for transmitting audio and video information between the webpage ends;
receiving a client audio-video set generated by a starting point webpage end through webrtc, wherein the client audio-video set is to be transmitted to a final point webpage end, and performing compression processing on the client audio-video set to obtain a compressed audio-video set;
transmitting the compressed audio and video set to a channel where the interactive server is located until the compressed audio and video set is transmitted to a terminal webpage end, and calculating a cache packet loss rate of the compressed audio and video set;
adjusting a transmission strategy of next transmission of the starting point server according to the buffer packet loss rate until the audio/video interaction is completed;
if the number of the webpage ends is smaller than the webpage end threshold value, an interaction server is not started, and it is determined that the audio and video information between the webpage ends adopts a direct transmission mode;
in a direct transmission mode, receiving a direct transmission audio-video set generated by a starting point webpage end through webrtc, and executing code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set;
and directly transmitting the code rate audio and video set to the terminal webpage end to complete audio and video interaction.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
In addition, the functional modules in the embodiments of the present invention may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware, or in the form of hardware combined with software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the present application may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (6)

1. An audio and video interaction method for realizing a webpage end based on webrtc is characterized by comprising the following steps:
receiving an interaction instruction of audio and video, and determining the number of webpage ends participating in interaction according to the interaction instruction, wherein each webpage end can log in webrtc, the webpage end generating the audio and video is called a starting webpage end, and the webpage end receiving the audio and video is called a finishing webpage end;
if the number of the webpage ends is greater than or equal to a preset webpage end threshold value, starting an interaction server, wherein the interaction server is used for transmitting audio and video information between the webpage ends;
receiving a client audio-video set generated by a starting point webpage end through webrtc, wherein the client audio-video set is to be transmitted to a final point webpage end, and performing compression processing on the client audio-video set to obtain a compressed audio-video set;
transmitting the compressed audio and video set to a channel where the interactive server is located until the compressed audio and video set is transmitted to a terminal webpage end, and calculating a cache packet loss rate of the compressed audio and video set;
Adjusting a transmission strategy of next transmission of the starting point server according to the buffer packet loss rate until the audio/video interaction is completed;
if the number of the webpage ends is smaller than the webpage end threshold value, an interaction server is not started, and it is determined that the audio and video information between the webpage ends adopts a direct transmission mode;
in a direct transmission mode, receiving a direct transmission audio-video set generated by a starting point webpage end through webrtc, and executing code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set;
directly transmitting the code rate audio and video set to the terminal webpage end to complete audio and video interaction;
wherein the sending the compressed audio and video set to the channel where the interaction server is located for transmission further comprises:
obtaining the average time consumption and the average flow consumption value of each operation of adding redundant data packets, encoding and compressing;
calculating to obtain the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value;
and calculating the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value, wherein the flow consumption rate value comprises the following steps:
and calculating the flow consumption rate value of each time of adding redundant data packets, encoding and compressing according to the average time consumption and the average flow consumption value, wherein the calculation method is as follows:
[Formula 1]
wherein v_l represents the flow consumption rate value of performing the operation of adding redundant data packets, encoding or compressing, t_l represents the average time taken to perform the operation of adding redundant data packets, encoding or compressing, and F_l represents the average flow consumption value when performing the operation of adding redundant data packets, encoding or compressing;
averaging the flow consumption rate values of adding redundant data packets, encoding and compressing to obtain the flow consumption rate value of the compressed audio-video set;
the calculating the cache packet loss rate of the compressed audio and video set comprises the following steps:
and calculating to obtain the cache packet loss rate according to the following steps:
[Formula 2]
wherein P represents the cache packet loss rate when the compressed audio and video set is transmitted to the terminal webpage end for caching, N represents the total number of packets in the compressed audio and video set, η(v) represents the packet loss rate of each compressed audio and video set transmitted to the terminal webpage end given the flow consumption rate value v of the compressed audio-video set, f(τ) represents a random probability function, for which a Gaussian probability function may be used, and τ represents the signal strength value between the interaction server and the terminal webpage end at each transmission of the compressed audio and video set;
the adjusting the transmission strategy of the next transmission of the starting point server according to the buffer packet loss rate comprises the following steps:
And when the starting point server transmits next time, constructing a flow consumption rate value of the compressed audio and video set according to the following calculation:
[Formula 3]
wherein v_s represents the flow consumption rate value to be provided by the compressed audio and video set constructed for the next transmission of the starting point server, v represents the flow consumption rate value provided by the compressed audio and video set constructed at the previous transmission of the starting point server, P represents the buffer packet loss rate corresponding to v, and δ represents a preset weight factor.
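For readability only, the three quantities recited in claim 1 can be mocked up in code. Because Formulas (1)-(3) are reproduced above only as placeholders, the concrete expressions below (rate as traffic divided by time, a Gaussian-weighted average loss, and a loss-proportional reduction of the next rate) are assumptions made for this sketch, not the claimed formulas.

// Minimal sketch, assuming concrete forms for Formulas (1)-(3); every
// expression below is a stand-in chosen for illustration.

interface OperationStats {
  avgTimeMs: number;  // t_l: average time of an add-redundancy / encode / compress pass
  avgTraffic: number; // F_l: average traffic consumed by that pass
}

// Assumed reading of Formula (1): flow consumption rate = traffic / time.
function flowConsumptionRate(op: OperationStats): number {
  return op.avgTraffic / op.avgTimeMs;
}

// Flow consumption rate value of the compressed audio-video set: mean of the
// per-operation rates (adding redundancy, encoding, compression).
function setFlowConsumptionRate(ops: OperationStats[]): number {
  return ops.reduce((sum, op) => sum + flowConsumptionRate(op), 0) / ops.length;
}

// Assumed stand-in for Formula (2): cache packet loss rate P averaged over the
// N packets of the set, weighting the loss rate eta(v) by a Gaussian f(tau)
// of the measured signal strength tau.
function cachePacketLossRate(
  etaOfV: number,            // packet loss rate given flow consumption rate v
  signalStrengths: number[], // tau measured at each transmission
  mean = 0,
  sigma = 1,
): number {
  const gaussian = (tau: number) =>
    Math.exp(-((tau - mean) ** 2) / (2 * sigma * sigma)) /
    (sigma * Math.sqrt(2 * Math.PI));
  const total = signalStrengths.reduce((sum, tau) => sum + etaOfV * gaussian(tau), 0);
  return total / signalStrengths.length;
}

// Assumed stand-in for Formula (3): reduce the next flow consumption rate in
// proportion to the observed loss P, scaled by the preset weight factor delta.
function nextFlowConsumptionRate(v: number, p: number, delta: number): number {
  return v * (1 - delta * p);
}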
2. The audio and video interaction method for realizing a webpage end based on webrtc according to claim 1, wherein the performing compression processing on the client audio-video set to obtain a compressed audio-video set comprises:
acquiring the data volume of the client audio-video set, and performing a segmentation operation on the client audio-video set according to the data volume to obtain a plurality of audio-video diversity;
adding redundant data packets to each audio and video diversity, and executing coding operation after the redundant data packets are added successfully to obtain a plurality of audio and video coding sets;
performing compression operation on each audio-video coding set based on a compression algorithm to obtain compressed audio-video diversity;
and recombining each compressed audio-video diversity according to the sequence of the audio-video diversity to obtain the compressed audio-video set.
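A minimal sketch of the claim-2 pipeline follows, assuming stand-ins where the claim leaves details open: the segment count is taken as a parameter (the segment-count formula of claim 3 is not reproduced here), the redundant data packet is a simple XOR parity block, the encoding operation is treated as a pass-through, and compression uses the browser's CompressionStream.

// Illustrative sketch of the claim-2 pipeline: segment by data volume, add a
// redundant block per segment, encode (pass-through here), compress, and keep
// the original order so the receiver can recombine the compressed set.

function segment(data: Uint8Array, segmentCount: number): Uint8Array[] {
  const size = Math.ceil(data.length / segmentCount);
  const parts: Uint8Array[] = [];
  for (let i = 0; i < data.length; i += size) {
    parts.push(data.slice(i, i + size));
  }
  return parts;
}

// Stand-in redundancy: append one XOR parity byte to each segment; the patent
// does not specify the redundancy scheme.
function addRedundancy(part: Uint8Array): Uint8Array {
  let parity = 0;
  for (const byte of part) parity ^= byte;
  const out = new Uint8Array(part.length + 1);
  out.set(part);
  out[part.length] = parity;
  return out;
}

async function compressSet(data: Uint8Array, segmentCount: number): Promise<Uint8Array[]> {
  const compressed: Uint8Array[] = [];
  for (const part of segment(data, segmentCount)) {
    const withRedundancy = addRedundancy(part);
    // The claimed encoding operation is not specified; it is treated as a
    // pass-through in this sketch. Compression uses gzip via CompressionStream;
    // the patent does not name a specific compression algorithm.
    const stream = new Blob([withRedundancy])
      .stream()
      .pipeThrough(new CompressionStream("gzip"));
    compressed.push(new Uint8Array(await new Response(stream).arrayBuffer()));
  }
  // Segments stay in their original order for recombination at the receiver.
  return compressed;
}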
3. The audio and video interaction method for realizing a webpage end based on webrtc according to claim 2, wherein the performing a segmentation operation on the client audio-video set according to the data volume to obtain a plurality of audio-video diversity comprises:
calculating the number of audio-video diversity obtained after the segmentation operation is performed according to the following formula:
[Formula 4]
wherein Q represents the number of audio-video diversity, ω represents a weight factor for calculating the number of audio-video diversity, p represents the data volume of the client audio-video set, and μ and σ are both adjustment factors.
4. The audio and video interaction method for realizing a webpage end based on webrtc according to claim 1, wherein the performing a code rate fixing operation on the direct transmission audio-video set to obtain a code rate audio-video set comprises:
acquiring software and hardware parameters of all webpage ends participating in audio and video interaction;
extracting weakest software and hardware parameters from the software and hardware parameters of all the webpage ends;
inputting the weakest software and hardware parameters into a code rate classifier to perform code rate classification and obtain a code rate classification result, wherein the code rate classification result comprises three levels A, B and C, the level-A code rate value being the highest and the level-C code rate value being the lowest;
and according to the code rate classification result, performing code rate fixing operation on the direct transmission audio-video set during uploading to obtain the code rate audio-video set.
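As an illustration of the code rate fixing operation of claims 4 and 5, the sketch below takes the weakest participant's capability score, maps it to one of the three levels A, B and C, and pins the sender's bitrate through the standard RTCRtpSender.setParameters() API; the capability score, the threshold rule standing in for the SVM/random forest/XGBoost classifier, and the bitrate assigned to each level are all assumptions of this sketch.

// Illustrative sketch of code rate fixing; level bitrates and the capability
// score are assumed values, not taken from the patent.

type RateLevel = "A" | "B" | "C";

const LEVEL_BITRATE_BPS: Record<RateLevel, number> = {
  A: 2_500_000, // highest code rate (assumed)
  B: 1_200_000,
  C: 500_000,   // lowest code rate (assumed)
};

// Stand-in for the code rate classifier (the patent allows an SVM, random
// forest or XGBoost model here); a simple threshold rule is used instead.
function classify(weakestCapabilityScore: number): RateLevel {
  if (weakestCapabilityScore >= 80) return "A";
  if (weakestCapabilityScore >= 40) return "B";
  return "C";
}

async function fixSenderBitrate(
  sender: RTCRtpSender,
  capabilityScores: number[],
): Promise<void> {
  // Take the weakest participant's capability and classify it into A/B/C.
  const weakest = Math.min(...capabilityScores);
  const level = classify(weakest);

  // Pin the outgoing code rate via the standard WebRTC sender parameters.
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].maxBitrate = LEVEL_BITRATE_BPS[level];
  await sender.setParameters(params);
}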
5. The audio and video interaction method for realizing a webpage end based on webrtc according to claim 4, wherein the code rate classifier comprises an SVM, a random forest or XGBoost.
6. An audio and video interaction device for realizing a webpage end based on webrtc is characterized by comprising:
the webpage end determining module is used for receiving interaction instructions of the audio and video, determining the number of webpage ends participating in interaction according to the interaction instructions, wherein each webpage end can log in webrtc, the webpage end generating the audio and video is called a starting webpage end, and the webpage end receiving the audio and video is called an ending webpage end;
the audio/video compression module is used for starting an interaction server if the number of the webpage ends is greater than or equal to a preset webpage end threshold value, wherein the interaction server is used for transmitting audio/video information between the webpage ends, receiving a client audio/video set generated by a starting webpage end through webrtc, wherein the client audio/video set is to be transmitted to a final webpage end, and performing compression processing on the client audio/video set to obtain a compressed audio/video set;
the cache packet loss rate calculation module is used for sending the compressed audio and video set to the channel where the interaction server is located for transmission until the compressed audio and video set is transmitted to the terminal webpage end, and calculating the cache packet loss rate of the compressed audio and video set;
the direct transmission mode determining module is used for adjusting a transmission strategy of the next transmission of the starting point server according to the cache packet loss rate until the audio and video interaction is completed, and for determining, if the number of the webpage ends is smaller than the webpage end threshold value, that the interaction server is not started and that the audio and video information between the webpage ends adopts a direct transmission mode;
the code rate fixing module is used for receiving a direct transmission audio and video set generated by the starting point webpage end through webrtc in a direct transmission mode, performing code rate fixing operation on the direct transmission audio and video set to obtain a code rate audio and video set, and directly transmitting the code rate audio and video set to the ending point webpage end to complete audio and video interaction;
wherein the sending the compressed audio and video set to the channel where the interaction server is located for transmission further comprises:
obtaining the average time consumption and the average flow consumption value of each operation of adding redundant data packets, encoding and compressing;
calculating to obtain the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value;
and calculating the flow consumption rate value of the compressed audio-video set according to the average time consumption and the average flow consumption value, wherein the flow consumption rate value comprises the following steps:
and calculating the flow consumption rate value of each time of adding redundant data packets, encoding and compressing according to the average time consumption and the average flow consumption value, wherein the calculation method is as follows:
[Formula 5]
wherein v_l represents the flow consumption rate value of performing the operation of adding redundant data packets, encoding or compressing, t_l represents the average time taken to perform the operation of adding redundant data packets, encoding or compressing, and F_l represents the average flow consumption value when performing the operation of adding redundant data packets, encoding or compressing;
averaging the flow consumption rate values of adding redundant data packets, encoding and compressing to obtain the flow consumption rate value of the compressed audio-video set;
the calculating the cache packet loss rate of the compressed audio and video set comprises the following steps:
and calculating to obtain the cache packet loss rate according to the following steps:
[Formula 6]
wherein P represents the cache packet loss rate when the compressed audio and video set is transmitted to the terminal webpage end for caching, N represents the total number of packets in the compressed audio and video set, η(v) represents the packet loss rate of each compressed audio and video set transmitted to the terminal webpage end given the flow consumption rate value v of the compressed audio-video set, f(τ) represents a random probability function, for which a Gaussian probability function may be used, and τ represents the signal strength value between the interaction server and the terminal webpage end at each transmission of the compressed audio and video set;
the adjusting the transmission strategy of the next transmission of the starting point server according to the buffer packet loss rate comprises the following steps:
And when the starting point server transmits next time, constructing a flow consumption rate value of the compressed audio and video set according to the following calculation:
[Formula 7]
wherein v_s represents the flow consumption rate value to be provided by the compressed audio and video set constructed for the next transmission of the starting point server, v represents the flow consumption rate value provided by the compressed audio and video set constructed at the previous transmission of the starting point server, P represents the buffer packet loss rate corresponding to v, and δ represents a preset weight factor.
CN202310310836.0A 2023-03-28 2023-03-28 Method and device for realizing audio and video interaction of webpage end based on webrtc Active CN116016458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310310836.0A CN116016458B (en) 2023-03-28 2023-03-28 Method and device for realizing audio and video interaction of webpage end based on webrtc

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310310836.0A CN116016458B (en) 2023-03-28 2023-03-28 Method and device for realizing audio and video interaction of webpage end based on webrtc

Publications (2)

Publication Number Publication Date
CN116016458A CN116016458A (en) 2023-04-25
CN116016458B true CN116016458B (en) 2023-06-23

Family

ID=86025282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310310836.0A Active CN116016458B (en) 2023-03-28 2023-03-28 Method and device for realizing audio and video interaction of webpage end based on webrtc

Country Status (1)

Country Link
CN (1) CN116016458B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612639A (en) * 2020-05-21 2020-09-01 青岛窗外科技有限公司 Synchronous communication method and system applied to insurance scheme
CN112437319A (en) * 2020-11-10 2021-03-02 杭州叙简科技股份有限公司 Method for switching multiple video streams based on webrtc
CN113868573A (en) * 2021-09-07 2021-12-31 青岛希望鸟科技有限公司 Method and system for quickly establishing one-screen interaction based on webpage

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9391909B2 (en) * 2014-02-25 2016-07-12 Intel Corporation Apparatus, method and system of rate adaptation based on goodput
CN104253814B (en) * 2014-09-12 2018-02-23 清华大学 A kind of Streaming Media processing method, server and browser
CN105872440A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Inputting method and device of audio and video information, network television and user equipment
US10608920B2 (en) * 2017-01-06 2020-03-31 Martello Technologies Corporation Performance testing audio/video communications in network
CN112839192A (en) * 2021-01-20 2021-05-25 青岛以萨数据技术有限公司 Audio and video communication system and method based on browser
CN113613032A (en) * 2021-08-04 2021-11-05 杭州梦视网络科技有限公司 Video transmission method of embedded remote teaching experiment system
CN114070939A (en) * 2021-12-28 2022-02-18 宝东信息技术有限公司 Network voice call method, system, storage medium and server
CN115334059A (en) * 2022-08-10 2022-11-11 北京飞讯数码科技有限公司 Audio and video intercommunication method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN116016458A (en) 2023-04-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant