CN108881780B - Method and server for dynamically adjusting definition mode in video call - Google Patents
Method and server for dynamically adjusting definition mode in video call
- Publication number: CN108881780B (application CN201810784414.6A)
- Authority: CN (China)
- Prior art keywords: terminal, rate data, video coding, image frame, frame rate
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2662—Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application discloses a method and a server for dynamically adjusting the definition mode in a video call. In the method, a second terminal receives a remote video data stream that is uploaded by a first terminal and forwarded by the server, and returns to the server the image frame rate data and audio code rate data in the remote video data stream together with the packet loss rate of the link from the first terminal to the second terminal. The server determines a video coding parameter modification strategy for the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate and sends the modification strategy to the first terminal, so that the first terminal adjusts its video coding parameters according to the strategy. Because the server can accurately judge the quality of the network environment of the current video call from the image frame rate data, the audio code rate data or the packet loss rate actually received by the second terminal, the video coding parameters adjusted by the first terminal are adapted to the current network environment and the user experience is improved.
Description
Technical Field
The present application relates to the field of video call technologies, and in particular, to a method and a server for dynamically adjusting a definition mode in a video call.
Background
The video call function is a common function of intelligent terminal devices based on a communication network. The state of the communication network and the performance of the intelligent terminal device are important factors influencing call quality. For example, the data processing capability of the intelligent terminal device, the configured video coding parameters (code rate, video frame rate, video resolution, etc.), and the bandwidth and signal strength of the communication network all affect the definition and smoothness of the video image and the call quality. The code rate (bit rate) is the amount of data a video or audio stream occupies per unit time; because it has a direct relation to the size of the encoded video file, it is an important factor in controlling video quality.
As shown in fig. 1, in the process of establishing a video call connection between two intelligent terminal devices, the calling device sends a video call request to the called device through a server. When the communication network is in a good state and the called device accepts the call request, the call connection between the two devices is established successfully. At this point the devices at both ends capture pictures, encode them according to default video coding parameters, and transmit the encoded pictures to the opposite-end device; the opposite-end device decodes the received remote video data stream and displays the video pictures. This process is repeated to realize end-to-end video communication, with the video coding parameters kept unchanged throughout the call. Because data stream transmission between the two intelligent terminal devices is bidirectional, each device is both a data sending end and a data receiving end.
This video call implementation has at least the following defects: if the video coding parameters preset on one end are high and correspond to a high definition, but the network quality is poor, frequent freezing, frame loss and similar phenomena occur and the call quality decreases; if the video coding parameters preset on one end are low and correspond to a low definition, but the network quality is good, device resources and network resources are wasted and the user experience suffers. How to balance network quality against video definition in a video call, so that the user experience is more friendly, has therefore become a technical problem to be solved urgently.
Disclosure of Invention
The application provides a method and a server for dynamically adjusting the definition mode in a video call, so as to balance network quality and video definition and improve the call quality and user experience.
In a first aspect, the present application provides a method for dynamically adjusting a sharpness mode in a video call, where the method includes:
the server forwards the remote video data stream uploaded by the first terminal to the second terminal through a communication network;
receiving image frame rate data and audio code rate data in the remote video data stream returned by the second terminal, and the packet loss rate of a link from the first terminal to the second terminal;
and determining a video coding parameter modification strategy of the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate, and sending the video coding parameter modification strategy to the first terminal so that the first terminal adjusts the video coding parameters according to the video coding parameter modification strategy.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the image frame rate data, the audio rate data, and the packet loss rate are sent by the second terminal when determining that the connection state of the current call is the call state according to the image frame rate data and the audio rate data in the remote video data stream.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the determining a video coding parameter modification policy of the first terminal according to the image frame rate data, the audio rate data, or the packet loss rate includes:
judging whether the image frame rate data, the audio code rate data or the packet loss rate meet an up-regulation definition condition or a down-regulation definition condition preset in a current definition mode of a second terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meet the up-regulation definition condition, up-regulating video coding parameters of a first terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meet the down-regulation definition condition, down-regulating video coding parameters of a first terminal;
and if the image frame rate data, the audio code rate data or the packet loss rate does not meet the up-regulation definition condition and the down-regulation definition condition, keeping the current video coding parameters of the first terminal unchanged.
In a second aspect, the present application provides a method for dynamically adjusting a sharpness mode in a video call, the method comprising:
the first terminal sends a remote video data stream to the second terminal through the server;
receiving a video coding parameter modification strategy sent by a server and determined by the server according to image frame rate data, audio code rate data or packet loss rate of a link from a first terminal to a second terminal, wherein the image frame rate data and the audio code rate data are returned by the second terminal;
and adjusting the video coding parameters according to the video coding parameter modification strategy so as to adjust the definition mode of the second terminal.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the step of adjusting the video coding parameter according to the video coding parameter modification policy includes: and adjusting the video coding parameters according to the video coding parameter modification strategy and the current definition mode corresponding to the current video coding parameters.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the step of adjusting the video coding parameter according to the video coding parameter modification policy and the current sharpness mode corresponding to the current video coding parameter includes:
if the current definition mode is the definition mode with the lowest level and the video coding parameter modification strategy is the down-regulation video coding parameter, or the current definition mode is the definition mode with the highest level and the video coding parameter modification strategy is the up-regulation video coding parameter, the first terminal keeps the current video coding parameter unchanged;
and if the current definition mode is not the definition mode of the lowest level and is not the definition mode of the highest level, the first terminal adjusts the video coding parameters according to the video coding parameter modification strategy.
In a third aspect, the present application provides a method for dynamically adjusting a sharpness mode in a video call, where the method includes:
the second terminal receives remote video data stream information uploaded by the first terminal and forwarded by the server;
acquiring image frame rate data and audio code rate data in the remote video data stream information;
judging the connection state of the current call according to the image frame rate data and the audio code rate data;
and if the current call connection state is the call state, sending the image frame rate data, the audio code rate data and the packet loss rate of the link from the first terminal to the second terminal to a server.
With reference to the third aspect, in a first possible implementation manner of the third aspect, the determining a connection state of a current call according to the image frame rate data and the audio rate data includes:
and judging the connection state of the current call according to whether the image frame rate data is greater than or equal to a preset frame rate threshold or not, or according to whether the audio code rate data is greater than or equal to a preset code rate threshold or not.
With reference to the third aspect, in a second possible implementation manner of the third aspect, the determining a connection state of a current call according to the image frame rate data and the audio rate data includes:
and judging the connection state of the current call according to whether the image frame rate data is lower than a preset frame rate threshold, and whether the duration time lower than the preset frame rate threshold is greater than a preset first time threshold, or according to whether the audio code rate data is lower than a preset code rate threshold, and whether the duration time lower than the preset code rate threshold is greater than a preset second time threshold.
In a fourth aspect, the present application provides a server comprising a memory and a processor, the processor configured to:
forwarding the remote video data stream uploaded by the first terminal to the second terminal through a communication network;
receiving image frame rate data and audio code rate data in the remote video data stream returned by the second terminal, and the packet loss rate of a link from the first terminal to the second terminal;
and determining a video coding parameter modification strategy of the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate, and sending the video coding parameter modification strategy to the first terminal, so that the first terminal adjusts the video coding parameters according to the video coding parameter modification strategy.
According to the above technical solution, in the method and server for dynamically adjusting the definition mode in a video call provided by the application, the second terminal receives the remote video data stream uploaded by the first terminal and forwarded by the server, and returns to the server the image frame rate data and audio code rate data in the remote video data stream together with the packet loss rate of the link from the first terminal to the second terminal. The server determines a video coding parameter modification strategy for the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate and sends the modification strategy to the first terminal, so that the first terminal adjusts its video coding parameters according to the strategy. Because the server can accurately judge the quality of the network environment of the current video call from the image frame rate data, the audio code rate data or the packet loss rate actually received by the second terminal, the video coding parameters adjusted by the first terminal are adapted to the current network environment and the user experience is improved.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings used in the embodiments are briefly described below. It will be apparent to those skilled in the art that other drawings can be derived from these drawings without creative effort.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application;
Fig. 2 is a block diagram of a video call system implementing the method of the present application;
Fig. 3 is a flowchart of a method for dynamically adjusting the definition mode in a video call according to an exemplary embodiment of the present application;
Fig. 4 is a flowchart of a refined method for determining a video coding parameter modification policy of the first terminal in an exemplary embodiment of the present application;
Fig. 5 is a flowchart of another method for dynamically adjusting the definition mode in a video call according to an exemplary embodiment of the present application;
Fig. 6 is a flowchart of a refined method by which the first terminal adjusts video coding parameters according to a video coding parameter modification policy in an exemplary embodiment of the present application;
Fig. 7 is a flowchart of yet another method for dynamically adjusting the definition mode in a video call according to an exemplary embodiment of the present application;
Fig. 8 is a schematic diagram of an application scenario according to an embodiment of the present application;
Fig. 9 is a flowchart of yet another method for dynamically adjusting the definition mode in a video call according to an exemplary embodiment of the present application;
Fig. 10 is a block diagram of an apparatus for dynamically adjusting the definition mode in a video call according to an exemplary embodiment of the present application;
Fig. 11 is a block diagram of an apparatus for dynamically adjusting the definition mode in a video call according to an exemplary embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments derived by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
With the popularization of a series of intelligent terminal devices such as smart phones, smart televisions and tablet computers, and the development of network technologies such as wireless networking, more and more mobile users access the internet through intelligent terminal devices. Intelligent terminal devices are installed with clients that provide rich functions; among them, the video call function is a common one and can be realized through clients and servers such as WeChat and QQ.
In the present application, a block diagram of the video call system implementing the method of the present invention is shown in fig. 2. A client application that allows two users to communicate with each other, for example over video based on the WebRTC protocol, is pre-installed or downloaded on the first terminal 210 and the second terminal 220. In addition, the first terminal 210 and the second terminal 220 are configured with codecs, such as H.264 encoders, and other necessary functional modules (not shown). After the video call connection between the first terminal 210 and the second terminal 220 is established successfully, the first terminal 210 and the second terminal 220 each capture the pictures forming a video sequence, perform video coding according to the default video coding parameters of the codec, and upload the encoded stream to the server 230. The video coding parameters comprise the image frame rate, the code rate, the image resolution and the like.
It should be noted that one frame is a still picture, and consecutive frames form a moving video. A video sequence consists of successive frame images, and the frame rate is the number of image frames contained per second, usually expressed in fps. The higher the frame rate, the more frame images per second, the smoother and more vivid the animation, and the smoother the video sequence perceived by human eyes. However, raising the frame rate greatly increases the amount of video data to be transmitted, and once the network quality fluctuates, frames are frequently lost and the video picture freezes.
The code rate, also called the code stream, is the amount of data transmitted per unit time when a video file is transmitted, and it is an important means of controlling picture quality in video coding. At the same resolution, the larger the code rate of a video file, the smaller the compression ratio and the higher the picture quality. For an audio file, the larger the code rate, the smaller the compression ratio, the smaller the loss of sound quality, and the closer the result is to the sound quality of the source.
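As a quick illustration of why the code rate matters for transmission, the amount of data a stream places on the link grows linearly with the code rate. The figures below are illustrative assumptions and are not taken from the patent:

```python
# Illustrative arithmetic only -- values are assumptions, not from the patent.
# The code rate (bit rate) fixes how much data the encoded stream produces
# per unit time, and therefore how much the link must carry.
bitrate_bps = 1_500_000                 # e.g. a 1.5 Mbit/s video code rate
duration_s = 60                         # one minute of call time
data_megabytes = bitrate_bps * duration_s / 8 / 1_000_000
print(data_megabytes)                   # 11.25 MB carried by this stream alone
```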
During the process of the video call between the first terminal and the second terminal, the definition mode of the video call is dynamically adjusted by the method shown in fig. 3.
In fact, since the data stream transmission between the first terminal and the second terminal is bidirectional, both the first terminal and the second terminal are data transmitting terminals and data receiving terminals. In the embodiments of the present application, the technical solution of the present application is described in terms of data transmission from a first terminal to a second terminal.
In step S110, the server forwards the remote video data stream uploaded by the first terminal to the second terminal through the communication network;
the first terminal shoots an image, performs video coding according to the default video coding parameters of the equipment, for example, performs video coding according to the video coding parameters corresponding to the highest definition, and uploads the data to the server. The server forwards the remote video data stream from the first terminal to the second terminal, and the second terminal receives the remote video data stream forwarded by the server.
It should be noted that, in data transmission based on a communication network (wired or wireless), network bandwidth and signal strength are important factors affecting transmission quality. Because network quality fluctuates dynamically, packet loss is difficult to avoid. Therefore, the image frame rate data and audio code rate data in the remote video data stream actually received by the second terminal are not necessarily equal to those output by the encoder of the first terminal. At the same time, the image frame rate data and audio code rate data actually received by the second terminal, as well as the packet loss rate of the link from the first terminal to the second terminal, are directly related to the quality of the current network environment; for example, if the network quality is poor, frame loss is serious and the video call frequently freezes.
In step S120, the server receives the image frame rate data and audio code rate data in the remote video data stream returned by the second terminal, and the packet loss rate of the link from the first terminal to the second terminal;
In this embodiment, the second terminal obtains the image frame rate data and audio code rate data in the remote video data stream, together with the packet loss rate of the link from the first terminal to the second terminal, and returns them to the server. The server judges the current network quality according to the image frame rate data, the audio code rate data or the packet loss rate, and on this basis dynamically adjusts the video call definition.
It should be noted that, in this embodiment, the image frame rate data, the audio code rate data and the packet loss rate are sent by the second terminal when it determines, according to the image frame rate data and the audio code rate data in the remote video data stream, that the connection state of the current call is the call state.
It should be noted that the connection state of the current call includes a call state and an interruption state. And the second terminal judges the connection state of the current call according to the actually received image frame rate data and audio code rate data.
Specifically, the second terminal may determine the connection state of the current call according to whether the image frame rate data is greater than or equal to a preset frame rate threshold, or according to whether the audio code rate data is greater than or equal to a preset code rate threshold. The connection state of the current call can also be judged according to whether the image frame rate data is lower than a preset frame rate threshold, and whether the duration time lower than the preset frame rate threshold is greater than a preset first time threshold, or according to whether the audio code rate data is lower than a preset code rate threshold, and whether the duration time lower than the preset code rate threshold is greater than a preset second time threshold.
In step S130, the server determines a video coding parameter modification policy of the first terminal according to the image frame rate data, the audio rate data, or the packet loss rate, and sends the video coding parameter modification policy to the first terminal, so that the first terminal adjusts the video coding parameter according to the video coding parameter modification policy.
The video coding parameter modification strategy comprises the steps of adjusting up video coding parameters, adjusting down video coding parameters and keeping the current coding parameters unchanged.
It should be noted that, the first terminal and the second terminal preset several sets of selectable video coding parameters and definition modes corresponding to each set of video coding parameters according to their own CPU processing capabilities and coding capabilities of the codecs, so that when one end device adjusts the video coding parameters of the codecs, the definition modes of the opposite end are correspondingly switched, and when one end device switches the definition modes, the video coding parameters of the opposite end codec are correspondingly adjusted.
Illustratively, several sets of selectable video coding parameters and their corresponding definition modes are as follows.
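The patent's actual table is not reproduced in this text. The following sketch shows what such a preset mapping could look like; the mode names, frame rates, code rates and resolutions are purely illustrative assumptions, since the actual presets depend on each terminal's CPU processing capability and codec:

```python
# Hypothetical definition-mode presets (illustrative values only).
# Each level maps to one set of video coding parameters:
# image frame rate (fps), code rate (kbit/s) and image resolution.
DEFINITION_MODE_PRESETS = [
    # level, mode name,           fps, kbit/s, resolution
    (0, "smooth",                  15,   400,  (640, 360)),
    (1, "standard definition",     20,   800,  (960, 540)),
    (2, "high definition",         25,  1500,  (1280, 720)),
    (3, "ultra high definition",   30,  3000,  (1920, 1080)),
]

def coding_parameters(level: int) -> dict:
    """Return the video coding parameters preset for a definition-mode level."""
    _, name, fps, kbps, resolution = DEFINITION_MODE_PRESETS[level]
    return {"mode": name, "frame_rate": fps,
            "bitrate_kbps": kbps, "resolution": resolution}
```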
Based on this, referring to fig. 4, in some preferred embodiments, the step of determining, by the server, a video coding parameter modification policy of the first terminal according to the image frame rate data, the audio rate data, or the packet loss rate specifically includes:
step S131, judging whether the image frame rate data, the audio code rate data or the packet loss rate meet an up-regulation definition condition or a down-regulation definition condition preset in a current definition mode of the second terminal;
it should be noted that, because the video coding parameter of the first terminal corresponds to the definition mode of the second terminal, that is, when the current definition mode of the second terminal is higher, the first terminal performs video coding with the higher video coding parameter, at this time, because the size of the coded video file is larger, packet loss is more likely to occur in the same network state. When the current definition mode of the second terminal is lower, the first terminal performs video coding with lower video coding parameters, and at this time, because the volume of the coded video file is smaller, the packet loss is not easy to occur in the same network state.
For the current definition mode of the second terminal, assume that the first terminal performs video coding at frame rate A1, code rate B1 and resolution C1. If the network quality is good at this time, the difference between the image frame rate data A2 and audio code rate data B2 actually received by the second terminal and the frame rate A1 and code rate B1 output by the encoder of the first terminal is small, and packet loss is not serious; in this case the method of the present application tries to further up-regulate the definition mode, so as to improve the utilization of network resources and device resources and improve the user's video call experience. On the contrary, if the network quality is poor at this time, the difference between the image frame rate data A2 and audio code rate data B2 actually received by the second terminal and the frame rate A1 and code rate B1 output by the encoder of the first terminal is large, and packet loss is serious; in this case the method of the present application tries to down-regulate the definition mode to ensure the fluency of the video call.
Based on this, in step S131, an up-regulation definition condition and a down-regulation definition condition are preset for the definition mode of each level. The up-regulation definition condition and the down-regulation definition condition may be determined by setting an up-regulation threshold and a down-regulation threshold of the packet loss rate, of the image frame rate, or of the audio code rate.
For example, for the high definition mode, the condition that the packet loss rate is less than 10% (up-regulation threshold) is the up-regulation definition condition, the condition that the packet loss rate is greater than 30% (down-regulation threshold) is the down-regulation definition condition, and when the packet loss rate is between 10% and 30%, the current definition mode is kept unchanged.
Alternatively, the up-regulation or down-regulation definition condition may be determined by the ratio or difference between the image frame rate data actually received by the second terminal and the image frame rate data output by the codec of the first terminal. For example, an average of fewer than 5 lost frames per second (up-regulation threshold) is the up-regulation definition condition, and an average of more than 15 lost frames per second (down-regulation threshold) is the down-regulation definition condition.
Alternatively, the up-regulation or down-regulation definition condition may be determined by the ratio or difference between the audio code rate data actually received by the second terminal and the audio code rate data output by the codec of the first terminal. For example, an average code rate decrease of less than 10% per second (up-regulation threshold) is the up-regulation definition condition, and an average code rate decrease of more than 40% per second (down-regulation threshold) is the down-regulation definition condition.
It should be noted that the specific numerical values in the above examples are only used to explain the technical solution of the present application and do not limit the scope of the claims of the present application.
In step S132, if the image frame rate data, the audio code rate data or the packet loss rate satisfies the up-regulation definition condition, the video coding parameters of the first terminal are up-regulated;
In step S133, if the image frame rate data, the audio code rate data or the packet loss rate satisfies the down-regulation definition condition, the video coding parameters of the first terminal are down-regulated;
In step S134, if the image frame rate data, the audio code rate data or the packet loss rate satisfies neither the up-regulation definition condition nor the down-regulation definition condition, the current video coding parameters of the first terminal are kept unchanged.
In the present application, the definition mode is preferably adjusted step by step. For example, if the image frame rate data, the audio code rate data or the packet loss rate satisfies the up-regulation definition condition and the current definition mode is standard definition, the definition is preferably up-regulated to high definition, and the video coding parameters are up-regulated to the video coding parameters corresponding to the high-definition mode.
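For illustration, a minimal sketch of the decision made in steps S131 to S134 is given below. It uses only the packet loss rate with the example thresholds mentioned above (10% and 30%); analogous checks can be made on the received image frame rate or audio code rate, and all names and values are assumptions rather than the patent's exact implementation:

```python
# Minimal sketch of the server-side decision in steps S131-S134 (assumed names).
UP_REGULATE, DOWN_REGULATE, KEEP = "up-regulate", "down-regulate", "keep"

def modification_policy(packet_loss_rate: float,
                        up_threshold: float = 0.10,
                        down_threshold: float = 0.30) -> str:
    """Map the packet loss rate reported by the second terminal to a video
    coding parameter modification strategy for the first terminal."""
    if packet_loss_rate < up_threshold:
        return UP_REGULATE      # network is good: try the next higher definition mode
    if packet_loss_rate > down_threshold:
        return DOWN_REGULATE    # network is poor: step the definition mode down
    return KEEP                 # in between: keep the current coding parameters
```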
In the method for dynamically adjusting the definition mode in a video call provided by this embodiment, the server forwards the remote video data stream uploaded by the first terminal to the second terminal through the communication network, and receives the image frame rate data and audio code rate data in the remote video data stream returned by the second terminal together with the packet loss rate of the link from the first terminal to the second terminal. The server then determines a video coding parameter modification strategy for the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate and sends the modification strategy to the first terminal, so that the first terminal adjusts its video coding parameters according to the strategy. Because the server can accurately judge the quality of the network environment of the video call from the image frame rate data, audio code rate data and packet loss rate actually received by the second terminal, the video coding parameters adjusted by the first terminal are adapted to the current network environment: if the network quality is poor, the definition mode is lowered to prevent the video call from freezing; if the network quality is good, the definition mode is raised to optimize the video picture and improve the user experience.
Referring to fig. 5, in another embodiment of the present application, a method for dynamically adjusting a sharpness mode in a video call, applied to a first terminal, includes:
step S210, the first terminal sends a remote video data stream to the second terminal through the server;
step S220, receiving a video coding parameter modification strategy sent by the server and determined by the server according to the image frame rate data, the audio code rate data or the packet loss rate of the link from the first terminal to the second terminal, in the remote video data stream returned by the second terminal;
and step S230, adjusting the video coding parameters according to the video coding parameter modification strategy so as to adjust the definition mode of the second terminal.
In some preferred embodiments, the first terminal receives a video coding parameter modification policy issued by the server, and adjusts its video coding parameter according to the video coding parameter modification policy and a current definition mode corresponding to the current video coding parameter, that is, the current definition mode of the second terminal.
Specifically referring to fig. 6, in step S231, the first terminal determines whether the current definition mode is the definition mode of the lowest level and the video coding parameter modification policy is the down-regulation video coding parameter, or whether the current definition mode is the definition mode of the highest level and the video coding parameter modification policy is the up-regulation video coding parameter;
it is understood that the lowest level of sharpness mode determines the adjustable upper limit of the video coding parameters of the device, and the highest level of sharpness mode determines the adjustable lower limit of the video coding parameters of the device. Therefore, if the current definition mode level is the lowest, the current video coding parameter is the lowest, and the current video coding parameter cannot be adjusted downwards; if the current sharpness mode level is the highest, the current video coding parameter is the highest, and the current video coding parameter cannot be adjusted up.
Therefore, in step S232, if the current definition mode is the definition mode of the lowest level and the video coding parameter modification policy is the down-regulation video coding parameter, or the current definition mode is the definition mode of the highest level and the video coding parameter modification policy is the up-regulation video coding parameter, the first terminal keeps the current video coding parameter unchanged;
otherwise, in step S233, if the current definition mode is not the lowest level definition mode and not the highest level definition mode, the first terminal adjusts the video coding parameter according to the video coding parameter modification policy.
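A minimal sketch of the first terminal's handling in steps S231 to S233 might look as follows, with the clamping at the lowest and highest definition modes and the step-by-step adjustment described above; the function name and level numbering are assumptions:

```python
# Sketch of steps S231-S233: apply the server's strategy one level at a time,
# keeping the parameters unchanged at the lowest/highest definition mode.
LOWEST_LEVEL, HIGHEST_LEVEL = 0, 3      # assumed range of preset definition modes

def apply_policy(current_level: int, policy: str) -> int:
    """Return the definition-mode level the first terminal should encode at."""
    if policy == "up-regulate" and current_level < HIGHEST_LEVEL:
        return current_level + 1        # up-regulate video coding parameters
    if policy == "down-regulate" and current_level > LOWEST_LEVEL:
        return current_level - 1        # down-regulate video coding parameters
    return current_level                # at a limit, or policy is "keep"
```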
In the method for dynamically adjusting the definition mode in the video call provided by this embodiment, the first terminal adjusts the video coding parameter of the first terminal according to the video coding parameter modification policy issued by the server, so that the first terminal is adapted to the dynamically fluctuating network quality, and the video call quality and the user experience are improved.
Referring to fig. 7, in still other embodiments of the present application, a method for dynamically adjusting a sharpness mode in a video call applied to a second terminal includes:
step S310, the second terminal receives remote video data stream information uploaded by the first terminal and forwarded by the server;
step S320, obtaining image frame rate data and audio code rate data in the remote video data stream information;
step S330, judging the connection state of the current call according to the image frame rate data and the audio code rate data;
it should be noted that the connection state of the current call includes a call state and an interruption state. And the second terminal judges the connection state of the current call according to the actually received image frame rate data and audio code rate data.
In this embodiment, an implementation method of step S330 is to determine a connection state of the current call according to whether the image frame rate data is greater than or equal to a preset frame rate threshold, or according to whether the audio code rate data is greater than or equal to a preset code rate threshold.
Specifically, the state information of the camera of the first terminal is first acquired; the state information includes an on state and an off state. If the camera of the first terminal is on and the image frame rate data is greater than or equal to the preset frame rate threshold, the connection state of the current call is the call state; otherwise it is the interrupted state. Alternatively, if the camera of the first terminal is off (voice only) and the audio code rate data is greater than or equal to the preset code rate threshold, the connection state of the current call is the call state; otherwise it is the interrupted state.
In this embodiment, another implementation method of step S330 is to determine the connection state of the current call according to whether the image frame rate data is lower than a preset frame rate threshold, and whether the duration time lower than the preset frame rate threshold is greater than a preset first time threshold, or according to whether the audio code rate data is lower than a preset code rate threshold, and whether the duration time lower than the preset code rate threshold is greater than a preset second time threshold.
Specifically, the state information of the camera of the first terminal is first acquired; the state information includes an on state and an off state. If the camera of the first terminal is on, the image frame rate data is lower than the preset frame rate threshold, and the duration for which it remains below the preset frame rate threshold is greater than the preset first time threshold, the connection state of the current call is the interrupted state; otherwise it is the call state. Alternatively, if the camera of the first terminal is off, the audio code rate data is lower than the preset code rate threshold, and the duration for which it remains below the preset code rate threshold is greater than the preset second time threshold, the connection state of the current call is the interrupted state; otherwise it is the call state.
For example, if the camera of the first terminal is on, the image frame rate data is 0 (below the preset frame rate threshold, which is greater than 0), and this lasts for more than 10 s without recovery (10 s being the preset first time threshold), it is determined that the connection state of the current call is the interrupted state. Likewise, if the camera of the first terminal is off, the audio code rate data is 0 (below the preset code rate threshold, which is greater than 0), and this lasts for more than 10 s without recovery (10 s being the preset second time threshold), it is determined that the connection state of the current call is the interrupted state.
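A minimal sketch of this second variant of step S330 is shown below, combining the camera state, the thresholds, and the 10 s duration from the example above; the names and default values are assumptions:

```python
# Sketch of the call-state judgment (second variant of step S330, assumed names).
def connection_state(camera_on: bool,
                     frame_rate_fps: float,
                     audio_rate_kbps: float,
                     seconds_below_threshold: float,
                     frame_rate_threshold: float = 1.0,
                     audio_rate_threshold: float = 1.0,
                     time_threshold_s: float = 10.0) -> str:
    """Return 'call' or 'interrupted' based on the data actually received."""
    if camera_on:
        below = frame_rate_fps < frame_rate_threshold     # e.g. frame rate stuck at 0
    else:                                                 # voice-only call
        below = audio_rate_kbps < audio_rate_threshold    # e.g. code rate stuck at 0
    if below and seconds_below_threshold > time_threshold_s:
        return "interrupted"
    return "call"
```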
Step S340, if the current connection state of the call is a call state, sending the image frame rate data, the audio code rate data, and the packet loss rate of the link from the first terminal to the second terminal to the server.
In some preferred embodiments, the method of this embodiment further includes:
and if the connection state of the current call is an interrupted state, outputting first prompt information, wherein the first prompt information is used for prompting a user that the current call is interrupted. For example, referring to fig. 8, the content of the first prompt message is "the current network is not good and the call is interrupted". When the call cannot be continued due to poor network, the automatic hang-up can reduce the trouble of the user and reduce the operation of the user.
Referring to fig. 9, in another preferred embodiment of the present application, the method for dynamically adjusting the sharpness mode in the video call further includes:
step S410, the second terminal judges whether the packet loss rate is larger than a preset packet loss rate upper limit value;
step S420, if the packet loss ratio is greater than the preset upper limit value of the packet loss ratio, outputting a second prompt message, where the second prompt message is used to prompt the user of the current network state.
For example, the upper limit value of the packet loss rate is preset to be 30%; the second terminal obtains the packet loss rate of a link from the first terminal to the second terminal; judging whether the packet loss rate is more than 30%; if the network state is larger than the preset network state, outputting second prompt information to prompt the user of the current network state; the content of the second prompt message may be "the current network state is not good".
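As a small sketch of steps S410 and S420 (the threshold and message text follow the example above; the function name is an assumption):

```python
# Sketch of steps S410-S420: warn the user when packet loss exceeds the preset upper limit.
PACKET_LOSS_UPPER_LIMIT = 0.30          # 30%, as in the example above

def second_prompt(packet_loss_rate: float) -> str | None:
    """Return the second prompt information, or None if no prompt is needed."""
    if packet_loss_rate > PACKET_LOSS_UPPER_LIMIT:
        return "The current network state is not good"
    return None
```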
According to the method for dynamically adjusting the sharpness mode in the video call, an embodiment of the present application further provides a server, including a memory and a processor, where the processor is configured to:
forwarding the remote video data stream uploaded by the first terminal to the second terminal through a communication network;
receiving image frame rate data and audio code rate data in the remote video data stream returned by the second terminal, and the packet loss rate of a link from the first terminal to the second terminal;
and determining a video coding parameter modification strategy of the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate, and sending the video coding parameter modification strategy to the first terminal, so that the first terminal adjusts the video coding parameters according to the video coding parameter modification strategy.
The image frame rate data, the audio code rate data and the packet loss rate are sent by the second terminal when it determines, according to the image frame rate data and the audio code rate data in the remote video data stream, that the connection state of the current call is the call state.
Preferably, the processor is configured to determine the video coding parameter modification policy of the first terminal according to the following steps:
judging whether the image frame rate data, the audio code rate data or the packet loss rate meet an up-regulation definition condition or a down-regulation definition condition preset in a current definition mode of a second terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meet the up-regulation definition condition, up-regulating video coding parameters of a first terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meet the down-regulation definition condition, down-regulating video coding parameters of a first terminal;
and if the image frame rate data, the audio code rate data or the packet loss rate does not meet the up-regulation definition condition and the down-regulation definition condition, keeping the current video coding parameters of the first terminal unchanged.
The server provided in this embodiment forwards the remote video data stream uploaded by the first terminal to the second terminal through the communication network, and receives the image frame rate data and audio code rate data in the remote video data stream returned by the second terminal together with the packet loss rate of the link from the first terminal to the second terminal. It then determines a video coding parameter modification strategy for the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate, and sends the modification strategy to the first terminal, so that the first terminal adjusts its video coding parameters according to the strategy. Because the quality of the network environment of the video call can be judged accurately from the image frame rate data, audio code rate data and packet loss rate actually received by the second terminal, the video coding parameters adjusted by the first terminal are adapted to the current network environment: if the network quality is poor, the definition mode is lowered to prevent the video call from freezing; if the network quality is good, the definition mode is raised to optimize the video picture and improve the user experience.
According to the method for dynamically adjusting the definition mode in the video call, the embodiment of the application also provides a device for dynamically adjusting the definition mode in the video call, and the device is arranged in the first terminal. Referring to fig. 10, the apparatus includes:
a data transmitting unit U110, configured to transmit a remote video data stream to the second terminal through the server;
a policy receiving unit U120, configured to receive a video coding parameter modification policy sent by the server, the policy being determined by the server according to the image frame rate data and audio code rate data in the remote video data stream returned by the second terminal, or the packet loss rate of the link from the first terminal to the second terminal;
and a parameter adjusting unit U130, configured to adjust the video coding parameter according to the video coding parameter modification policy, so as to adjust the sharpness mode of the second terminal.
Preferably, the parameter adjusting unit U130 is specifically configured to adjust the video coding parameter according to the video coding parameter modification policy and a current definition mode corresponding to the current video coding parameter.
Preferably, the parameter adjusting unit U130 is specifically configured to, if the current definition mode is the definition mode at the lowest level and the video coding parameter modification policy is the down-regulation video coding parameter, or if the current definition mode is the definition mode at the highest level and the video coding parameter modification policy is the up-regulation video coding parameter, the first terminal keeps the current video coding parameter unchanged;
and if the current definition mode is not the definition mode of the lowest level and is not the definition mode of the highest level, the first terminal adjusts the video coding parameters according to the video coding parameter modification strategy.
According to the method for dynamically adjusting the definition mode in the video call, the embodiment of the application also provides a device for dynamically adjusting the definition mode in the video call, and the device is arranged in the second terminal. Referring to fig. 11, the apparatus includes:
the data receiving unit U210 is configured to receive remote video data stream information uploaded by the first terminal and forwarded by the server;
an obtaining unit U220, configured to obtain image frame rate data and audio code rate data in the remote video data stream information;
the judging unit U230 is configured to judge a connection state of a current call according to the image frame rate data and the audio code rate data;
a sending unit U240, configured to send, if the current connection state of the call is a call state, the image frame rate data, the audio rate data, and a packet loss rate of a link from the first terminal to the second terminal to the server.
Preferably, the determining unit U230 is specifically configured to determine the connection state of the current call according to whether the image frame rate data is greater than or equal to a preset frame rate threshold, or according to whether the audio code rate data is greater than or equal to a preset code rate threshold.
Preferably, the determining unit U230 is specifically configured to determine the connection state of the current call according to whether the image frame rate data is lower than a preset frame rate threshold, and whether the duration time lower than the preset frame rate threshold is greater than a preset first time threshold, or according to whether the audio code rate data is lower than a preset code rate threshold, and whether the duration time lower than the preset code rate threshold is greater than a preset second time threshold.
According to the method and the server for dynamically adjusting the definition mode in a video call provided by the application, a video coding parameter modification strategy for the first terminal is determined according to the image frame rate data and audio code rate data in the remote video data stream uploaded by the first terminal and actually received by the second terminal, or according to the packet loss rate of the link from the first terminal to the second terminal, and the modification strategy is sent to the first terminal, so that the first terminal adjusts its video coding parameters according to the strategy. In this way the video coding parameters adjusted by the first terminal are adapted to the current network environment, the utilization of network resources and device resources is improved, and the user experience is improved.
In a specific implementation, the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, may perform some or all of the steps in each embodiment of the method for dynamically adjusting the definition mode in a video call provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a random access memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts of the various embodiments in this specification may be referred to each other. In particular, since the server and apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant points, refer to the description of the method embodiments.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.
Claims (9)
1. A method for dynamically adjusting sharpness mode in a video call, the method comprising:
the server forwards the remote video data stream uploaded by the first terminal to the second terminal through a communication network;
receiving image frame rate data and audio code rate data in the remote video data stream returned by the second terminal, and the packet loss rate of a link from the first terminal to the second terminal;
determining a video coding parameter modification strategy of a first terminal according to the image frame rate data, the audio code rate data or the packet loss rate, and sending the video coding parameter modification strategy to the first terminal so that the first terminal adjusts video coding parameters according to the video coding parameter modification strategy;
the step of determining a video coding parameter modification strategy of the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate includes:
judging whether the image frame rate data, the audio code rate data or the packet loss rate meet an up-regulation definition condition or a down-regulation definition condition preset by a current definition mode of a second terminal, wherein the up-regulation definition condition and the down-regulation definition condition preset by different definition modes are different;
if the image frame rate data, the audio code rate data or the packet loss rate meet the up-regulation definition condition, up-regulating video coding parameters of a first terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meet the down-regulation definition condition, down-regulating video coding parameters of a first terminal;
and if the image frame rate data, the audio code rate data or the packet loss rate does not meet the up-regulation definition condition and the down-regulation definition condition, keeping the current video coding parameters of the first terminal unchanged.
2. The method of claim 1, wherein the image frame rate data, the audio code rate data and the packet loss rate are transmitted by the second terminal when the connection state of the current call is determined to be the call state according to the image frame rate data and the audio code rate data in the remote video data stream.
3. A method for dynamically adjusting sharpness mode in a video call, the method comprising:
the first terminal sends a remote video data stream to the second terminal through the server;
receiving a video coding parameter modification strategy sent by the server, the strategy being determined by the server according to image frame rate data, audio code rate data or a packet loss rate of a link from the first terminal to the second terminal, wherein the image frame rate data and the audio code rate data are returned by the second terminal;
adjusting video coding parameters according to the video coding parameter modification strategy so as to adjust the definition mode of the second terminal;
wherein the video coding parameter modification strategy is determined by the server from the image frame rate data, the audio code rate data or the packet loss rate according to the following steps:
judging whether the image frame rate data, the audio code rate data or the packet loss rate meets an up-regulation definition condition or a down-regulation definition condition preset for a current definition mode of the second terminal, wherein the up-regulation definition condition and the down-regulation definition condition preset for different definition modes are different;
if the image frame rate data, the audio code rate data or the packet loss rate meets the up-regulation definition condition, up-regulating the video coding parameters of the first terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meets the down-regulation definition condition, down-regulating the video coding parameters of the first terminal;
and if the image frame rate data, the audio code rate data or the packet loss rate meets neither the up-regulation definition condition nor the down-regulation definition condition, keeping the current video coding parameters of the first terminal unchanged.
4. The method of claim 3, wherein the step of adjusting the video coding parameters according to the video coding parameter modification strategy comprises: adjusting the video coding parameters according to the video coding parameter modification strategy and the current definition mode corresponding to the current video coding parameters.
5. The method of claim 4, wherein the step of adjusting the video coding parameters according to the video coding parameter modification strategy and the current definition mode corresponding to the current video coding parameters comprises:
if the current definition mode is the definition mode with the lowest level and the video coding parameter modification strategy is the down-regulation video coding parameter, or the current definition mode is the definition mode with the highest level and the video coding parameter modification strategy is the up-regulation video coding parameter, keeping the current video coding parameter unchanged;
and if the current definition mode is not the definition mode of the lowest level and is not the definition mode of the highest level, adjusting the video coding parameters according to the video coding parameter modification strategy.
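Claims 4 and 5 amount to a clamping rule on the first terminal: a step up or down is only taken when the current definition mode is not already at the corresponding end of the ladder. The three-level mode ladder below is an assumed example; the patent only requires that a lowest-level and a highest-level definition mode exist.

```python
# Sketch of applying the strategy on the first terminal (claims 4-5). The
# three-level mode ladder is an assumption; the patent only requires that a
# lowest-level and a highest-level definition mode exist.
MODES = ["smooth", "standard", "high"]  # ordered from lowest to highest definition level

def apply_strategy(current_mode: str, strategy: str) -> str:
    """Return the definition mode after applying 'up', 'down', or 'keep'."""
    idx = MODES.index(current_mode)
    if strategy == "down" and idx > 0:
        return MODES[idx - 1]            # step down one definition level
    if strategy == "up" and idx < len(MODES) - 1:
        return MODES[idx + 1]            # step up one definition level
    return current_mode                  # at an end of the ladder, or 'keep'
```

Keeping the parameters unchanged at either end of the ladder corresponds to the first branch of claim 5.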
6. A method for dynamically adjusting a definition mode in a video call, the method comprising:
the second terminal receives remote video data stream information uploaded by the first terminal and forwarded by the server;
acquiring image frame rate data and audio code rate data in the remote video data stream information;
judging the connection state of the current call according to the image frame rate data and the audio code rate data;
if the current call connection state is a call state, sending the image frame rate data, the audio code rate data and the packet loss rate of a link from the first terminal to the second terminal to a server, wherein the image frame rate data, the audio code rate data and the packet loss rate of the link from the first terminal to the second terminal are used for determining a video coding parameter modification strategy according to the following steps:
the server judges whether the image frame rate data, the audio code rate data or the packet loss rate meets an up-regulation definition condition or a down-regulation definition condition preset for a current definition mode of the second terminal, wherein the up-regulation definition condition and the down-regulation definition condition preset for different definition modes are different;
if the image frame rate data, the audio code rate data or the packet loss rate meets the up-regulation definition condition, up-regulating the video coding parameters of the first terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meets the down-regulation definition condition, down-regulating the video coding parameters of the first terminal;
and if the image frame rate data, the audio code rate data or the packet loss rate meets neither the up-regulation definition condition nor the down-regulation definition condition, keeping the current video coding parameters of the first terminal unchanged.
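The second terminal's side of claim 6 reduces to: measure the incoming stream, check that the call is actually connected, and report the measurements plus the link's packet loss rate to the server. The dataclass fields, thresholds and report payload below are assumptions for illustration; claims 7 and 8 give two concrete ways of performing the call-state check.

```python
# Sketch of the second terminal's reporting step (claim 6); field names,
# thresholds and the report payload are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StreamStats:
    image_frame_rate: float   # frames per second measured from the remote stream
    audio_code_rate: float    # kbit/s measured from the remote stream

FRAME_RATE_THRESHOLD = 10.0   # assumed preset frame rate threshold
CODE_RATE_THRESHOLD = 16.0    # assumed preset code rate threshold

def is_call_state(stats: StreamStats) -> bool:
    # One possible check (in the spirit of claim 7): the call counts as connected
    # when either measurement reaches its preset threshold.
    return (stats.image_frame_rate >= FRAME_RATE_THRESHOLD
            or stats.audio_code_rate >= CODE_RATE_THRESHOLD)

def build_report(stats: StreamStats, packet_loss_rate: float) -> Optional[dict]:
    """Return the payload to send to the server, or None while not in a call."""
    if not is_call_state(stats):
        return None
    return {
        "image_frame_rate": stats.image_frame_rate,
        "audio_code_rate": stats.audio_code_rate,
        "packet_loss_rate": packet_loss_rate,   # loss on the first-to-second-terminal link
    }
```

Claim 2 mirrors this from the server's perspective: the report only arrives while the connection state is the call state.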
7. The method of claim 6, wherein judging the connection state of the current call according to the image frame rate data and the audio code rate data comprises:
judging the connection state of the current call according to whether the image frame rate data is greater than or equal to a preset frame rate threshold, or according to whether the audio code rate data is greater than or equal to a preset code rate threshold.
8. The method of claim 6, wherein judging the connection state of the current call according to the image frame rate data and the audio code rate data comprises:
judging the connection state of the current call according to whether the image frame rate data is lower than a preset frame rate threshold and whether the duration for which it remains below the preset frame rate threshold is greater than a preset first time threshold, or according to whether the audio code rate data is lower than a preset code rate threshold and whether the duration for which it remains below the preset code rate threshold is greater than a preset second time threshold.
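Claim 8's variant of the call-state check adds a persistence requirement: the call is only treated as disconnected when a measurement stays below its threshold for longer than a preset time. The sketch below tracks that duration with monotonic timestamps; all threshold values and the timing bookkeeping are illustrative assumptions.

```python
# Sketch of the duration-based connection-state check in claim 8; all threshold
# values and the timing bookkeeping are assumptions.
import time
from typing import Optional

FRAME_RATE_THRESHOLD = 10.0    # assumed preset frame rate threshold (fps)
CODE_RATE_THRESHOLD = 16.0     # assumed preset code rate threshold (kbit/s)
FIRST_TIME_THRESHOLD = 3.0     # assumed preset first time threshold (seconds)
SECOND_TIME_THRESHOLD = 3.0    # assumed preset second time threshold (seconds)

class CallStateMonitor:
    def __init__(self) -> None:
        self._fps_low_since: Optional[float] = None
        self._kbps_low_since: Optional[float] = None

    def is_call_state(self, frame_rate: float, audio_kbps: float) -> bool:
        now = time.monotonic()
        # Track how long the image frame rate has stayed below its threshold.
        if frame_rate < FRAME_RATE_THRESHOLD:
            if self._fps_low_since is None:
                self._fps_low_since = now
        else:
            self._fps_low_since = None
        # Track how long the audio code rate has stayed below its threshold.
        if audio_kbps < CODE_RATE_THRESHOLD:
            if self._kbps_low_since is None:
                self._kbps_low_since = now
        else:
            self._kbps_low_since = None
        fps_down = (self._fps_low_since is not None
                    and now - self._fps_low_since > FIRST_TIME_THRESHOLD)
        kbps_down = (self._kbps_low_since is not None
                     and now - self._kbps_low_since > SECOND_TIME_THRESHOLD)
        # Only a sustained drop below a threshold is treated as a lost connection.
        return not (fps_down or kbps_down)
```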
9. A server comprising a memory and a processor, wherein the processor is configured to:
forwarding the remote video data stream uploaded by the first terminal to the second terminal through a communication network;
receiving, from the second terminal, image frame rate data and audio code rate data of the remote video data stream, and a packet loss rate of a link from the first terminal to the second terminal;
determining a video coding parameter modification strategy of a first terminal according to the image frame rate data, the audio code rate data or the packet loss rate, and sending the video coding parameter modification strategy to the first terminal, so that the first terminal adjusts video coding parameters according to the video coding parameter modification strategy;
the step of determining a video coding parameter modification strategy of the first terminal according to the image frame rate data, the audio code rate data or the packet loss rate includes:
judging whether the image frame rate data, the audio code rate data or the packet loss rate meets an up-regulation definition condition or a down-regulation definition condition preset for a current definition mode of the second terminal, wherein the up-regulation definition condition and the down-regulation definition condition preset for different definition modes are different;
if the image frame rate data, the audio code rate data or the packet loss rate meets the up-regulation definition condition, up-regulating the video coding parameters of the first terminal;
if the image frame rate data, the audio code rate data or the packet loss rate meets the down-regulation definition condition, down-regulating the video coding parameters of the first terminal;
and if the image frame rate data, the audio code rate data or the packet loss rate meets neither the up-regulation definition condition nor the down-regulation definition condition, keeping the current video coding parameters of the first terminal unchanged.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810784414.6A CN108881780B (en) | 2018-07-17 | 2018-07-17 | Method and server for dynamically adjusting definition mode in video call |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810784414.6A CN108881780B (en) | 2018-07-17 | 2018-07-17 | Method and server for dynamically adjusting definition mode in video call |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108881780A (en) | 2018-11-23 |
CN108881780B (en) | 2021-02-12 |
Family
ID=64302771
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810784414.6A Active CN108881780B (en) | 2018-07-17 | 2018-07-17 | Method and server for dynamically adjusting definition mode in video call |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108881780B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109587511A (en) * | 2018-12-24 | 2019-04-05 | 网易(杭州)网络有限公司 | Multi-device live video streaming method, device, system and storage medium |
CN109756692A (en) * | 2019-01-08 | 2019-05-14 | Oppo广东移动通信有限公司 | Video data processing method, device, mobile device, and computer-readable storage medium |
CN111294546B (en) * | 2019-02-26 | 2022-01-25 | 展讯通信(上海)有限公司 | Resolution adjustment method and device for video call |
CN110087014B (en) * | 2019-04-29 | 2022-04-19 | 努比亚技术有限公司 | Video completion method, terminal and computer-readable storage medium |
CN112118457B (en) * | 2019-06-20 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Live broadcast data processing method and device, readable storage medium and computer equipment |
CN112511782B (en) * | 2019-09-16 | 2024-05-07 | 中兴通讯股份有限公司 | Video conference method, first terminal, MCU, system and storage medium |
CN110996103A (en) * | 2019-12-12 | 2020-04-10 | 杭州叙简科技股份有限公司 | Method for adjusting video coding rate according to network condition |
KR102345503B1 (en) * | 2020-03-03 | 2022-01-03 | 주식회사 하이퍼커넥트 | Mediating Method and Computer Readable Recording Medium |
CN111615006B (en) * | 2020-05-29 | 2022-02-01 | 北京讯众通信技术股份有限公司 | Video transcoding and transmission control system based on network state self-evaluation |
CN111866586B (en) * | 2020-07-28 | 2022-08-02 | 精英数智科技股份有限公司 | Underground video data processing method and device, electronic equipment and storage medium |
CN111934823B (en) * | 2020-08-12 | 2022-08-02 | 中国联合网络通信集团有限公司 | Data transmission method, radio access network equipment and user plane functional entity |
CN112565204A (en) * | 2020-11-19 | 2021-03-26 | 北京融讯科创技术有限公司 | Control method and device for video data transmission and computer readable storage medium |
CN112769524B (en) * | 2021-04-06 | 2021-06-22 | 腾讯科技(深圳)有限公司 | Voice transmission method, device, computer equipment and storage medium |
CN113225617A (en) * | 2021-04-28 | 2021-08-06 | 臻迪科技股份有限公司 | Playing video processing method and device and electronic equipment |
CN114040130A (en) * | 2021-11-05 | 2022-02-11 | 深圳市瑞云科技有限公司 | Method, system and computer-readable storage medium for dynamically switching between mono and stereo audio channels |
CN115086284B (en) * | 2022-05-20 | 2024-06-14 | 阿里巴巴(中国)有限公司 | Streaming media data transmission method of cloud application |
CN116939170B (en) * | 2023-09-15 | 2024-01-02 | 深圳市达瑞电子科技有限公司 | Video monitoring method, video monitoring server and encoder equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105163134A (en) * | 2015-08-03 | 2015-12-16 | 腾讯科技(深圳)有限公司 | Video coding parameter setting method, device and video coding device for live video |
KR20170091913A (en) * | 2016-02-02 | 2017-08-10 | 삼성전자주식회사 | Method and apparatus for providing video service |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100566401C (en) * | 2006-09-12 | 2009-12-02 | 腾讯科技(深圳)有限公司 | Instant communication video quality regulating method and device |
EP2517469A4 (en) * | 2009-12-22 | 2014-01-15 | Vidyo Inc | System and method for interactive synchronized video watching |
CN105100675B (en) * | 2015-09-11 | 2019-07-09 | Tcl集团股份有限公司 | Quality adjustment method and system for terminal video communication |
CN105471865A (en) * | 2015-11-23 | 2016-04-06 | 苏州工业园区云视信息技术有限公司 | Method for dynamic network state adaptation of video stream |
CN106231353B (en) * | 2016-07-22 | 2019-09-27 | 北京小米移动软件有限公司 | VoIP communication method and device |
CN107666366B (en) * | 2016-07-28 | 2020-02-14 | 华为技术有限公司 | Method, device and system for adjusting coding rate |
CN107734282A (en) * | 2017-08-25 | 2018-02-23 | 北京元心科技有限公司 | Video communication method and device |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105163134A (en) * | 2015-08-03 | 2015-12-16 | 腾讯科技(深圳)有限公司 | Video coding parameter setting method, device and video coding device for live video |
KR20170091913A (en) * | 2016-02-02 | 2017-08-10 | 삼성전자주식회사 | Method and apparatus for providing video service |
EP3402186A1 (en) * | 2016-02-02 | 2018-11-14 | Samsung Electronics Co., Ltd. | Method and apparatus for providing image service |
Also Published As
Publication number | Publication date |
---|---|
CN108881780A (en) | 2018-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108881780B (en) | Method and server for dynamically adjusting definition mode in video call | |
US10728594B2 (en) | Method and apparatus for transmitting data of mobile terminal | |
US10819766B2 (en) | Voice encoding and sending method and apparatus | |
CN105025249B (en) | Video monitoring data transfer control method, device and video monitoring system | |
CN111601118B (en) | Live video processing method, system, device and terminal | |
CN110662114B (en) | Video processing method and device, electronic equipment and storage medium | |
US10171815B2 (en) | Coding manner switching method, transmit end, and receive end | |
CN110769296B (en) | Video code rate self-adaptive adjusting mode based on local cache during transmission | |
US20090234919A1 (en) | Method of Transmitting Data in a Communication System | |
CN112929704B (en) | Data transmission method, device, electronic equipment and storage medium | |
CN110996103A (en) | Method for adjusting video coding rate according to network condition | |
EP2272237B1 (en) | Method of transmitting data in a communication system | |
CN111225209A (en) | Video data plug flow method, device, terminal and storage medium | |
CN113242436B (en) | Live broadcast data processing method and device and electronic equipment | |
CN108881931A (en) | Data buffering method and network device | |
CN102348095A (en) | Method for keeping stable transmission of images in mobile equipment video communication | |
JP5428702B2 (en) | Stream communication system, server device, and client device | |
CN113038543A (en) | QoE value adjusting method and device | |
CN112333414A (en) | Video call method and device, electronic equipment and readable storage medium | |
CN116962613A (en) | Data transmission method and device, computer equipment and storage medium | |
EP4117294A1 (en) | Method and device for adjusting bit rate during live streaming | |
CN106231618B (en) | Method and device for sending a codec renegotiation request | |
US7525914B2 (en) | Method for down-speeding in an IP communication network | |
WO2012151854A1 (en) | Video data transmission method and device | |
CN113453081A (en) | Video transmission method, system, related equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |
TR01 | Transfer of patent right | | Effective date of registration: 2024-07-24. Patentee before: JUHAOKAN TECHNOLOGY Co.,Ltd., No. 399 Songling Road, Laoshan District, Qingdao, Shandong, 266061, China. Patentee after: QINGDAO JUKANYUN TECHNOLOGY CO.,LTD., No. 399 Songling Road (A6, 3rd floor), Laoshan District, Qingdao, Shandong, 266061, China. |