CN117729588A - Cache queue adjusting method and electronic equipment
Abstract
The application provides a buffer queue adjustment method and an electronic device, applied to a receiving end. The method comprises: in response to an initiation operation of a first service, obtaining a target service type of the first service and determining a service setting delay corresponding to the target service type; acquiring a current first transmission delay between the receiving end and a transmitting end; acquiring frequency band characteristic information of the receiving end and the transmitting end, determining a jitter coefficient based on the frequency band characteristic information, and determining a jitter buffer delay based on a jitter indication value of the first transmission delay and the jitter coefficient, where the jitter indication value indicates the jitter degree of the first transmission delay; and adjusting the length of a buffer queue based on the service setting delay, the first transmission delay, the jitter buffer delay, and the packet sending interval at which the transmitting end sends data packets. The buffer queue is used to buffer the data packets of the first service, and the length of the buffer queue equals the number of data packets that the buffer queue can hold. With this scheme, the length of the buffer queue can be dynamically adjusted according to the network jitter degree and the service setting.
Description
Technical Field
The present disclosure relates to the field of network transmission technologies, and in particular, to a method for adjusting a buffer queue and an electronic device.
Background
During data transmission, delay jitter may occur. At present, a buffer queue mechanism is generally adopted to smooth delay jitter: a steady packet delivery period is maintained by pre-storing data packets in a buffer queue.
However, in this scheme, the length of the buffer queue is relatively fixed. If the network environment or the service type changes, a buffer queue that is too short cannot smooth the delay jitter, while one that is too long wastes performance.
Disclosure of Invention
The embodiment of the application provides a buffer queue adjustment method and an electronic device, which can adjust the length of a buffer queue jointly according to the network jitter degree and the service setting, smoothing delay jitter without causing performance waste.
In order to achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
in a first aspect, a buffer queue adjustment method is provided, which can be applied to a receiving end. The receiving end establishes a wireless connection with a transmitting end and is configured to receive, buffer, and then process data packets of a first service from the transmitting end so as to execute the first service. The method includes: in response to an initiation operation of the first service, obtaining a target service type of the first service and determining a service setting delay corresponding to the target service type, where the service setting delay is the time that is set, for the first service, for a corresponding data packet to go from the transmitting end to the receiving end; acquiring a current first transmission delay between the receiving end and the transmitting end, where the first transmission delay indicates the time taken for a data packet of the first service sent by the transmitting end to be received at the receiving end; determining a jitter buffer delay based on a jitter indication value of the first transmission delay, where the jitter indication value indicates the jitter degree of the first transmission delay, and the jitter buffer delay indicates the buffering time needed to resist jitter when data packets are sent from the transmitting end to the receiving end under that jitter degree; and adjusting the length K of a buffer queue based on the service setting delay, the first transmission delay, the jitter buffer delay, and the packet sending interval at which the transmitting end sends data packets, where the buffer queue is used to buffer the data packets of the first service, and the length K of the buffer queue equals the number of data packets that the buffer queue can hold.
By adopting this technical scheme, the specific network jitter degree can be detected by acquiring the transmission delay between the receiving end and the transmitting end. The buffer depth of the buffer queue is then determined jointly from the service setting delay, the transmission delay, and the network jitter degree, and the packet delivery policy is finally determined from the adjusted buffer queue depth.
In a possible implementation manner of the first aspect, the acquiring the current first transmission delay between the receiving end and the transmitting end includes: after responding to the initiation operation of the first service, sending a transmission frame to the transmitting end at intervals of a first preset duration, where the transmission frame carries a first waiting time, namely the time between the moment at which the receiving end received the first data packet of the first service within the first preset duration and the moment at which the transmission frame is sent; receiving a transmission acknowledgement frame from the transmitting end, where the transmission acknowledgement frame carries a first round trip delay, the first round trip delay is the difference between a second waiting time of the transmitting end and the first waiting time and equals the sum of the first transmission delay and a second transmission delay of the transmission frame, and the second waiting time is the time between the moment at which the transmitting end sent the first data packet and the moment at which it received the transmission frame from the receiving end; calculating a second round trip delay based on the sending time of the transmission frame and the receiving time of the transmission acknowledgement frame to obtain the second transmission delay of the transmission frame, where the transmission delay of the transmission frame is the same as the transmission delay of the transmission acknowledgement frame; and calculating the difference between the first round trip delay and the second transmission delay of the transmission frame to obtain the first transmission delay. In this way, the receiving end can accurately detect, in real time, the current transmission delay of the data packets of the first service, so the length of the buffer queue can be calculated more accurately and delay jitter can be smoothed better. In addition, no extra data frames need to be added to detect the transmission delay, which reduces power consumption and saves battery power.
In a possible implementation manner of the first aspect, the service setting delay includes a minimum fixed value and a maximum fixed value, and the method includes: if the first transmission delay is smaller than the minimum fixed value, the service setting delay is set to the minimum fixed value; and if the first transmission delay is greater than or equal to the minimum fixed value, the service setting delay is set to the maximum fixed value. In this way, the value of the service setting delay is further determined according to the transmission delay calculated in real time, that is, the service setting delay can be dynamically adjusted according to the transmission delay.
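As an illustration only, the selection rule described in this implementation can be sketched as follows; this is a minimal sketch, and the function and parameter names are assumptions made for readability rather than part of the claimed method.

```python
def service_setting_delay(first_transmission_delay_ms: float,
                          min_fixed_ms: float,
                          max_fixed_ms: float) -> float:
    """Pick the service setting delay from its per-service bounds.

    Below the minimum fixed value the minimum applies; otherwise the
    maximum fixed value applies.
    """
    if first_transmission_delay_ms < min_fixed_ms:
        return min_fixed_ms
    return max_fixed_ms
```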
In a possible implementation manner of the first aspect, before determining the jitter buffer delay based on the jitter indication value of the first transmission delay, the method further includes: determining the jitter indication value based on the average dispersion of the first transmission delay within a second preset duration. The average dispersion σ is calculated as follows:
σ = (|T1 - Tavg| + |T2 - Tavg| + ... + |Tn - Tavg|) / n
In the above formula, n is the number of transmission delay samples that can be calculated within the second preset duration, Ti is the i-th sampled transmission delay, and Tavg is the average of the n transmission delays. The difference between each transmission delay and the average is its dispersion, and the average dispersion σ is the mean of the absolute values of these dispersions, so it reflects the jitter degree of the transmission delay within the preset duration.
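The same calculation can be written as a short sketch; the helper name and the millisecond unit are assumptions, not part of the patent text.

```python
def average_dispersion(delays_ms: list) -> float:
    """Mean absolute deviation (average dispersion sigma) of sampled delays.

    Each sample's dispersion is its difference from the mean; sigma is the
    mean of the absolute dispersions and serves as the jitter indication value.
    """
    n = len(delays_ms)
    mean = sum(delays_ms) / n
    return sum(abs(d - mean) for d in delays_ms) / n
```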
In a possible implementation manner of the first aspect, the determining the jitter buffer delay based on the jitter indication value of the first transmission delay includes: acquiring frequency band characteristic information of the receiving end and the transmitting end, and determining a jitter coefficient based on the frequency band characteristic information, where the frequency band characteristic information indicates the relation between the frequency points and channels of the receiving end and the transmitting end, and includes same-frequency same-channel, different-frequency same-channel, and different-frequency different-channel; and determining the jitter buffer delay based on the product of the jitter indication value and the jitter coefficient.
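For illustration, the sketch below shows how a per-band-relation jitter coefficient could feed this product; the concrete coefficient values are not disclosed in the application and are placeholders only.

```python
# Placeholder coefficients; the patent does not disclose concrete values.
JITTER_COEFFICIENT = {
    "same_frequency_same_channel": 1.0,
    "different_frequency_same_channel": 1.5,
    "different_frequency_different_channel": 2.0,
}

def jitter_buffer_delay(jitter_indication_ms: float, band_relation: str) -> float:
    """Jitter buffer delay = jitter indication value x jitter coefficient."""
    return jitter_indication_ms * JITTER_COEFFICIENT[band_relation]
```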
In a possible implementation manner of the first aspect, the method further includes: if the jitter buffer delay is smaller than the packet sending interval, adjusting the jitter buffer delay to 0. It can be understood that if the calculated jitter buffer delay is smaller than the packet sending interval, the network condition is considered good and the network jitter weak, so jitter buffering need not be introduced. Therefore, when the network state is good, no jitter buffering is introduced and no performance is wasted. The receiving end can then determine the service buffering delay only according to the service requirement, that is, the service setting delay, and adjust the length of the buffer queue accordingly.
In a possible implementation manner of the first aspect, the target service type is further provided with a corresponding jitter buffer delay threshold, and the method includes: if the jitter buffer delay is greater than the jitter buffer delay threshold, the jitter buffer delay is set to the jitter buffer delay threshold. That is, in this scheme, a maximum limit is set for the jitter buffer delay of each service, so as to prevent the calculated jitter buffer delay from becoming too large and exceeding what the service setting allows.
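The two limits from this implementation and the previous one can be combined into a single clamping step, sketched below under the assumption that all values are expressed in the same time unit; the function name is illustrative only.

```python
def clamp_jitter_buffer_delay(jitter_delay: float,
                              packet_interval: float,
                              threshold: float) -> float:
    """Apply both limits to the calculated jitter buffer delay.

    Below the packet sending interval the network is considered stable and
    no jitter buffering is introduced; otherwise the delay is capped at the
    per-service jitter buffer delay threshold.
    """
    if jitter_delay < packet_interval:
        return 0.0
    return min(jitter_delay, threshold)
```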
In a possible implementation manner of the first aspect, the adjusting the length K of the buffer queue based on the service setting delay, the first transmission delay, the jitter buffer delay, and the packet sending interval at which the transmitting end sends data packets includes: calculating the length K of the buffer queue from the service setting delay T1, the first transmission delay T2, the jitter buffer delay T3, and the packet sending interval S according to the formula K = (T1 - T2 + T3)/S. In this way, the scheme can detect the specific network jitter degree from the real-time, accurate first transmission delay of the data packets, and dynamically adjust the length of the buffer queue according to the network jitter degree and the service setting delay.
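A minimal sketch of the formula follows; rounding the result down to a whole number of packets is an assumption, since the patent only gives the ratio.

```python
import math

def buffer_queue_length(t1: float, t2: float, t3: float, s: float) -> int:
    """K = (T1 - T2 + T3) / S: the buffering budget divided by the packet
    sending interval gives the number of packets the queue should hold."""
    k = (t1 - t2 + t3) / s
    return max(0, math.floor(k))  # flooring and non-negativity are assumptions
```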
In a possible implementation manner of the first aspect, after the adjusting the length K of the buffer queue based on the service setting delay, the first transmission delay, the jitter buffer delay, and the packet sending interval at which the transmitting end sends data packets, the method further includes: obtaining the actual number k of data packets currently buffered in the buffer queue, and adjusting the processing policy of the buffered data packets in the buffer queue based on the actual number k and the length K of the buffer queue, until the buffer queue holds K data packets, after which the buffered data packets in the buffer queue are processed according to the packet sending interval. By adjusting the processing policy of the data packets according to the actual number and the length of the buffer queue, the rate at which data packets are processed can be maintained when the delay jitters.
In a possible implementation manner of the first aspect, the adjusting the processing policy of the buffered data packets in the buffer queue based on the actual number k and the length K of the buffer queue includes: if the actual number k is smaller than the length K of the buffer queue, processing the data packets buffered in the buffer queue according to the packet sending interval, recording the data packets lost from the buffer queue, and retransmitting the lost data packets; if the actual number k is greater than the length K of the buffer queue, processing the data packets buffered in the buffer queue according to a target interval, which is smaller than the packet sending interval, or performing packet-loss processing on data packets in the buffer queue; and if the actual number k is 0, processing each data packet immediately once the buffer queue has buffered it. With this processing policy, the buffer queue can quickly return to the normal state, and the receiving end can maintain the rate at which data packets are processed when the delay is prolonged.
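The policy selection can be summarised as below; the returned labels are illustrative names for the behaviours described above, not an API defined by the patent.

```python
def select_processing_policy(actual_k: int, target_k: int) -> str:
    """Map the current queue occupancy k to a recovery policy."""
    if actual_k == 0:
        return "deliver_each_packet_immediately_on_arrival"
    if actual_k < target_k:
        return "deliver_at_packet_interval_and_retransmit_lost_packets"
    if actual_k > target_k:
        return "deliver_at_shorter_target_interval_or_drop_packets"
    return "deliver_at_packet_interval"  # queue is at its target depth K
```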
In a possible implementation manner of the first aspect, the recording the data packets lost from the buffer queue and retransmitting the lost data packets includes: transferring the lost data packets to a retransmission queue; if the sequence number of a data packet to be retransmitted in the retransmission queue is smaller than the sequence number of the data packet currently being processed, not retransmitting that data packet; and if the sequence number of a data packet to be retransmitted in the retransmission queue is greater than the sequence number of the data packet currently being processed, retransmitting that data packet. In this way, not all lost data packets are retransmitted; only data packets whose sequence numbers are greater than that of the data packet currently being processed are retransmitted, which saves resources.
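A sketch of the retransmission filter follows, assuming packets are identified by integer sequence numbers; names are illustrative.

```python
def packets_to_retransmit(retransmission_queue: list, current_seq: int) -> list:
    """Keep only lost packets whose sequence number is greater than that of
    the packet currently being processed; older packets are dropped rather
    than retransmitted, which saves resources."""
    return [seq for seq in retransmission_queue if seq > current_seq]
```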
In a second aspect, there is provided an electronic device comprising: a communication module, a memory, and one or more processors; the communication module, the memory, and the processor are coupled; the memory is configured to store computer program code, the computer program code comprising computer instructions that, when executed by the electronic device, cause the electronic device to perform the cache queue adjustment method described above.
In a third aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the cache queue adjustment method of any one of the first aspects above.
In a fourth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the cache queue adjustment method of any one of the first aspects above.
In a fifth aspect, an apparatus is provided (for example, the apparatus may be a system-on-a-chip), comprising a processor configured to support a first device in implementing the functionality referred to in the first aspect above. In one possible design, the apparatus further includes a memory for holding program instructions and data necessary for the first device. When the apparatus is a chip system, it may consist of a chip, or may include the chip and other discrete devices.
The technical effects of any one of the design manners of the second aspect to the fifth aspect may be referred to the technical effects of the different design manners of the first aspect, and will not be repeated here.
Drawings
FIG. 1 is a schematic diagram of a conventional cache queue solution according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a delay detection provided in an embodiment of the present application;
fig. 3 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic software structure diagram of acquiring a service type of a first service currently performed according to an embodiment of the present application;
fig. 5 is a schematic diagram of calculating transmission delay according to an embodiment of the present application;
fig. 6 is a schematic diagram of a frame structure according to an embodiment of the present application;
FIG. 7 is a schematic diagram of determining a desired cache queue according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of states of a cache queue according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of fast recovery of an abnormal state of a cache queue according to an embodiment of the present application;
fig. 10 is a schematic diagram of a buffer queue packet adjustment policy according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the following terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The user may use different stream-type services under near-field conditions. A stream-type service is a real-time service, such as audio and video playing, carried out in a streaming media manner. Streaming media refers to a media format in which the entire file is not downloaded before playing; instead, media data is output correctly while being downloaded and buffered. A stream-type service is transmitted unidirectionally and involves a transmitting end and a receiving end. The transmitting end transmits the data packets of the service to the receiving end, and the receiving end processes the received data packets, that is, delivers them to the upper layer, so as to execute the service.
Currently, multiple streaming services with different transmission requirements are supported (such as a super mouse, a super call, heterogeneous screen projection, etc.). If these services rely entirely on the air-interface network for transmission, the actual experience of services sensitive to network jitter is poor, for example in co-channel or different-channel scenarios of the mouse service, or when the streaming service suffers external interference.
Delay jitter is an important indicator affecting the quality of service (QoS) of a stream type service. Wherein delay jitter refers to delay variation. The packets leave the sender uniformly at regular intervals, but as they pass through the network, the uniform intervals are destroyed by the different delays experienced by the packets, thereby creating jitter. In some existing schemes, a cache queue scheme may be employed to combat network jitter.
Referring to fig. 1, fig. 1 is a schematic diagram of a buffer queue stream transmission model. As shown in fig. 1, the transmitting end transmits data packets to the receiving end. To resist network jitter and to provide in-order delivery and dynamic frame-rate capability to the upper layer, the receiving end does not deliver a packet immediately after receiving it; instead, it first inserts the packet into a buffer queue and delivers packets upward only after a certain number of packets have been buffered.
Delay jitter causes the receiving end to experience window periods in which no packets arrive and window periods in which a large number of packets arrive. During a no-packet window period caused by delay jitter, the receiving end sequentially delivers the data packets in the buffer queue to the upper layer according to the packet sending interval S. As shown in fig. 1, the receiving end enters a no-packet window period after receiving the fourth data packet, and at this point it starts to deliver the first data packet in the buffer queue. The time from when the receiving end receives the first data packet to when it delivers that data packet from the buffer queue is called the first-packet buffering delay. The packet delivery period R refers to the time interval between deliveries of data packets by the receiving end, and is set to 4 milliseconds here. As shown in fig. 1, after the no-packet window period ends, the receiving end enters a window period in which a large number of packets arrive, and it pre-stores the received data packets in the buffer queue. At this point the receiving end still buffers the data packets in the buffer queue before delivering them according to the packet delivery period. Therefore, during the abnormal packet-receiving periods caused by delay jitter, the buffer queue of the receiving end can buffer the data packets, so that the delay jitter does not affect the delivery of data packets by the receiving end.
However, in this scheme, the length of the buffer queue is relatively fixed. If the network environment or the service type changes, a buffer queue that is too short cannot smooth the delay jitter, while one that is too long wastes performance.
In other existing schemes, a dynamic buffer queue scheme may be employed to combat network jitter. Three WIFI negotiation rate intervals may be defined, namely good (433-1200 Mbps), medium (100-433 Mbps), and bad (0-100 Mbps); when the WIFI negotiation rate crosses from one interval to another, the transmitting end sends a delay detection frame to perform delay detection. The buffer queue is then adjusted according to the result of the delay detection.
Referring to fig. 2, fig. 2 is a schematic diagram of delay detection. As shown in fig. 2, the transmitting end sends a delay detection frame to the receiving end, and the receiving end may immediately return a delay detection acknowledgement (ACK) frame to the transmitting end after receiving it. The transmitting end records the time at which the delay detection acknowledgement frame is received; the difference between this time and the time at which the delay detection frame was sent is the round trip time (RTT). Finally, the transmitting end transmits a delay notification frame carrying the RTT to the receiving end. Thus the transmission delay can be obtained, i.e., Ts = RTT/2.
The user experience delay Tu can be obtained according to the service type and the current WIFI negotiation rate. The user experience delay is the sum of the first-packet buffering delay and the transmission delay, that is, Tu = Tc + RTT/2. Because the packet sending interval S of streaming transmission is basically constant, the first-packet buffering delay can be written as Tc = k × S, that is, Tu = k × S + RTT/2; after transformation, the expected depth of the buffer queue is obtained: k = (Tu - RTT/2)/S.
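For reference, the expected depth used by this prior-art scheme can be computed as below; this is a sketch only and the names are illustrative.

```python
def expected_depth_prior_art(tu: float, rtt: float, s: float) -> float:
    """k = (Tu - RTT/2) / S for the negotiation-rate-triggered scheme."""
    return (tu - rtt / 2) / s
```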
When the WIFI negotiation rate crosses from one interval to another, delay detection is restarted, that is, the transmission delay is recalculated and the expected depth of the buffer queue is obtained again. Finally, the delivery policy of the data packets is adjusted according to the actual number of buffered packets and the recalculated length of the buffer queue.
However, in the above scheme, the delay detection frame is sent only when the network negotiation rate crosses intervals, so network jitter and delay cannot be perceived in time; as a result, the buffer depth is inaccurate and jitter cannot be smoothed. Moreover, the buffer depth is determined only by the service setting and the network delay, so the length of the buffer queue cannot be adjusted according to the network jitter degree. In addition, the scheme introduces a certain buffering delay even when the network condition is good, resulting in performance waste.
In order to solve the above problems, an embodiment of the present application provides a buffer queue adjustment method, which detects the specific network jitter degree by acquiring the transmission delay between the receiving end and the transmitting end, determines the buffer depth of the buffer queue jointly from the service setting delay, the transmission delay, and the network jitter degree, and finally determines the packet delivery policy from the adjusted buffer queue depth.
The implementation of the examples of the present application will be described in detail below with reference to the accompanying drawings. The buffer queue adjusting method provided by the embodiment of the application is applied to a receiving end (also called an electronic device). In this embodiment of the present application, taking the above receiving end (i.e., the electronic device) as an example of a mobile phone, a hardware structure of the receiving end is described.
As shown in fig. 3, the electronic device 200 may include: processor 210, external memory interface 220, internal memory 221, universal serial bus (universal serial bus, USB) interface 230, charge management module 240, power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 270A, receiver 270B, microphone 270C, headset interface 270D, sensor module 280, keys 290, motor 291, indicator 292, camera 293, display 294, and subscriber identity module (subscriber identification module, SIM) card interface 295, among others.
The sensor module 280 may include pressure sensors, gyroscope sensors, barometric pressure sensors, magnetic sensors, acceleration sensors, distance sensors, proximity sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, bone conduction sensors, and the like.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus 200. In other embodiments, the electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 200. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 200. In other embodiments, the electronic device 200 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 240 may receive a charging input of a wired charger through the USB interface 230. In some wireless charging embodiments, the charge management module 240 may receive wireless charging input through a wireless charging coil of the electronic device 200. The charging management module 240 may also provide power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used for connecting the battery 242, and the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, the wireless communication module 260, and the like. The power management module 241 may also be configured to monitor battery capacity, battery cycle times, battery health (leakage, impedance), and other parameters. In other embodiments, the power management module 241 may also be disposed in the processor 210. In other embodiments, the power management module 241 and the charge management module 240 may be disposed in the same device.
The wireless communication function of the electronic device 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 200. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation.
The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 270A, receiver 270B, etc.), or displays images or video through display screen 294. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 250 or other functional module, independent of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied to the electronic device 200, including wireless local area networks (WLAN) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
The wireless communication module 260 may be one or more devices that integrate at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 250 of electronic device 200 are coupled, and antenna 2 and wireless communication module 260 are coupled, such that electronic device 200 may communicate with a network and other devices via wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 200 implements display functions through a GPU, a display screen 294, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The electronic device 200 may implement a photographing function through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like.
The ISP is used to process the data fed back by the camera 293. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 293.
The camera 293 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the electronic device 200 may include 1 or N cameras 293, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 200 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 200 may support one or more video codecs. In this way, the electronic device 200 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 200 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 200. The external memory card communicates with the processor 210 through an external memory interface 220 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. For example, in an embodiment of the present application, the processor 210 may include a memory program area and a memory data area by executing instructions stored in the internal memory 221.
The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 200 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 200 may implement audio functions, for example music playing and recording, through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a headset interface 270D, an application processor, and the like.
Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display screen 294, and the touch sensor and the display screen 294 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 294. In other embodiments, the touch sensor may also be disposed on a surface of the electronic device 200 at a different location than the display 294.
In this embodiment, the electronic device 200 may detect a touch operation input by a user on the touch screen through the touch sensor, and collect one or more of a touch position, a touch area, a touch direction, a touch time and the like of the touch operation on the touch screen. In some embodiments, the electronic device 200 may determine the touch location of a touch operation on the touch screen by combining a touch sensor and a pressure sensor. In this embodiment of the present application, the electronic device 200 may detect a touch operation input by a user on the touch screen through the touch sensor, and determine a service button corresponding to a touch position of the touch operation on the touch screen, so as to initiate a first service.
Keys 290 include a power on key, a volume key, etc. The keys 290 may be mechanical keys. Or may be a touch key. The electronic device 200 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 200.
The motor 291 may generate a vibration alert. The motor 291 may be used for incoming call vibration alerting or for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 291 may also correspond to different vibration feedback effects by touch operations applied to different areas of the display 294. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc. The SIM card interface 295 is for interfacing with a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to enable contact and separation from the electronic device 200. The electronic device 200 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support Nano SIM cards, micro SIM cards, and the like.
The methods in the following embodiments may be implemented in the electronic device 200 having the above-described hardware structure.
The buffer queue adjustment method provided by the embodiment of the application is applied to the receiving end. The receiving end establishes a wireless connection with the transmitting end, and is configured to receive, buffer, and then process data packets of a first service from the transmitting end so as to execute the first service. The first service may be a stream-type service, for example heterogeneous screen projection. In heterogeneous screen projection, the device initiating the projection (the transmitting end) transmits the data (data packets) to be projected to the device being projected to (the receiving end), and the receiving end processes the received data (data packets). The buffer queue adjustment method provided by the embodiment of the application may include S301-S306.
S301, a receiving end responds to the initiating operation of a first service, and the receiving end obtains the current target service type of the first service.
For example, an interface of the receiving end is provided with service buttons of each service, and the receiving end responds to touch operation of a user on the service buttons to acquire service types of the corresponding services of the service buttons. The first service may also be initiated by the transmitting end. Whether the sending end or the receiving end initiates the first service, the receiving end can start to execute the method of the embodiment of the application after the first service is initiated, and the service type of the first service is obtained.
Referring to fig. 4, fig. 4 is a schematic software structure diagram of acquiring a service type of a currently performed first service according to an embodiment of the present application. As shown in fig. 4, an upper layer (such as an application layer) of the receiving end acquires a service type of a first service currently performed, and transmits the service type to a communication module of the receiving end, and the communication module transmits the service type to a protocol stack service layer where the receiving end is located. Therefore, the receiving end determines the service type of the first service currently performed according to the information transferred by the upper layer.
The upper layer of the receiving end may determine the identification information corresponding to the service type and transfer this identification information to the communication module. The communication module includes a SENDSTREAM module, a SENDVIDEO module, and the like. After receiving the identification information, the module in the communication module corresponding to the service type transfers the identification information to the protocol stack service layer of the receiving end. For example, the upper layer determines that the first service currently performed is the super mouse service and obtains the identification information (e.g., 001) corresponding to the super mouse service. The upper layer transfers the identification information (001) to the communication module, and the communication module transfers it to the protocol stack service layer of the receiving end. The protocol stack service layer of the receiving end determines the corresponding service type according to the identification information (001). The identification information corresponding to each service type is pre-stored at the receiving end, and the identification information may also be set in other forms.
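A minimal sketch of this lookup is shown below; only the value 001 for the super mouse service appears in the text, so the other entries and all names are assumed placeholders.

```python
# Hypothetical mapping; only "001" -> super mouse is given in the description.
SERVICE_TYPE_BY_ID = {
    "001": "super_mouse",
    "002": "super_call",          # assumed
    "003": "screen_projection",   # assumed
}

def resolve_service_type(identification_info: str) -> str:
    """Protocol stack service layer resolving the service type from the
    identification information passed down through the communication module."""
    return SERVICE_TYPE_BY_ID[identification_info]
```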
After determining the service type of the first service, the receiving end may determine a service setting delay corresponding to the service type, that is, execute step S302.
S302, the receiving end determines service setting time delay corresponding to the target service type.
In this embodiment of the present application, the service setting delay refers to the time that a given service is set to expect a corresponding data packet to take from the transmitting end until it is processed at the receiving end.
The service setting delay includes a minimum fixed value and a maximum fixed value, that is, the minimum and maximum expected delays set, according to the service requirements of the different service types, for different network conditions. For example, a call service is relatively insensitive to delay, so its maximum fixed value may be set somewhat larger, for example 100 milliseconds. A delay-sensitive service such as screen projection needs a smaller maximum fixed value, for example 50 milliseconds.
The service setting delay may also be set to 0. For example, if a service's delay requirement is such that no buffer depth is considered necessary, both the minimum fixed value and the maximum fixed value of the service setting delay corresponding to that service may be set to 0.
It can be understood that the service setup delay is the sum of the transmission delay of the data packet from the transmitting end to the receiving end and the service buffering delay. After the receiving end obtains the service type of the first service, only the minimum fixed value and the maximum fixed value of the service setting delay can be determined, and the target value of the service setting delay is not determined. Only after the current transmission delay is calculated later, the final target value of the service setting delay can be further determined, and the final service buffering delay is further obtained.
The service setting delay is dynamically adjusted according to the calculated transmission delay, and it can be understood that the transmission delay can represent the network condition to a certain extent, and the smaller the transmission delay, the better the network condition. That is, the minimum fixed value of the service set delay is the service expected delay corresponding to the service in the better network condition, and the maximum fixed value is the service expected delay corresponding to the service in the worse network condition.
Because of the limits on the service setting delay, even if the network condition is very good, that is, the transmission delay is very small, the service may still need a certain buffering delay; therefore, when the transmission delay is smaller than the minimum fixed value, the service setting delay is determined to be the minimum fixed value. Likewise, even if the network condition is poor, that is, the transmission delay is relatively large, the service cannot increase the buffering delay without limit; therefore, when the transmission delay is not less than the minimum fixed value, the service setting delay is determined to be the maximum fixed value. If the transmission delay is greater than the maximum fixed value, the service buffering delay is 0 because of the limit imposed by the maximum fixed value.
For example, if the minimum fixed value of the service setting delay is set to 30 ms and the maximum fixed value is set to 50 ms, then when the calculated transmission delay is less than 30 ms, the service setting delay is 30 ms; when the calculated transmission delay is greater than or equal to 30 ms, the service setting delay is 50 ms.
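As a minimal sketch of this selection rule (the function and parameter names below are illustrative, not from the patent):

```python
def select_service_setting_delay(transmission_delay_ms: float,
                                 min_fixed_ms: float,
                                 max_fixed_ms: float) -> float:
    """Target value of the service setting delay, chosen from its two fixed bounds."""
    if transmission_delay_ms < min_fixed_ms:
        return min_fixed_ms
    return max_fixed_ms


# Values from the example above: bounds of 30 ms and 50 ms.
assert select_service_setting_delay(20, 30, 50) == 30
assert select_service_setting_delay(35, 30, 50) == 50
```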
The receiving end can pre-store service setting time delay corresponding to a plurality of service types, namely a maximum fixed value and a minimum fixed value corresponding to the plurality of service types respectively.
In this embodiment of the present application, after the receiving end responds to the user's initiation operation of the first service, the sending end starts to send the data packets of the first service to the receiving end in sequence, and the receiving end may acquire, in real time, the transmission delay of the data packets from the sending end to the receiving end. That is, step S303 is performed.
S303, the receiving end acquires the current first transmission delay of the receiving end and the sending end.
In the embodiment of the present application, after the receiving end responds to the initiation operation of the first service, the sending end and the receiving end can periodically exchange transmission completed frames and their acknowledgement (ACK) frames, so that the real-time transmission delay of the streaming service data frames can be accurately calculated in real time.
The periodic exchange between the sending end and the receiving end works as follows: the receiving end periodically sends a signaling frame, namely a transmission completed (TransferDone) frame, to the sending end. After receiving the transmission completed frame, the sending end returns a transmission completed acknowledgement frame to the receiving end. The transmission completed frame carries the delay acknowledgement waiting time D1 of the receiving end, that is, the waiting time of the receiving end from the first data packet to the last data packet within a preset time period. The receiving end records a time stamp when receiving the first data packet, i.e. time 1, and records time 2 when it sends the transmission completed frame, so the delay acknowledgement waiting time D1 of the receiving end is the difference between time 2 and time 1.
After receiving the transmission completed frame, the sending end may calculate its own waiting time D2, that is, the time from sending the first data packet to receiving the transmission completed frame. Specifically, the sending end marks a time stamp when sending the first data packet, i.e. records time 3, and records time 4 when it receives the transmission completed frame, so the sending end waiting time D2 is the difference between time 4 and time 3.
At this point, the sending end may subtract the receiving end's delay acknowledgement waiting time D1 from its own waiting time D2 to obtain the sum of the transmission delay S1 of the data packet from the sending end to the receiving end and the transmission delay S2 of the transmission completed frame from the receiving end to the sending end; this sum is taken as the first round trip delay S3, that is, S3 = S1 + S2 = D2 - D1.
The receiving end also records time 5 when it receives the transmission completed acknowledgement frame. The receiving end subtracts time 2, at which it sent the transmission completed frame, from time 5 to obtain the round trip time (RTT) of the signaling frame, which is taken as the second round trip delay. Since the transmission completed frame and the transmission completed acknowledgement frame have the same size, their transmission delays are the same, so the receiving end can calculate the transmission delay of the transmission completed frame as S2 = RTT/2.
When the sending end sends the transmission completed acknowledgement frame to the receiving end, the acknowledgement frame may carry the sum calculated by the sending end of the transmission delay S1 of the data packet and the transmission delay S2 of the transmission completed frame, i.e. the first round trip delay S3. Finally, the receiving end can calculate the transmission delay of the data packet as S1 = D2 - D1 - S2 = D2 - D1 - RTT/2.
In some embodiments of the present application, after calculating its waiting time, the sending end may send a transmission completed acknowledgement frame to the receiving end, where the acknowledgement frame carries the sending end waiting time D2. After receiving the transmission completed acknowledgement frame, the receiving end calculates the transmission delay S1 of the data packet from the sending end to the receiving end according to the sending end waiting time D2 contained in the acknowledgement frame, the delay acknowledgement waiting time D1 calculated by the receiving end, and the transmission delay S2 of the transmission completed frame. That is, the transmission delay of the data packet from the sending end to the receiving end is the sending end waiting time minus the receiving end's delay acknowledgement waiting time minus the transmission delay of the transmission completed frame: S1 = D2 - D1 - S2.
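As a hedged sketch of the calculation above (the names are illustrative; only the quantities D1, D2 and RTT and the relations between them come from the text):

```python
def packet_transmission_delay(d2_sender_wait_ms: float,
                              d1_receiver_wait_ms: float,
                              rtt_ms: float) -> float:
    """Transmission delay S1 of a data packet from the sending end to the receiving end.

    D2 - D1 gives the first round trip delay S3 = S1 + S2, and because the
    transmission completed frame and its acknowledgement have the same size,
    S2 = RTT / 2, so S1 = D2 - D1 - RTT / 2.
    """
    s2 = rtt_ms / 2.0   # transmission delay of the transmission completed frame
    return d2_sender_wait_ms - d1_receiver_wait_ms - s2
```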
As shown in fig. 5, the sending end sequentially sends data packets to the receiving end, and the receiving end sends a transmission completed frame to the sending end after receiving 10 data packets. After receiving the transmission completed frame, the sending end returns a transmission completed acknowledgement frame to the receiving end. From fig. 5, the relationship between the sending end waiting time D2, the receiving end delay acknowledgement waiting time D1, the transmission delay S2 of the transmission completed frame, the transmission delay S3 of the transmission completed acknowledgement frame as labeled in the figure, and the transmission delay S1 of the data packet can be seen, namely D2 = S1 + D1 + S2, where S2 = S3.
As shown in fig. 6, the frame structure of the transmission completed acknowledgement frame includes a frame header, a number of transfer units (transfer unit), and a number of transfer information fields (transfer info). The frame header includes a type (type), a flag (flag), a session identifier (session), a transmission identifier (transaction id), and a length (length). The transfer unit includes a type (type), a length (len) and a transmission identifier (tranid). The transfer information includes a type (type), a length (len) and a receiving end delay acknowledgement waiting time (Waittime). The receiving end delay acknowledgement waiting time is an otherwise unused field, so the transmission delay can be carried in this field. The transmission delay here refers to the first round trip delay, i.e. the sum of the transmission delay of the transmission completed frame from the receiving end to the sending end and the transmission delay of the data packet from the sending end to the receiving end.
In the embodiment of the application, after the receiving end responds to the user's initiation operation of the first service, the receiving end sends a transmission completed frame to the sending end every preset time length. For example, the preset duration may be 200 ms; after the receiving end responds to the user's initiation operation of the first service, it sends a transmission completed frame to the sending end every 200 ms. That is, the receiving end recalculates the current transmission delay every 200 ms, and the calculated transmission delay is the transmission delay of the service data packets of the first service. Therefore, the receiving end can accurately detect the current transmission delay of the data packets of the first service in real time. In addition, no additional data frames need to be added to detect the transmission delay, which avoids extra overhead and saves power.
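A hedged sketch of this periodic measurement (the `exchange_transfer_done` callable and its return values are placeholders, not an interface defined by the patent):

```python
import time


def delay_measurement_loop(exchange_transfer_done, period_s: float = 0.2):
    """Yield the current transmission delay once per measurement period.

    `exchange_transfer_done()` stands in for sending a TransferDone frame and
    waiting for its acknowledgement; it is assumed to return (D1, D2, RTT)
    for the current window, all in milliseconds.
    """
    while True:
        d1, d2, rtt = exchange_transfer_done()
        yield d2 - d1 - rtt / 2.0   # current transmission delay S1
        time.sleep(period_s)
```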
S304, the receiving end determines jitter buffer time delay based on the jitter indication value of the first transmission time delay.
In the embodiment of the present application, the jitter degree of the transmission delay within the preset duration may also be reflected by calculating the average dispersion σ of the transmission delay within the preset duration. The average dispersion σ is calculated as:

σ = (|T1 − T̄| + |T2 − T̄| + … + |Tn − T̄|) / n

where n is the number of transmission delays calculated within the preset time period, Ti is the i-th calculated transmission delay, and T̄ is the average of the n transmission delays. The difference between each transmission delay and the average value is its dispersion, and the average dispersion σ is the average of the absolute values of these dispersions. In the embodiment of the application, the receiving end determines the calculated average dispersion of the transmission delay as the jitter indication value of the transmission delay, which reflects the jitter degree of the transmission delay within the preset duration.
For example, the transmission delays calculated over 4 seconds are recorded; if the transmission delay is calculated from a transmission completed frame every 200 ms, then n is 20. The 20 calculated transmission delays are recorded, and the average dispersion σ is calculated from each transmission delay and the average of the 20 transmission delays.
Because the transmission delay is calculated periodically, the average dispersion σ of the transmission delay can also be calculated periodically, so the degree of network jitter can be perceived in time. The larger the average dispersion, the stronger the network jitter; the smaller the average dispersion, the weaker the network jitter.
In the embodiment of the application, determining the jitter indication value is not limited to using the average dispersion of the transmission delay; any measure that accurately reflects the jitter degree of the transmission delay may be used. In some embodiments, the standard deviation of the transmission delay relative to its average value may also be determined as the jitter indication value.
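A minimal sketch of the jitter indication calculation (the function name is illustrative):

```python
def jitter_indication(delays_ms: list[float]) -> float:
    """Average dispersion sigma of the transmission delays in the window.

    sigma = (1/n) * sum(|Ti - mean|); the larger the value, the stronger the
    network jitter.  A standard deviation could be used instead.
    """
    n = len(delays_ms)
    mean = sum(delays_ms) / n
    return sum(abs(d - mean) for d in delays_ms) / n
```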
In the embodiment of the application, the service can also perceive the frequency point information of the receiving end device and the sending end device, namely the frequency band characteristic information. The frequency band characteristic information is used to indicate the relationship between the frequency points and channels of the receiving end and the sending end, and includes same frequency and same channel, same frequency and different channel, and different frequency and different channel. The jitter coefficient can be adjusted according to the perceived double-ended frequency band characteristic information. For example, when the sending end and the receiving end are on the same frequency and the same channel, the jitter coefficient p1 may be 0; when they are on the same frequency but different channels, the jitter coefficient p2 may be 3; when they are on different frequencies and different channels, the jitter coefficient p3 may be 6. The jitter coefficients p1, p2 and p3 can also be adjusted, as long as p1 < p2 < p3 is guaranteed.
It can be understood that when the two ends are on the same frequency and the same channel, the antenna itself transmits in time division, so the network jitter at the two ends is similar; when they are on the same frequency but different channels, the jitter coefficient is slightly larger than in the same-channel case; and when they are on different frequencies and different channels, the jitter coefficient is larger still.
After the network jitter degree and the jitter coefficient are obtained, the jitter buffer depth, that is, the jitter buffer delay T3, can be determined jointly by the jitter coefficient and the jitter degree: jitter buffer delay T3 = jitter coefficient p × jitter degree σ (the average dispersion).
In this embodiment of the present application, if the calculated jitter buffer depth T3 is smaller than the packet sending interval S, jitter buffering may not be introduced. It can be understood that if T3 is smaller than the packet sending interval S, the network condition is considered relatively good and the network jitter relatively weak, so jitter buffering need not be introduced. In this way, when the network state is good, no jitter buffer is introduced and no performance is wasted. The receiving end can then determine the service buffering delay according to the service requirement alone, that is, the service setting delay, and adjust the length of the buffer queue accordingly.
The service also sets an upper limit for the jitter buffer delay, that is, the maximum jitter buffer delay allowed under network jitter; the maximum jitter buffer delays corresponding to different service types may differ.
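Putting these pieces together, a minimal sketch of the jitter buffer delay calculation, including the packet-interval check and the per-service cap described above (the dictionary keys and function name are illustrative, not from the patent):

```python
def jitter_buffer_delay(sigma_ms: float, band_relation: str,
                        packet_interval_ms: float,
                        max_jitter_buffer_ms: float) -> float:
    """Jitter buffer delay T3 = jitter coefficient p * jitter degree sigma."""
    jitter_coefficient = {
        "same_freq_same_channel": 0,   # p1
        "same_freq_diff_channel": 3,   # p2
        "diff_freq_diff_channel": 6,   # p3 (only p1 < p2 < p3 is required)
    }[band_relation]
    t3 = jitter_coefficient * sigma_ms
    if t3 < packet_interval_ms:          # good network: no jitter buffer introduced
        return 0.0
    return min(t3, max_jitter_buffer_ms)  # clamp to the per-service maximum
```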
In this embodiment of the present application, the minimum fixed value and the maximum fixed value of the service setting delay, together with the maximum jitter buffer delay, may be used as the delay information corresponding to the service. The receiving end can pre-store the delay information of multiple service types.
The receiving end can save the time delay information of the multiple service types in a table mode. As shown in table 1:
TABLE 1
After obtaining the service type of the first service, the receiving end can query, from the pre-stored delay information, the minimum fixed value and the maximum fixed value of the service setting delay corresponding to the first service and the maximum jitter buffer delay.
It should be noted that, after the receiving end obtains the service type of the first service, it can only determine the parameter information corresponding to that service type, including the minimum fixed value and the maximum fixed value of the service setting delay and the maximum jitter buffer delay. Only after the transmission delay of the data packet is calculated can the target value of the service setting delay be further determined, and the jitter indication value and the jitter buffer delay be determined according to the transmission delay.
After the receiving end obtains the minimum fixed value and the maximum fixed value of the service setting delay, the transmission delay and the jitter buffer delay, the receiving end can adjust the length of the buffer queue, that is, execute step S305.
S305, the receiving end adjusts the length K of the buffer queue based on the service setting time delay, the first transmission time delay, the jitter buffer time delay and the packet sending interval of the data packet sent by the sending end.
In this embodiment of the present application, after the current transmission delay (the first transmission delay) is calculated, the final service setting delay can be determined, and the service buffering delay can then be calculated. The service buffering delay is equal to the service setting delay minus the first transmission delay.
After the service buffering delay and the jitter buffer delay are calculated, the target buffering delay can be obtained. The target buffering delay obtained here is the final first packet buffering delay, that is, the time from when the receiving end receives the first data packet until the first data packet is submitted from the buffer queue for processing. The first packet buffering delay Tc is the product of the length K of the buffer queue and the packet sending interval S, i.e. Tc = K × S.
Let the service setting delay be T1, the first transmission delay be T2 (i.e. S1), and the jitter buffer delay be T3. Then the following equation holds: Tc = T1 - T2 + T3 = K × S. From this, the length of the buffer queue is K = (T1 - T2 + T3)/S. A buffer queue length of K means that K data packets are expected to be buffered in the buffer queue.
In this embodiment of the present application, when the first service is just initiated, the receiving end has not yet calculated the transmission delay; at this time, the transmission delay may be taken as 0, and the jitter indication value of the transmission delay is also 0, that is, the jitter buffer delay is 0. According to the relationship between the transmission delay and the service setting delay, the service setting delay at this time is the minimum fixed value. It follows that the initial length of the buffer queue is K0 = T1/S, with T1 taking its minimum fixed value. That is, this is the initial length of the corresponding buffer queue when the first service is just initiated.
Afterwards, each time the receiving end calculates the transmission delay, it adjusts the service setting delay according to the real-time transmission delay, determines the jitter buffer delay, and readjusts the buffer queue.
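As a rough sketch of the queue-length calculation (rounding the result up to a whole packet count and flooring it at 0 are assumptions made here; the patent only gives K = (T1 - T2 + T3)/S):

```python
import math


def buffer_queue_length(t1_service_setting_ms: float,
                        t2_transmission_ms: float,
                        t3_jitter_buffer_ms: float,
                        packet_interval_ms: float) -> int:
    """Target number of buffered packets: K = (T1 - T2 + T3) / S."""
    tc = t1_service_setting_ms - t2_transmission_ms + t3_jitter_buffer_ms
    return max(0, math.ceil(tc / packet_interval_ms))


# At service start T2 and T3 are taken as 0 and T1 is the minimum fixed value,
# so the initial length is K0 = T1 / S, e.g. 30 ms / 5 ms -> 6 packets.
print(buffer_queue_length(30, 0, 0, 5))   # 6
```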
Referring to fig. 7, fig. 7 is a schematic diagram of determining a desired buffer queue according to an embodiment of the present application. After the first service is initiated, the receiving end periodically sends a transmission completed frame to the sending end and receives a transmission completed acknowledgement frame from the sending end, so that the transmission delay of the data packets of the first service is calculated in real time. As shown in fig. 7, when the receiving end receives the second data packet, it sends a transmission completed frame to the sending end; that is, the duration between the time the first service is initiated and the time the receiving end receives the second data packet is the preset duration set by the receiving end for sending transmission completed frames.
In fig. 7, the first packet buffering delay is the sum of the service buffering delay and the jitter buffer delay. The value of the service buffering delay is determined from the service setting delay corresponding to the service type of the first service and the calculated transmission delay, and the jitter buffer delay is calculated from the calculated jitter degree of the transmission delay and the frequency band characteristic information. Therefore, adjusting the length of the buffer queue according to the first packet buffering delay means adjusting it according to the network jitter degree and the service setting delay.
The packet sending interval of the data packets sent by the sending end may refer to the average of the individual intervals between consecutive data packets sent by the sending end. As shown in fig. 7, the packet sending interval S may be the average of the two intervals labeled s1 and s2 in the figure (which there denote packet intervals, not the transmission delays discussed above).
In the embodiment of the application, the length of the buffer queue can be adjusted from the start of the service, rather than relying on a fixed buffer queue length, so a better effect of resisting delay jitter can be achieved.
After S305, the length of the buffer queue at the receiving end has been adjusted based on the service setting delay, the first transmission delay and the jitter buffer delay. However, the actual number of data packets currently buffered in the buffer queue is not necessarily equal to the number of data packets that the buffer queue can hold.
As shown in fig. 8, in an actual scenario, the current buffer queue may be in one of the following states. State 1: under good network conditions, the receiving end receives packets uniformly; at this time, the actual number k of buffered data packets equals the queue length K. State 2: under network jitter, the receiving end receives no packets; at this time, k < K. State 3: under network jitter, the receiving end receives a large number of packets in a short time; at this time, k > K. State 4: the actual number of buffered data packets k = 0. State 5: the actual number k of buffered data packets reaches the maximum number the buffer queue can hold.
Based on this, the receiving end can also adjust the processing policy for the data packets according to the actual number of buffered packets and the target length of the buffer queue, so as to ensure that the receiving end can maintain its packet processing rate under delay jitter.
Specifically, after adjusting the length of the cache queue, the method in the embodiment of the present application may further include S306.
S306, the receiving end acquires the actual number k of data packets currently buffered in the buffer queue, and adjusts the processing policy for the buffered data packets based on the actual number k and the length K of the buffer queue, until the buffer queue buffers K data packets, after which the data packets buffered in the buffer queue are processed according to the packet sending interval.
In this embodiment of the present application, different data packet processing policies are stored in the receiving end, and the policy to apply is determined according to the actual number k of currently buffered data packets and the adjusted length K of the buffer queue.
S306a, if the actual number k is smaller than the length K of the buffer queue, the receiving end processes the data packets buffered in the buffer queue according to the packet sending interval, records the lost data packets in the buffer queue, and retransmits the lost data packets.
S306b, if the actual number k is greater than the length K of the buffer queue, the receiving end processes the data packets buffered in the buffer queue according to a target interval, or performs packet loss processing on the data packets in the buffer queue; wherein the target interval is smaller than the packet sending interval.
S306c, if the actual number k is 0, the receiving end immediately processes the corresponding data packet after caching one data packet in the cache queue.
If the buffer queue of the receiving end receives a data packet within a preset time length, the data packet is submitted immediately. If no data packet is received within the preset time length, buffering can be restarted.
Referring to fig. 9, fig. 9 is a schematic diagram of fast recovery of a buffer queue from an abnormal state according to an embodiment of the present application. As shown in fig. 9, for state 1, if the actual number k of data packets in the buffer queue is equal to K, the receiving end still delivers packets uniformly. State 2 corresponds to step S306a above, which is not repeated here. State 3 corresponds to step S306b above, which is not repeated here. State 4 corresponds to step S306c above, which is not repeated here.
Further, for state 4, when the receiving end cannot receive packets for a long time and the actual number k of data packets in the buffer queue is 0, a preset duration may be set, for example 10 seconds or 8 seconds. Taking 10 seconds as an example: within 10 seconds, if the buffer queue at the receiving end receives a data packet, the data packet is submitted immediately; if no data packet is received within 10 seconds, buffering may be restarted.
For state 5, when the receiving end receives a large number of packets over a long time and the actual number k of data packets in the buffer queue reaches the maximum number of data packets the buffer queue can hold, the receiving end immediately submits a data packet whenever a new data packet is received.
Thus, based on these data packet processing policies, the buffer queue quickly returns to the normal state, and the receiving end can maintain its packet processing rate under delay jitter.
For example, referring to fig. 10, fig. 10 is a schematic diagram of a buffer queue packet adjustment strategy according to an embodiment of the present application. As shown in fig. 10, if the actual number k of data packets currently buffered in the buffer queue is smaller than the length K of the buffer queue, the data packets buffered in the buffer queue are processed according to the packet sending interval, the lost data packets in the buffer queue are recorded, and the lost data packets are retransmitted (i.e. S306a is executed).
The receiving end places the lost data packets into a retransmission queue, but not all data packets in the retransmission queue need to be retransmitted. If the sequence number of a data packet in the retransmission queue is smaller than the sequence number of the data packet currently being processed, the corresponding data packet is no longer retransmitted, which avoids wasting resources. If the sequence number of a data packet in the retransmission queue is larger than the sequence number of the data packet currently being processed, the corresponding data packet is retransmitted.
For example, if the length K of the buffer queue is 6 and the actual number k of currently buffered data packets is 4, the currently buffered data packets are data packet 1, data packet 3, data packet 5 and data packet 6, and the lost data packets in the buffer queue are data packet 2 and data packet 4. Data packet 2 and data packet 4 are sent to the retransmission queue; if the sequence number of the data packet currently being processed is 3, data packet 2 is no longer retransmitted and only data packet 4 is retransmitted.
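A minimal sketch of this retransmission filtering (the function name is illustrative):

```python
def packets_to_retransmit(lost_sequence_numbers: list[int],
                          current_sequence_number: int) -> list[int]:
    """Keep only lost packets whose sequence number is still ahead of the
    packet currently being processed; the rest are not retransmitted."""
    return [seq for seq in lost_sequence_numbers if seq > current_sequence_number]


# Example above: packets 2 and 4 are lost, packet 3 is being processed.
print(packets_to_retransmit([2, 4], 3))   # [4]
```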
As shown in fig. 10, if the actual number k of data packets currently buffered in the buffer queue is greater than the length K of the buffer queue, the receiving end processes the data packets buffered in the buffer queue according to the target interval or performs packet loss processing on the data packets in the buffer queue (i.e. S306b is executed). The target interval is smaller than the packet sending interval, that is, the data packets in the buffer queue are submitted at a faster pace until the actual number k of currently buffered data packets equals the length K of the buffer queue.
For example, if the length K of the buffer queue is 6, the actual number k of currently buffered data packets is 8, and the current packet sending interval is 4 ms, the target interval for processing the data packets may be set to 3 ms. Alternatively, packet loss processing may be performed on two of the currently buffered data packets so that the actual number k equals the length K of the buffer queue.
As shown in fig. 10, if the actual number k of the packets currently buffered in the buffer queue is 0, the receiving end immediately processes the corresponding packet after buffering one packet in the buffer queue (i.e. executing S306 c). That is, the data packet previously buffered in the buffer queue has been processed, and at this time, the receiving end immediately processes the corresponding data packet as long as a new data packet is added to the buffer queue.
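Tying S306a to S306c together, a hedged sketch of the policy selection (the returned dictionary and the 25% speed-up used for the target interval are illustrative choices, not values from the patent; the text only requires the target interval to be below the packet sending interval):

```python
def packet_processing_policy(k_actual: int, big_k_target: int,
                             packet_interval_ms: float) -> dict:
    """Choose the strategy for the buffered packets from k and K (S306a-S306c)."""
    if k_actual == 0:
        # S306c: submit each newly buffered packet immediately.
        return {"action": "submit_immediately_on_arrival"}
    if k_actual < big_k_target:
        # S306a: keep the packet sending interval and retransmit lost packets.
        return {"action": "process_at_packet_interval",
                "interval_ms": packet_interval_ms, "retransmit_lost": True}
    if k_actual > big_k_target:
        # S306b: drain faster than the packet sending interval (or drop packets).
        return {"action": "drain_faster_or_drop",
                "interval_ms": packet_interval_ms * 0.75}
    # k equals K: deliver packets uniformly (state 1).
    return {"action": "process_at_packet_interval",
            "interval_ms": packet_interval_ms}


# Example above: K = 6, k = 8, packet interval 4 ms -> drain at 3 ms.
print(packet_processing_policy(8, 6, 4.0))
```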
In summary, the buffer queue adjustment method provided by the embodiment of the present application can detect the specific degree of network jitter from the transmission delay of the data packets, dynamically adjust the buffer queue length according to the network jitter degree and the service setting delay, and then determine the policy for processing the data packets in the buffer queue from the buffer queue length, which better resists the poor service experience caused by network jitter.
Embodiments of the present application also provide a chip system, as shown in fig. 11, the chip system 1100 includes at least one processor 1101 and at least one interface circuit 1102. The processor 1101 and interface circuit 1102 may be interconnected by wires. For example, interface circuit 1102 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 1102 may be used to send signals to other devices (e.g., the processor 1101). The interface circuit 1102 may, for example, read instructions stored in a memory and send the instructions to the processor 1101. The instructions, when executed by the processor 1101, may cause the electronic device to perform the various steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
The embodiment of the application also provides a computer storage medium, which comprises computer instructions, when the computer instructions run on the electronic device, the electronic device is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The present application also provides a computer program product, which when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above-mentioned method embodiments.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially, or the part contributing to the prior art, or all or part of the technical solution, embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like, which can store program codes.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (12)
1. The method for adjusting the buffer queue is characterized by being applied to a receiving end, wherein the receiving end establishes wireless connection with a sending end, and the receiving end is used for receiving and buffering a data packet of a first service from the sending end and then processing the data packet so as to execute the first service, and the method comprises the following steps:
responding to the initiating operation of the first service, obtaining a target service type of the first service, and determining a service setting delay corresponding to the target service type; the service setting delay is the time, set for the first service, expected from the transmitting end sending a corresponding data packet to the receiving end starting to process the corresponding data packet;
acquiring a current first transmission delay between the receiving end and the transmitting end; the first transmission delay is used for indicating the time taken from the transmitting end sending a data packet of the first service to the receiving end receiving the corresponding data packet;
determining a jitter buffer delay based on a jitter indication value of the first transmission delay; the jitter indication value is used for indicating the jitter degree of the first transmission delay, and the jitter buffer delay is used for indicating the time for which the corresponding data packet needs to be buffered against jitter when the one data packet is transmitted from the transmitting end to the receiving end under the jitter degree;
based on the service set time delay, the first transmission time delay, the jitter buffer time delay and the packet sending interval of the data packet sent by the sending end, the length K of a buffer queue is adjusted; the buffer queue is used for buffering the data packets of the first service, and the length K of the buffer queue is equal to the number of the data packets which can be buffered in the buffer queue.
2. The method for adjusting a buffer queue according to claim 1, wherein the obtaining the current first transmission delay between the receiving end and the transmitting end includes:
after responding to the initiating operation of the first service, transmitting a transmission frame to the transmitting end at intervals of a first preset duration; the transmission frame comprises a first waiting time from the receiving time of the first data packet of the first service received by the receiving end in the first preset duration to the sending time of the transmission frame;
receiving a transmission acknowledgement frame from the transmitting end; the transmission acknowledgement frame includes a first round trip delay, where the first round trip delay is a difference between a second waiting time of the transmitting end and the first waiting time, the first round trip delay is a sum of the first transmission delay and a second transmission delay of the transmission frame, and the second waiting time is the time from the transmitting end sending the first data packet to the transmitting end receiving the transmission frame from the receiving end;
calculating a second round trip delay based on the sending time of the transmission frame and the receiving time of the transmission acknowledgement frame to obtain a second transmission delay of the transmission frame; wherein the size of the transmission frame is the same as the size of the transmission acknowledgement frame, so that their transmission delays are the same;
and calculating the difference value between the first round trip delay and the second transmission delay of the transmission frame to obtain the first transmission delay.
3. The buffer queue adjustment method according to claim 1 or 2, wherein the service setting delay includes a minimum fixed value and a maximum fixed value, the method comprising:
if the first transmission delay is smaller than the minimum fixed value, the service setting delay is the minimum fixed value;
and if the first transmission delay is greater than or equal to the minimum fixed value, the service setting delay is the maximum fixed value.
4. A buffer queue adjustment method according to any one of claims 1-3, characterized in that before the determining a jitter buffer delay based on the jitter indication value of the first transmission delay, it further comprises:
determining a jitter indication value based on the average deviation value of the first transmission delay in a second preset duration;
wherein the determining the jitter buffer time delay based on the jitter indication value of the first transmission time delay includes:
acquiring frequency band characteristic information of the receiving end and the transmitting end, and determining a jitter coefficient based on the frequency band characteristic information; the frequency band characteristic information is used for indicating the relation between the frequency points of the receiving end and the sending end and the channels;
and determining the jitter buffer time delay based on the product of the jitter indication value and the jitter coefficient.
5. The cache queue adjustment method of claim 4, further comprising:
and if the jitter buffer time delay is smaller than the packet sending interval, the jitter buffer time delay is adjusted to be 0.
6. The method for adjusting a buffer queue according to claim 4 or 5, wherein the target service type is further provided with a corresponding jitter buffer delay threshold, the method comprising:
And if the jitter buffer time delay is larger than the jitter buffer time delay threshold, the jitter buffer time delay is the jitter buffer time delay threshold.
7. The method for adjusting a buffer queue according to any one of claims 1 to 6, wherein adjusting the length K of the buffer queue based on the service set delay, the first transmission delay, the jitter buffer delay, and a packet interval of the data packet sent by the sender includes:
based on the service set time delay T1, the first transmission time delay T2, the jitter buffer time delay T3 and the packet sending interval S of the data packet sent by the sending end, the following formula is adopted:
K=(T1-T2+T3)/S
and calculating the length K of the cache queue.
8. The method for adjusting a buffer queue according to any one of claims 1 to 7, wherein after the adjusting the length K of the buffer queue based on the service set delay, the first transmission delay, the jitter buffer delay, and the packet interval of the data packet sent by the sender, the method further comprises:
and acquiring the actual number k of the data packets currently cached in the cache queue, and adjusting the processing strategy of the cached data packets in the cache queue based on the actual number k and the length K of the cache queue, until the cache queue caches K data packets, and processing the cached data packets in the cache queue according to the packet sending interval.
9. The method for adjusting a buffer queue according to claim 8, wherein adjusting the processing policy of the buffered data packets in the buffer queue based on the actual number k and the length K of the buffer queue comprises:
if the actual number k is smaller than the length K of the buffer queue, processing the data packets buffered in the buffer queue according to the packet sending interval, recording the lost data packets in the buffer queue, and retransmitting the lost data packets;
if the actual number k is greater than the length K of the buffer queue, processing the data packets buffered in the buffer queue according to a target interval or performing packet loss processing on the data packets in the buffer queue; wherein the target interval is less than the packet sending interval;
and if the actual number k is 0, immediately processing the corresponding data packet after the buffer queue buffers one data packet.
10. The method for adjusting a buffer queue according to claim 9, wherein recording the lost data packet in the buffer queue, and retransmitting the lost data packet, comprises:
transmitting the lost data packet to a retransmission queue;
If the sequence number of the data packet retransmitted in the retransmission queue is smaller than the sequence number of the data packet processed currently, the corresponding data packet is not retransmitted;
and if the sequence number of the data packet retransmitted in the retransmission queue is larger than the sequence number of the data packet currently processed, retransmitting the corresponding data packet.
11. An electronic device, the electronic device comprising: a communication module, a memory, and one or more processors; the communication module, the memory, and the processor are coupled; the memory is for storing computer program code comprising computer instructions which, when executed by the electronic device, cause the electronic device to perform the method of any of claims 1-10.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein computer instructions, which when run in an electronic device, cause the electronic device to perform the method of any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310782580.3A CN117729588B (en) | 2023-06-28 | 2023-06-28 | Cache queue adjusting method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310782580.3A CN117729588B (en) | 2023-06-28 | 2023-06-28 | Cache queue adjusting method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117729588A true CN117729588A (en) | 2024-03-19 |
CN117729588B CN117729588B (en) | 2024-09-17 |
Family
ID=90205807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310782580.3A Active CN117729588B (en) | 2023-06-28 | 2023-06-28 | Cache queue adjusting method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117729588B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150092585A1 (en) * | 2013-09-30 | 2015-04-02 | Apple Inc. | Adjusting a Jitter Buffer based on Inter Arrival Jitter |
CN113037853A (en) * | 2021-03-22 | 2021-06-25 | 北京字节跳动网络技术有限公司 | Data processing method, device, equipment and storage medium |
CN113254120A (en) * | 2021-04-02 | 2021-08-13 | 荣耀终端有限公司 | Data processing method and related device |
CN114979091A (en) * | 2022-07-28 | 2022-08-30 | 腾讯科技(深圳)有限公司 | Data transmission method, related device, equipment and storage medium |
CN116095395A (en) * | 2021-11-08 | 2023-05-09 | 腾讯科技(深圳)有限公司 | Method and device for adjusting buffer length, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN117729588B (en) | 2024-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3893475B1 (en) | Method for automatically switching bluetooth audio encoding method and electronic apparatus | |
WO2020124610A1 (en) | Transmission speed control method and device | |
WO2020143380A1 (en) | Data transmission method and electronic device | |
WO2021147427A1 (en) | Method for determining fall back power and method for adjusting transmission power | |
EP4236434A1 (en) | Channel switching method, electronic device, and storage medium | |
EP4135405A1 (en) | Channel switching method, and electronic device and storage medium | |
US20240022518A1 (en) | Channel Switching Method, Electronic Device and Storage Medium | |
CN111316604B (en) | Data transmission method and electronic equipment | |
WO2021043250A1 (en) | Bluetooth communication method, and related device | |
WO2023185893A1 (en) | Satellite signal capturing method and related apparatus | |
US20240224357A1 (en) | Data Download Method, Apparatus, and Terminal Device | |
CN115694598A (en) | Multiframe fusion transmission method and related device in Beidou communication system | |
CN114205336A (en) | Cross-device audio playing method, mobile terminal, electronic device and storage medium | |
CN116709432B (en) | Cache queue adjusting method and electronic equipment | |
WO2023124186A1 (en) | Communication method and communication apparatus | |
CN117729588B (en) | Cache queue adjusting method and electronic equipment | |
WO2022199613A1 (en) | Method and apparatus for synchronous playback | |
WO2022111712A1 (en) | Audio and video synchronization method and device | |
CN113810965B (en) | Channel switching method, electronic device and storage medium | |
WO2021197115A1 (en) | Antenna tuning method and apparatus, and electronic device and network device | |
WO2021114950A1 (en) | Multipath http channel multiplexing method and terminal | |
CN116708317B (en) | Data packet MTU adjustment method and device and terminal equipment | |
RU2802678C1 (en) | Channel switching method, electronic device and storage medium | |
CN113453274B (en) | Uplink data distribution method and terminal | |
CN116709368B (en) | Network acceleration method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |