WO2023065782A1 - 数据传输的方法及电子设备 - Google Patents

数据传输的方法及电子设备 Download PDF

Info

Publication number
WO2023065782A1
Authority
WO
WIPO (PCT)
Prior art keywords
driver
queue
data packets
thread
electronic device
Prior art date
Application number
PCT/CN2022/111210
Other languages
English (en)
French (fr)
Inventor
付鹏程
马克西姆彼得罗夫
尼古拉耶维奇 氏那卡鹿克德米特里
李家欣
刘海军
福明亚历山大
福明罗曼
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2023065782A1 publication Critical patent/WO2023065782A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols

Definitions

  • the present application relates to the field of communications, and more specifically, to a data transmission method and an electronic device.
  • Device-to-device (Device-to-Device, D2D) communication is a technology that allows devices to communicate directly by multiplexing cell resources under the control of the system.
  • in D2D communication, the packet loss rate is an important parameter for measuring communication performance.
  • For unreliable communication, for example, communication based on the user datagram protocol (UDP), a high packet loss rate means that the transmitted data packets are simply lost, resulting in poor transmission quality and data distortion.
  • For reliable communication, for example, communication based on the transmission control protocol (TCP), a high packet loss rate means more data packet retransmissions, which leads to a serious decrease in the actual transmission throughput.
  • The present application provides a data transmission method and an electronic device.
  • The method can reduce the packet loss rate of the electronic device and improve the quality of data packet transmission.
  • In a first aspect, a data packet transmission method is provided. The method is applied to an electronic device and includes: before a first driver thread reads data packets from a hardware buffer area, determining, by the first driver thread, that a second driver queue is full, where the data packets stored in the hardware buffer area are data packets received by the electronic device from another electronic device; reading, by the first driver thread, n data packets in the hardware buffer area into a first driver queue, where n is less than or equal to a preset value; before a second driver thread reads data packets from the first driver queue, determining, by the second driver thread, that the second driver queue is full; not reading, by the second driver thread, the data packets in the first driver queue into the second driver queue; before a third driver thread reads data packets from the second driver queue, determining, by the third driver thread, that a third driver queue is full; and not reading, by the third driver thread, the data packets of the second driver queue into the third driver queue.
  • the preset value may be 1.
  • It should be understood that the hardware buffer area is a buffer area corresponding to the hardware layer of the electronic device.
  • It should be understood that the first driver thread, the second driver thread, and the third driver thread are threads corresponding to the driver layer of the electronic device.
  • It should be understood that the first driver queue, the second driver queue, and the third driver queue are configured by the driver layer of the electronic device.
  • The above technical solution abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism: when the third driver queue starts to fill up with data packets, the third driver thread stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue.
  • In this way, when at least one of the first driver queue, the second driver queue, and the third driver queue of the electronic device is in a saturated state, the electronic device starts a flow control mechanism such as a sliding window mechanism. Because at least one queue of the electronic device is saturated, there are cases in which the electronic device does not feed back, on a first target window, feedback information for the corresponding data packets to the sender (the other electronic device); for example, the feedback information includes negative acknowledgment (NACK) information or acknowledgment (ACK) information.
  • In this case, the other electronic device does not detect the feedback information for the corresponding data packets on a second target window, so the other electronic device slows down the rate at which it sends data packets to the electronic device, which reduces the load on the other electronic device's processor and saves power.
  • At the same time, the rate at which the electronic device receives data packets also slows down, and in turn the rate at which the electronic device stores data packets into the hardware buffer area slows down. In this way, the rate at which the other electronic device sends data packets and the rate at which the electronic device receives data packets maintain a state of dynamic balance, thereby reducing the packet loss rate of the electronic device and improving the quality of data packet transmission.
  • With reference to the first aspect, in a possible implementation, the method further includes: before the first driver thread reads the data packets from the hardware buffer area, determining, by the first driver thread, that the second driver queue is not full; and reading, by the first driver thread, p1 data packets in the hardware buffer area into the first driver queue, where the size of the p1 data packets is Q, Q = min{Q1, Q2}, Q1 is the size of the data packets stored in the hardware buffer area, and Q2 is the size of the remaining space of the first driver queue.
  • In a possible implementation, the method further includes: reading, by a kernel thread, the data packets in the third driver queue to a kernel layer.
  • It should be understood that the kernel thread is a thread corresponding to the kernel layer of the electronic device.
  • The above technical solution abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism: when the rate at which the kernel layer reads data packets from the driver layer is slower than the rate at which data packets accumulate in the third driver queue of the driver layer, and the difference between these two rates reaches a target preset value, the third driver queue starts to fill up with data packets. The third driver thread then stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, the balance between the rate at which the kernel layer reads data packets from the hardware layer and the rate at which the driver layer reads data packets into the third driver queue can be dynamically maintained.
  • In a second aspect, an electronic device is provided. The electronic device includes: a first driver thread, configured to determine, before the first driver thread reads data packets from a hardware buffer area, that a second driver queue is full, where the data packets stored in the hardware buffer area are data packets received by the electronic device from another electronic device, and further configured to read n data packets in the hardware buffer area into a first driver queue, where n is less than or equal to a preset value; a second driver thread, configured to determine, before the second driver thread reads data packets from the first driver queue, that the second driver queue is full, and further configured not to read the data packets in the first driver queue into the second driver queue; and a third driver thread, configured to determine, before the third driver thread reads data packets from the second driver queue, that a third driver queue is full, and further configured not to read the data packets of the second driver queue into the third driver queue.
  • For example, the preset value is 1.
  • The above electronic device abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism: when the third driver queue starts to fill up with data packets, the third driver thread stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue.
  • In this way, when at least one of the first driver queue, the second driver queue, and the third driver queue of the electronic device is in a saturated state, the electronic device starts a flow control mechanism such as a sliding window mechanism. Because at least one queue of the electronic device is saturated, there are cases in which the electronic device does not feed back, on a first target window, feedback information for the corresponding data packets to the sender (the other electronic device); for example, the feedback information includes negative acknowledgment (NACK) information or acknowledgment (ACK) information.
  • In this case, the other electronic device does not detect the feedback information for the corresponding data packets on a second target window, so the other electronic device slows down the rate at which it sends data packets to the electronic device, which reduces the load on the other electronic device's processor and saves power.
  • At the same time, the rate at which the electronic device receives data packets also slows down, and in turn the rate at which the electronic device stores data packets into the hardware buffer area slows down. In this way, the rate at which the other electronic device sends data packets and the rate at which the electronic device receives data packets maintain a state of dynamic balance, thereby reducing the packet loss rate of the electronic device and improving the quality of data packet transmission.
  • In a possible implementation, the electronic device further includes a kernel thread, configured to read the data packets in the third driver queue to a kernel layer.
  • The above electronic device abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism: when the rate at which the kernel layer reads data packets from the driver layer is slower than the rate at which data packets accumulate in the third driver queue of the driver layer, and the difference between these two rates reaches a target preset value, the third driver queue starts to fill up with data packets. The third driver thread then stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, the balance between the rate at which the kernel layer reads data packets from the hardware layer and the rate at which the driver layer reads data packets into the third driver queue can be dynamically maintained.
  • In a third aspect, an electronic device is provided, including at least one memory and at least one processor, where the at least one memory is configured to store a program, and the at least one processor is configured to run the program, so as to implement the data packet transmission method described in the first aspect and any possible implementation thereof.
  • In a fourth aspect, a computer-readable storage medium is provided, including computer instructions. When the computer instructions are run on an electronic device, the electronic device is caused to execute the data packet transmission method described in the first aspect and any possible implementation thereof.
  • In a fifth aspect, a chip is provided, including at least one processor and an interface circuit, where the interface circuit is configured to provide program instructions or data for the at least one processor, and the at least one processor is configured to execute the program instructions, so as to implement the data packet transmission method described in the first aspect and any possible implementation thereof.
  • In a sixth aspect, a computer program product is provided, including computer instructions. When the computer instructions are run on an electronic device, the data packet transmission method described in the first aspect and any possible implementation thereof is executed.
  • FIG. 1 is a schematic structural diagram of an example of an electronic device provided by an embodiment of the present application.
  • Fig. 2 is a block diagram of a software structure provided by the embodiment of the present application.
  • FIG. 3 is a schematic diagram of an example of a data packet transmission path provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of steps performed by a first driver thread during the transmission of a data packet provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of steps performed by the second driver thread during the transmission of data packets provided by the embodiment of the present application.
  • FIG. 6 is a schematic flowchart of steps performed by a third driver thread during the transmission of a data packet provided by an embodiment of the present application.
  • Fig. 7 is a schematic flowchart of an example of a data packet transmission method provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another example of an electronic device provided by an embodiment of the present application.
  • the data packet transmission method provided in the embodiment of the present application can be applied to electronic devices based on UDP protocol or electronic devices based on TCP protocol.
  • the UDP protocol is a connectionless transport layer communication protocol.
  • the UDP protocol does not provide a guarantee mechanism for data delivery. If a data packet is lost during the transmission from the sender to the receiver, the protocol itself cannot make any detection or prompt. Therefore, people usually refer to the UDP protocol as an unreliable transmission protocol.
  • the TCP protocol is a connection-oriented, reliable, byte stream-based transport layer communication protocol.
  • the TCP protocol includes a special delivery guarantee mechanism. When the data receiver receives the information from the sender, it will automatically send a confirmation message to the sender; the sender will continue to transmit other information only after receiving the confirmation message, otherwise Will wait until confirmation is received.
  • The electronic device in the embodiments of the present application may refer to user equipment, an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus.
  • The terminal device may also be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, or a terminal device in a future evolved public land mobile network (PLMN), which is not limited in the embodiments of the present application.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access, reduces the waiting time of the processor 110, and thus improves system efficiency.
  • processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • a software system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 2 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the software system is divided into four layers, which are respectively an application layer, a kernel layer, a driver layer and a hardware layer from top to bottom.
  • the hardware layer may include one or more chips, and the one or more chips may store data packets generated by itself and/or received data packets sent from other devices.
  • the chip may include, but is not limited to, a wireless fidelity (WiFi) chip.
  • the hardware layer may apply for a buffer in a chip of the hardware layer, and buffer the received data packet into the buffer applied for by the hardware layer.
  • the driver layer can read data packets from one or more chips in the hardware layer.
  • the driver layer may also apply for a buffer area at the driver layer, and cache the data packets read from the buffer area at the hardware layer into the buffer applied for at the driver layer.
  • the size of the cache area applied by the driver layer is a fixed value.
  • the kernel layer can read data packets from the driver layer. For example, the kernel layer reads the cached data packets from the buffer applied by the driver layer in the driver layer.
  • the application layer can read data packets from the kernel layer through the interface (socket) between the kernel layer and the application layer.
  • After the hardware layer receives the data packets sent by other devices, the data packets are forwarded layer by layer to the upper application layer. Every copy and forwarding of a data packet may cause packet loss.
  • For example, the kernel layer may not be able to transfer the data packets out of the buffer area applied for by the driver layer in time.
  • The transmission method of forwarding data packets from the driver layer to the kernel layer in this way can be called a chimney-style transmission method: new data packets can be stored only when the buffer has free space. As a result, new data packets read by the driver layer from the hardware layer are discarded, resulting in packet loss, as illustrated by the sketch below.
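As an illustration only (not part of the application itself), the chimney-style behavior can be sketched as a bounded buffer that silently drops whatever does not fit; the capacity value and names below are hypothetical.

```python
from collections import deque

DRIVER_BUFFER_CAPACITY = 128  # illustrative fixed size of the driver-layer buffer

def chimney_receive(packet, driver_buffer: deque) -> bool:
    """Chimney-style forwarding: a packet read from the hardware layer is stored
    only if the driver-layer buffer has free space; otherwise it is dropped."""
    if len(driver_buffer) < DRIVER_BUFFER_CAPACITY:
        driver_buffer.append(packet)
        return True
    return False  # dropped -> packet loss at the receiver
```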
  • the data transmitted based on the UDP protocol generally includes multimedia data. Although lost multimedia data may not greatly affect multimedia playback, it degrades the user's viewing experience and results in a poor user experience.
  • the embodiment of the present application provides a data packet transmission method, which can be applied to the electronic device shown in FIG. 1 and/or FIG. 2 .
  • This method abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism, which can maintain a dynamic balance between the rate at which the sending end sends data packets and the rate at which the receiving end receives data packets, thereby reducing the packet loss rate at the receiving end and improving the quality of data packet transmission.
  • the data packets involved may be data packets received by the electronic device and sent from other electronic devices.
  • the embodiment of the present application does not limit the communication manner of the data packets received by the electronic device and sent by other electronic devices.
  • an electronic device may receive data packets sent by other electronic devices through WiFi.
  • the application layer of the electronic device may be used to indicate the specific content of the data packet that the electronic device needs to receive.
  • The data packet transmission method provided by the embodiment of the present application is described in detail below by taking as an example a data packet that is read by the driver layer through three buffer areas in sequence and then read to the kernel layer.
  • For example, the driver layer has applied for three buffer areas at the driver layer, such as a first buffer area, a second buffer area, and a third buffer area, where the first buffer area is the storage space of the first driver queue, the second buffer area is the storage space of the second driver queue, and the third buffer area is the storage space of the third driver queue.
  • In some embodiments, the threads used by the driver layer to read data packets may include only the first driver thread and the second driver thread described below. In this case, the first driver thread performs the steps performed by the first driver thread below, the second driver thread performs the steps performed by the second driver thread below, and the kernel thread reads the data packets from the second driver queue to the kernel buffer area of the kernel layer.
  • In some embodiments, the threads used by the driver layer to read data packets also include other threads. If an additional thread is located after the first driver thread, its function may be similar to that of the second driver thread or the third driver thread; for the specific process, reference may be made to the relevant description of the steps performed by the second driver thread or the third driver thread, which is not repeated here. If an additional thread is located before the first driver thread, its function may be similar to that of the first driver thread described below; for the specific process, reference may be made to the relevant description of the steps performed by the first driver thread below, which is not repeated here.
  • The first driver thread, the second driver thread, and the third driver thread may be the same thread or different threads, which is not limited in this embodiment of the present application.
  • FIG. 3 is a schematic diagram of a data packet transmission path provided by the embodiment of the present application.
  • the hardware layer includes a hardware cache area.
  • the threads corresponding to the driver layer include three threads, for example, a first driver thread, a second driver thread, and a third driver thread.
  • the kernel layer includes a kernel cache area, and the threads corresponding to the kernel layer include kernel threads.
  • The first driver thread is used to read the data packets in the hardware buffer area to the first driver queue of the driver layer, the second driver thread is used to read the data packets in the first driver queue to the second driver queue of the driver layer, and the third driver thread is used to read the data packets in the second driver queue to the third driver queue.
  • The kernel thread is used to read the data packets in the third driver queue into the kernel buffer area of the kernel layer. A sketch of these queues and this read path is given below.
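The following is a minimal, user-space sketch of the buffers and queues on this path, written only to make the later step descriptions concrete; the `BoundedQueue` class, the capacities, and the variable names are hypothetical and are not taken from the application.

```python
from collections import deque

class BoundedQueue:
    """Fixed-capacity FIFO used to model a buffer or driver queue at one layer."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = deque()

    def is_full(self) -> bool:
        return len(self.items) >= self.capacity

    def free_space(self) -> int:
        return self.capacity - len(self.items)

    def size(self) -> int:
        return len(self.items)

    def put(self, packet) -> None:
        if self.is_full():
            raise OverflowError("queue is full")
        self.items.append(packet)

    def get_many(self, count: int) -> list:
        """Remove and return up to `count` packets from the head of the queue."""
        return [self.items.popleft() for _ in range(min(count, len(self.items)))]

# Illustrative capacities only; the application does not specify concrete sizes.
hardware_buffer = BoundedQueue(256)      # buffer applied for at the hardware layer
first_driver_queue = BoundedQueue(128)
second_driver_queue = BoundedQueue(128)
third_driver_queue = BoundedQueue(128)
kernel_buffer = BoundedQueue(256)        # buffer applied for at the kernel layer
```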
  • FIG. 4 is a schematic flowchart of steps performed by a first driver thread during the transmission of a data packet provided by an embodiment of the present application.
  • the steps performed by the first driver thread include S210 to S230.
  • S210 to S230 will be specifically introduced.
  • S210: Before the first driver thread reads the data packets in the hardware buffer area to the first driver queue, the first driver thread determines whether the second driver queue is full.
  • In some embodiments, the first driver thread may learn whether the second driver queue is full from another thread. For example, before S210, the second driver thread has executed S310, that is, the second driver thread already knows whether the second driver queue is full; at this point, the second driver thread can inform the first driver thread whether the second driver queue is full. In some other embodiments, the first driver thread may determine by itself whether the second driver queue is full.
  • In some embodiments, S210 may also be replaced by S210': the first driver thread determines whether the first driver queue is full.
  • S220: If the second driver queue is not full, the first driver thread reads p1 data packets from the hardware buffer area to the first driver queue, where the size of the p1 data packets is Q, Q = min{Q1, Q2}, Q1 is the size of the data packets stored in the hardware buffer area, and Q2 is the size of the remaining space of the first driver queue.
  • S230: If the second driver queue is full, the first driver thread reads n data packets in the hardware buffer area to the first driver queue, where n is less than or equal to a preset value.
  • For example, in order to trigger a subsequent thread (for example, the second driver thread) to execute a corresponding task, the preset value may be equal to 1. A sketch of S210 to S230 follows.
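A minimal sketch of S210 to S230, assuming the hypothetical `BoundedQueue` objects from the earlier sketch; `PRESET_VALUE = 1` follows the example given above, and the extra guard against overfilling the first driver queue is an assumption, not something stated in the application.

```python
PRESET_VALUE = 1  # example preset value given above

def first_driver_thread_step(hardware_buffer, first_driver_queue, second_driver_queue):
    """One pass of the first driver thread (S210-S230)."""
    if not second_driver_queue.is_full():
        # S220: read p1 packets, Q = min{Q1, Q2}
        q = min(hardware_buffer.size(),            # Q1: packets stored in the hardware buffer
                first_driver_queue.free_space())   # Q2: remaining space of the first driver queue
        packets = hardware_buffer.get_many(q)
    else:
        # S230: second driver queue is full, so read at most PRESET_VALUE packets,
        # throttling the rate at which packets leave the hardware buffer.
        n = min(PRESET_VALUE, hardware_buffer.size(),
                first_driver_queue.free_space())   # free-space guard added as an assumption
        packets = hardware_buffer.get_many(n)
    for p in packets:                              # S210 happens before either read above
        first_driver_queue.put(p)
```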
  • FIG. 5 is a schematic flow chart of steps performed by the second driver thread during the transmission of data packets provided by the embodiment of the present application.
  • the steps performed by the second driver thread include S310 to S330.
  • S310 to S330 will be specifically introduced.
  • S310: Before the second driver thread reads the data packets in the first driver queue to the second driver queue, the second driver thread determines whether the second driver queue is full.
  • In some embodiments, the second driver thread may learn whether the second driver queue is full from another thread. For example, before S310, the first driver thread has executed S210, that is, the first driver thread already knows whether the second driver queue is full; at this point, the first driver thread can inform the second driver thread whether the second driver queue is full. In other embodiments, the second driver thread can determine by itself whether the second driver queue is full.
  • S320: If the second driver queue is not full, the second driver thread reads p2 data packets of the first driver queue to the second driver queue, where the size of the p2 data packets is W, W = min{W1, W2}, W1 is the size of the data packets stored in the first driver queue, and W2 is the size of the remaining space in the second driver queue.
  • For example, the second driver thread can read all the data packets (20k of data packets) stored in the first driver queue to the second driver queue; in this case, the size of the p2 data packets described in S320 is 20k.
  • S330: If the second driver queue is full, the second driver thread does not read data packets, that is, the second driver thread does not read the data packets in the first driver queue to the second driver queue. A sketch of S310 to S330 follows.
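A corresponding sketch of S310 to S330, again assuming the hypothetical `BoundedQueue` objects defined earlier.

```python
def second_driver_thread_step(first_driver_queue, second_driver_queue):
    """One pass of the second driver thread (S310-S330)."""
    if second_driver_queue.is_full():
        # S330: read nothing, so packets start to accumulate in the first driver queue.
        return
    # S320: read p2 packets, W = min{W1, W2}
    w = min(first_driver_queue.size(),             # W1: packets stored in the first driver queue
            second_driver_queue.free_space())      # W2: remaining space of the second driver queue
    for p in first_driver_queue.get_many(w):
        second_driver_queue.put(p)
```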
  • FIG. 6 is a schematic flowchart of steps performed by a third driver thread during the transmission of a data packet provided by the embodiment of the present application.
  • the steps performed by the third driver thread include S410 to S430.
  • S410 to S430 will be specifically introduced.
  • S410: Before the third driver thread reads the data packets in the second driver queue to the third driver queue, the third driver thread determines whether the third driver queue is full.
  • S420: If the third driver queue is not full, the third driver thread reads p3 data packets in the second driver queue to the third driver queue, where the size of the p3 data packets is L, L = min{L1, L2}, L1 is the size of the data packets stored in the second driver queue, and L2 is the size of the remaining space of the third driver queue.
  • For example, the third driver thread can read all the data packets (10k of data packets) in the second driver queue to the third driver queue; in this case, the size of the p3 data packets described in S420 is 10k.
  • S430: If the third driver queue is full, the third driver thread does not read data packets, that is, the third driver thread does not read the data packets in the second driver queue to the third driver queue. A sketch of S410 to S430 follows.
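A corresponding sketch of S410 to S430, with the same hypothetical objects as above.

```python
def third_driver_thread_step(second_driver_queue, third_driver_queue):
    """One pass of the third driver thread (S410-S430)."""
    if third_driver_queue.is_full():
        # S430: read nothing, so packets accumulate in the second driver queue,
        # which in turn slows the first and second driver threads down.
        return
    # S420: read p3 packets, L = min{L1, L2}
    l = min(second_driver_queue.size(),            # L1: packets stored in the second driver queue
            third_driver_queue.free_space())       # L2: remaining space of the third driver queue
    for p in second_driver_queue.get_many(l):
        third_driver_queue.put(p)
```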
  • The data packet transmission method provided by the embodiment of the present application, implemented through the steps performed by the above threads (the first driver thread, the second driver thread, and the third driver thread), abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism: when the third driver queue starts to fill up with data packets, the third driver thread stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue.
  • In this way, when at least one of the first driver queue, the second driver queue, and the third driver queue of the electronic device is in a saturated state, the electronic device starts a flow control mechanism such as a sliding window mechanism. Because at least one queue of the electronic device is saturated, there are cases in which the electronic device does not feed back, on a first target window, feedback information for the corresponding data packets to the sender (another electronic device); for example, the feedback information includes negative acknowledgment (NACK) information or acknowledgment (ACK) information.
  • In this case, the other electronic device does not detect the feedback information for the corresponding data packets on a second target window, so the other electronic device slows down the rate at which it sends data packets to the electronic device, which reduces the load on the other electronic device's processor and saves power.
  • At the same time, the rate at which the electronic device receives data packets also slows down, and in turn the rate at which the electronic device stores data packets into the hardware buffer area slows down. In this way, the rate at which the other electronic device sends data packets and the rate at which the electronic device receives data packets maintain a state of dynamic balance, thereby reducing the packet loss rate of the electronic device and improving the quality of data packet transmission.
  • For electronic devices using the UDP protocol, this can reduce the packet loss rate and improve user experience; for electronic devices using the TCP protocol, it can reduce the number of retransmissions, reducing the power consumption of the electronic device and the actual communication bandwidth occupied.
  • In some embodiments, the kernel thread of the kernel layer can also read p4 data packets in the third driver queue to the kernel buffer area of the kernel layer, where the size of the p4 data packets is K, K = min{K1, K2}, K1 is the size of the data packets stored in the third driver queue, and K2 is the size of the remaining space of the kernel buffer area applied for by the kernel layer at the kernel layer. A sketch of this step follows.
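A sketch of the kernel-thread read, with the same hypothetical objects as above.

```python
def kernel_thread_step(third_driver_queue, kernel_buffer):
    """The kernel thread reads p4 packets, K = min{K1, K2}, into the kernel buffer."""
    k = min(third_driver_queue.size(),             # K1: packets stored in the third driver queue
            kernel_buffer.free_space())            # K2: remaining space of the kernel buffer
    for p in third_driver_queue.get_many(k):
        kernel_buffer.put(p)
```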
  • In some embodiments, the kernel layer can also send data reading request information to the application layer through the socket between the kernel layer and the application layer. In this way, after the application layer receives the data reading request information, it reads the data packets from the kernel layer through the corresponding socket.
  • the application layer may also process the read data packets and display them on the electronic device.
  • When the third driver queue starts to fill up with data packets, the third driver thread stops reading data packets from the second driver queue, so the data packets accumulate in the second driver queue.
  • When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. Furthermore, the balance between the rate at which the kernel layer reads data packets from the hardware layer and the rate at which the driver layer reads data packets into the third driver queue can be dynamically maintained. In addition, for electronic devices using the UDP protocol, this can reduce the packet loss rate and improve user experience; for electronic devices using the TCP protocol, it can reduce the number of retransmissions, reducing the power consumption of the electronic device and the actual communication bandwidth occupied. The sketches above can be chained into a single receive cycle, as shown below.
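For completeness, a hypothetical driver loop tying the earlier sketches together; the sender rate and the application-layer drain are placeholders chosen only to show how the queues back-pressure one another, and are not taken from the application.

```python
def receive_cycle(sender_rate, hw, q1, q2, q3, kb):
    """One receive cycle: packets arrive at the hardware buffer and are pulled
    upward; full queues downstream automatically slow the upstream reads."""
    arrived = min(sender_rate, hw.free_space())    # anything beyond this would be lost
    for _ in range(arrived):
        hw.put(object())
    first_driver_thread_step(hw, q1, q2)
    second_driver_thread_step(q1, q2)
    third_driver_thread_step(q2, q3)
    kernel_thread_step(q3, kb)
    kb.get_many(kb.size() // 2)                    # application layer drains part of the kernel buffer
```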
  • Fig. 7 is a schematic flow chart of a method for data packet transmission provided by an embodiment of the present application. The method can be applied to electronic devices.
  • the method may include:
  • Before the first driver thread reads the data packets in the hardware buffer area from the hardware buffer area, it is determined by the first driver thread that the second driver queue is full, where the data packets stored in the hardware buffer area are data packets received by the electronic device from another electronic device.
  • the second driver thread does not read the data packets in the first driver queue to the second driver queue.
  • the third driver thread does not read the data packets of the second driver queue into the third driver queue.
  • FIG. 8 is a schematic structural diagram of an example of an electronic device provided by an embodiment of the present application.
  • the electronic device 600 includes: at least one processor 610 and at least one memory 620 .
  • the at least one memory 620 is used to store a program, and the at least one processor 610 is used to run the program, so as to implement the data packet transmission method described above.
  • the memory 620 may be a read only memory (read only memory, ROM), a static storage device, a dynamic storage device or a random access memory (random access memory, RAM).
  • the memory 620 may store a program, and when the program stored in the memory 620 is executed by the processor 610, the processor 610 may be used to execute each step of the data packet transmission method provided by the embodiment of the present application. That is to say, the processor 610 may acquire stored instructions from the memory 620 to execute each step of the data packet transmission method provided in the embodiment of the present application.
  • The processor 610 may be a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, configured to execute related programs, so as to realize the functions executed by each thread in the electronic device 600 provided in the embodiment of the present application.
  • the processor 610 may also be an integrated circuit chip, which has a signal processing capability.
  • each step of the data packet transmission method provided by the embodiment of the present application may be completed by an integrated logic circuit of hardware in a processor or an instruction in the form of software.
  • The processor 610 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement or execute each step of the data packet transmission method provided in the embodiment of the present application.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • Each step of the data packet transmission method provided in the embodiment of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • The storage medium is located in the memory, and the processor reads the information in the memory and completes, in combination with its hardware, the functions required by the units included in any implementable manner of the electronic device 600 of the embodiment of the present application, or executes the data packet transmission method provided in the embodiment of the present application.
  • the electronic device 600 further includes an interface circuit.
  • the interface circuit may use a transceiver device such as but not limited to a transceiver to implement communication between the electronic device and other devices or a communication network.
  • the interface circuit can also be, for example, a communication interface.
  • the embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium has program instructions, and when the program instructions are executed directly or indirectly, the aforementioned data packet transmission method is realized.
  • the embodiment of the present application also provides a chip, including at least one processor and an interface circuit, the interface circuit is used to provide program instructions or data for the at least one processor, and the at least one processor is used to execute the program instructions , so that the transmission method of the data packet mentioned above can be realized.
  • The embodiment of the present application also provides a computer program product containing instructions, which, when run on a computing device, enables the computing device to execute the data packet transmission method described above, or enables the computing device to implement the functions of the electronic device described above.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware or other arbitrary combinations.
  • the above-described embodiments may be implemented in whole or in part in the form of computer program products.
  • the computer program product comprises one or more computer instructions or computer programs.
  • when the computer instructions or the computer program are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless (such as infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that includes one or more sets of available media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media.
  • the semiconductor medium may be a solid state drive.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions described above are realized in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A data packet transmission method and an electronic device. The method is applied to an electronic device and includes: before a first driver thread reads data packets from a hardware buffer area, if the first driver thread determines that a second driver queue is full, the first driver thread reads n data packets in the hardware buffer area into a first driver queue, where n is less than or equal to a preset value; before a second driver thread reads data packets from the first driver queue, if the second driver thread determines that the second driver queue is full, the second driver thread does not read the data packets in the first driver queue into the second driver queue; and before a third driver thread reads data packets from the second driver queue, if the third driver thread determines that a third driver queue is full, the third driver thread does not read the data packets of the second driver queue into the third driver queue.

Description

Data transmission method and electronic device
This application claims priority to Russian Federation patent application No. 2021130496, filed with the Patent Office of the Russian Federation on October 20, 2021 and entitled "Data transmission method and electronic device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communications, and more specifically, to a data transmission method and an electronic device.
Background
Device-to-device (D2D) communication is a technology that, under the control of the system, allows devices to communicate with each other directly by reusing cell resources. In D2D communication, the packet loss rate is an important parameter for measuring D2D communication performance. For unreliable communication, for example, communication based on the user datagram protocol (UDP), a high packet loss rate means that transmitted data packets are simply lost, resulting in poor transmission quality and data distortion. For reliable communication, for example, communication based on the transmission control protocol (TCP), a high packet loss rate means more data packet retransmissions, which leads to a serious decrease in the actual transmission throughput.
Summary
This application provides a data transmission method and an electronic device. The method can reduce the packet loss rate of the electronic device and improve the quality of data packet transmission.
According to a first aspect, a data packet transmission method is provided. The method is applied to an electronic device and includes: before a first driver thread reads data packets in a hardware buffer area from the hardware buffer area, determining, by the first driver thread, that a second driver queue is full, where the data packets stored in the hardware buffer area are data packets received by the electronic device from another electronic device; reading, by the first driver thread, n data packets in the hardware buffer area into a first driver queue, where n is less than or equal to a preset value; before a second driver thread reads data packets in the first driver queue from the first driver queue, determining, by the second driver thread, that the second driver queue is full; not reading, by the second driver thread, the data packets in the first driver queue into the second driver queue; before a third driver thread reads data packets in the second driver queue from the second driver queue, determining, by the third driver thread, that a third driver queue is full; and not reading, by the third driver thread, the data packets of the second driver queue into the third driver queue.
For example, the preset value may be 1.
It should be understood that the hardware buffer area is a buffer area corresponding to the hardware layer of the electronic device.
It should be understood that the first driver thread, the second driver thread, and the third driver thread are threads corresponding to the driver layer of the electronic device.
It should be understood that the first driver queue, the second driver queue, and the third driver queue are configured by the driver layer of the electronic device.
The above technical solution abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism: when the third driver queue starts to fill up with data packets, the third driver thread stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, when at least one of the first driver queue, the second driver queue, and the third driver queue of the electronic device is in a saturated state, the electronic device starts a flow control mechanism such as a sliding window mechanism. Because at least one queue of the electronic device is saturated, there are cases in which the electronic device does not feed back, on a first target window, feedback information for the corresponding data packets to the sender (the other electronic device); for example, the feedback information includes negative acknowledgment (NACK) information or acknowledgment (ACK) information. In this case, the other electronic device does not detect the feedback information for the corresponding data packets on a second target window, so the other electronic device slows down the rate at which it sends data packets to the electronic device, which reduces the load on the other electronic device's processor and saves power. At the same time, the rate at which the electronic device receives data packets also slows down, and in turn the rate at which the electronic device stores data packets into the hardware buffer area slows down. In this way, the rate at which the other electronic device sends data packets and the rate at which the electronic device receives data packets maintain a state of dynamic balance, thereby reducing the packet loss rate of the electronic device and improving the quality of data packet transmission.
With reference to the first aspect, in a possible implementation, the method further includes: before the first driver thread reads the data packets in the hardware buffer area from the hardware buffer area, determining, by the first driver thread, that the second driver queue is not full; and reading, by the first driver thread, p1 data packets in the hardware buffer area into the first driver queue, where the size of the p1 data packets is Q, Q = min{Q1, Q2}, Q1 is the size of the data packets stored in the hardware buffer area, and Q2 is the size of the remaining space of the first driver queue.
With reference to the first aspect, in a possible implementation, the method further includes: reading, by the second driver thread, p2 data packets in the first driver queue into the second driver queue, where the size of the p2 data packets is W, W = min{W1, W2}, W1 is the size of the data packets stored in the first driver queue, and W2 is the size of the remaining space of the second driver queue.
With reference to the first aspect, in a possible implementation, the method further includes: before the third driver thread reads the data packets in the second driver queue from the second driver queue, determining, by the third driver thread, that the third driver queue is not full; and reading, by the third driver thread, p3 data packets in the second driver queue into the third driver queue, where the size of the p3 data packets is L, L = min{L1, L2}, L1 is the size of the data packets stored in the second driver queue, and L2 is the size of the remaining space of the third driver queue.
With reference to the first aspect, in a possible implementation, the method further includes: reading, by a kernel thread, the data packets in the third driver queue to a kernel layer.
It should be understood that the kernel thread is a thread corresponding to the kernel layer of the electronic device.
The above technical solution abandons the original chimney-style data packet transmission method and adopts a data packet transmission method with a feedback mechanism: when the rate at which the kernel layer reads data packets from the driver layer is slower than the rate at which data packets accumulate in the third driver queue of the driver layer, and the difference between these two rates reaches a target preset value, the third driver queue starts to fill up with data packets. The third driver thread then stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which data packets are read from the hardware buffer area into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, the balance between the rate at which the kernel layer reads data packets from the hardware layer and the rate at which the driver layer reads data packets into the third driver queue can be dynamically maintained.
In a second aspect, an electronic device is provided. The electronic device includes: a first driver thread, configured to determine, before the first driver thread reads the data packets in a hardware buffer from the hardware buffer, that a second driver queue is full, where the data packets stored in the hardware buffer are data packets received by the electronic device from another electronic device; the first driver thread being further configured to read n data packets in the hardware buffer into a first driver queue, where n is less than or equal to a preset value; a second driver thread, configured to determine, before the second driver thread reads the data packets in the first driver queue from the first driver queue, that the second driver queue is full; the second driver thread being further configured not to read the data packets in the first driver queue into the second driver queue; a third driver thread, configured to determine, before the third driver thread reads the data packets in the second driver queue from the second driver queue, that a third driver queue is full; and the third driver thread being further configured not to read the data packets in the second driver queue into the third driver queue.

For example, the preset value is 1.

The above electronic device abandons the original chimney-style packet transfer method and adopts a packet transfer method with a feedback mechanism. That is, when the third driver queue starts to fill up with data packets, the third driver thread stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which it reads data packets from the hardware buffer into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, when at least one of the first driver queue, the second driver queue, and the third driver queue of the electronic device is saturated, the electronic device starts a flow control mechanism such as a sliding window mechanism. Because at least one queue of the electronic device is saturated, there are cases in which, in a first target window, the electronic device does not feed back feedback information for the corresponding data packets to the sending end (the other electronic device); for example, the feedback information includes negative acknowledgement (NACK) information or acknowledgement (ACK) information. In this case, the other electronic device also does not detect feedback information for the corresponding data packets in a second target window, so the other electronic device slows down the rate at which it sends data packets to the electronic device, which reduces the load on the processor of the other electronic device and saves power. At the same time, the rate at which the electronic device receives data packets also slows down accordingly, and therefore the rate at which the electronic device stores data packets into the hardware buffer also slows down. In this way, the rate at which the other electronic device sends data packets and the rate at which the electronic device receives data packets are kept in a state of dynamic balance, which reduces the packet loss rate of the electronic device and improves the quality of data packet transmission.
With reference to the second aspect, in a possible implementation, the first driver thread is further configured to: before the first driver thread reads the data packets in the hardware buffer from the hardware buffer, determine that the second driver queue is not full; and read p1 data packets in the hardware buffer into the first driver queue; where the size of the p1 data packets is Q, Q = min{Q1, Q2}, Q1 is the size of the data packets stored in the hardware buffer, and Q2 is the size of the remaining space of the first driver queue.

With reference to the second aspect, in a possible implementation, the second driver thread is further configured to: read p2 data packets in the first driver queue into the second driver queue; where the size of the p2 data packets is W, W = min{W1, W2}, W1 is the size of the data packets stored in the first driver queue, and W2 is the size of the remaining space of the second driver queue.

With reference to the second aspect, in a possible implementation, the third driver thread is further configured to: before the third driver thread reads the data packets in the second driver queue from the second driver queue, determine that the third driver queue is not full; and read p3 data packets in the second driver queue into the third driver queue; where the size of the p3 data packets is L, L = min{L1, L2}, L1 is the size of the data packets stored in the second driver queue, and L2 is the size of the remaining space of the third driver queue.

With reference to the second aspect, in a possible implementation, the electronic device further includes: a kernel thread, configured to read the data packets in the third driver queue to a kernel layer.

The above electronic device abandons the original chimney-style packet transfer method and adopts a packet transfer method with a feedback mechanism. That is, when the rate at which the kernel layer reads data packets from the driver layer is slower than the rate at which the third driver queue in the driver layer accumulates data packets, and the difference between the two rates reaches a target preset value, the third driver queue starts to fill up with data packets. The third driver thread then stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which it reads data packets from the hardware buffer into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, a balance can be dynamically maintained between the rate at which the kernel layer reads data packets from the hardware layer and the rate at which the driver layer reads data packets into the third driver queue.
In a third aspect, an electronic device is provided, including at least one memory and at least one processor, where the at least one memory is configured to store a program, and the at least one processor is configured to run the program, so as to implement the data packet transmission method according to the first aspect or any one of its possible implementations.

In a fourth aspect, a computer-readable storage medium is provided, including computer instructions which, when run on an electronic device, cause the electronic device to perform the data packet transmission method according to the first aspect or any one of its possible implementations.

In a fifth aspect, a chip is provided, including at least one processor and an interface circuit, where the interface circuit is configured to provide program instructions or data for the at least one processor, and the at least one processor is configured to execute the program instructions, so as to implement the data packet transmission method according to the first aspect or any one of its possible implementations.

In a sixth aspect, a computer program product is provided, including computer instructions which, when run on an electronic device, cause the data packet transmission method according to the first aspect or any one of its possible implementations to be performed.
Brief Description of Drawings

Fig. 1 is a schematic structural diagram of an example electronic device according to an embodiment of the present application.

Fig. 2 is a block diagram of an example software structure according to an embodiment of the present application.

Fig. 3 is a schematic diagram of an example packet transmission path according to an embodiment of the present application.

Fig. 4 is a schematic flowchart of the steps performed by the first driver thread during packet transmission according to an embodiment of the present application.

Fig. 5 is a schematic flowchart of the steps performed by the second driver thread during packet transmission according to an embodiment of the present application.

Fig. 6 is a schematic flowchart of the steps performed by the third driver thread during packet transmission according to an embodiment of the present application.

Fig. 7 is a schematic flowchart of an example data packet transmission method according to an embodiment of the present application.

Fig. 8 is a schematic structural diagram of another example electronic device according to an embodiment of the present application.
Detailed Description

The technical solutions in the present application are described below with reference to the accompanying drawings.

The data packet transmission method provided in the embodiments of the present application can be applied to electronic devices based on the UDP protocol or electronic devices based on the TCP protocol.

UDP is a connectionless transport layer communication protocol. UDP does not provide a guarantee mechanism for data delivery. If data packets are lost during delivery from the sender to the receiver, the protocol itself cannot detect this or give any indication. Therefore, UDP is usually referred to as an unreliable transport protocol.

TCP is a connection-oriented, reliable, byte-stream-based transport layer communication protocol. TCP includes a dedicated delivery guarantee mechanism: when the data receiver receives information from the sender, it automatically sends an acknowledgement message to the sender; the sender continues to send other information only after receiving this acknowledgement message, and otherwise keeps waiting until the acknowledgement is received.

The electronic device in the embodiments of the present application may refer to user equipment, an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus. The terminal device may also be a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, or a terminal device in a future evolved Public Land Mobile Network (PLMN), etc., which is not limited in the embodiments of the present application.
By way of example, Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.

It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent components or may be integrated into one or more processors.

The controller may be the nerve center and command center of the electronic device 100. The controller may generate operation control signals according to instruction operation codes and timing signals, and complete the control of fetching and executing instructions.

A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.

In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.

It can be understood that the interface connection relationships between the modules illustrated in the embodiments of the present application are merely illustrative and do not constitute a structural limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.

The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization.

The mobile communication module 150 may provide solutions for wireless communication, including 2G/3G/4G/5G, applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves via the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves to be radiated via the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 and at least some of the modules of the processor 110 may be provided in the same component.

In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).

The electronic device 100 implements the display function through the GPU, the display 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.

The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, storing files such as music and video in the external memory card.

The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system and applications required for at least one function (such as a sound playback function and an image playback function). The data storage area may store data (such as audio data and a phone book) created during the use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.

The electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present application take a software system with a layered architecture as an example to illustrate the software structure of the electronic device 100.

Fig. 2 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces.

For example, as shown in Fig. 2, the software system is divided into four layers, which from top to bottom are the application layer, the kernel layer, the driver layer, and the hardware layer.

The hardware layer may include one or more chips, and the one or more chips may store data packets generated by themselves and/or data packets received from other devices. For example, the chip may include, but is not limited to, a wireless fidelity (WiFi) chip.

In some embodiments, the hardware layer may allocate a buffer in a chip of the hardware layer and cache received data packets in the buffer allocated by the hardware layer.

The driver layer may read data packets from the one or more chips of the hardware layer.

In some embodiments, the driver layer may also allocate a buffer in the driver layer and cache the data packets read from the buffer of the hardware layer into the buffer it allocated in the driver layer. The size of the buffer allocated by the driver layer is a fixed value.

The kernel layer may read data packets from the driver layer. For example, the kernel layer reads the cached data packets from the buffer that the driver layer allocated in the driver layer.

The application layer may read data packets from the kernel layer through the interface (socket) between the kernel layer and the application layer.
According to the schematic diagram of the software architecture of the electronic device 100 shown in Fig. 2, after the hardware layer receives data packets sent by other devices, the packets are forwarded layer by layer until they reach the upper application layer, and every copy and forwarding of a data packet may cause packet loss. In particular, during the forwarding of data packets from the driver layer to the kernel layer, if the rate at which the kernel layer reads data packets from the driver layer is lower than the rate at which the hardware layer receives data packets, the kernel layer cannot take the data packets out of the buffer allocated by the driver layer in time. Because the size of this buffer is a fixed value, its storage space becomes fully occupied by data packets, that is, the buffer has no free space in which to store the other data packets waiting to be forwarded. This way of transferring data packets from the driver layer to the kernel layer may be called a chimney-style transfer method: a new data packet can be stored only when the buffer has free space. As a result, new data packets read by the driver layer from the hardware layer are discarded, causing packet loss.
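The chimney-style behaviour described above can be illustrated with a small, self-contained sketch. The buffer size, names, and packet counts below are assumptions chosen for illustration and are not taken from the patent; the point is only that, with a fixed-size buffer and no feedback, every packet that arrives while the buffer is full is simply discarded.

```python
from collections import deque

DRIVER_BUF_SIZE = 4                 # assumed fixed-size driver-layer buffer
driver_buf = deque()
dropped = 0

def hardware_to_driver(packets):
    """Driver reads packets from the hardware layer; no feedback exists."""
    global dropped
    for pkt in packets:
        if len(driver_buf) < DRIVER_BUF_SIZE:
            driver_buf.append(pkt)   # cached, waiting for the kernel layer
        else:
            dropped += 1             # buffer full -> the new packet is lost

def kernel_reads(count):
    """Kernel layer drains the driver buffer at its own (slower) rate."""
    taken = []
    while driver_buf and len(taken) < count:
        taken.append(driver_buf.popleft())
    return taken

hardware_to_driver(range(10))        # 10 packets arrive, only 4 fit
kernel_reads(2)                      # kernel is slow to drain
hardware_to_driver(range(10, 15))    # more arrivals, more drops
print("dropped packets:", dropped)
```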
For an electronic device using the UDP protocol, the data transmitted based on UDP generally includes, for example, multimedia data. Although the lost multimedia data does not greatly affect multimedia playback, it does affect the user's viewing experience and results in a poor user experience.

For an electronic device using the TCP protocol, the data transmitted based on TCP is generally relatively important data. Therefore, if data is lost, the lost data needs to be recovered through retransmission. Although this solves the problem of a high packet loss rate, it adds a retransmission process, increases the power consumption of the electronic device, and reduces the actual communication bandwidth.

Therefore, the embodiments of the present application provide a data packet transmission method, which can be applied to the electronic device shown in Fig. 1 and/or Fig. 2. The method abandons the original chimney-style packet transfer method and adopts a packet transfer method with a feedback mechanism, so that the rate at which the sending end sends data packets and the rate at which the receiving end receives data packets are kept in a state of dynamic balance, which reduces the packet loss rate at the receiving end and improves the quality of data packet transmission.

In the embodiments of the present application, the data packets involved may be data packets received by the electronic device from other electronic devices. The embodiments of the present application do not limit the communication manner in which the electronic device receives data packets sent by other electronic devices. For example, the electronic device may receive data packets sent by other electronic devices via WiFi.

For example, in the embodiments of the present application, the application layer of the electronic device may be used to indicate the specific content of the data packets that the electronic device needs to receive.
The data packet transmission method provided in the embodiments of the present application is described in detail below with reference to the accompanying drawings.

In the embodiments of the present application, the number of times a data packet is cached in the driver layer is not limited.

In the following, the data packet transmission method provided in the embodiments of the present application is described in detail by taking as an example the case where the driver layer reads and caches the data packets to be read three times and then reads them to the kernel layer. The driver layer allocates three buffers in the driver layer, for example a first buffer, a second buffer, and a third buffer. In this case, the first buffer is the storage space of the first driver queue, the second buffer is the storage space of the second driver queue, and the third buffer is the storage space of the third driver queue.

In the embodiments of the present application, the number of threads inside the driver layer that read data packets is not limited.

In the following, the data packet transmission method provided in the embodiments of the present application is described in detail by taking three threads (for example, a first driver thread, a second driver thread, and a third driver thread) as the threads that read data packets.

In some embodiments, the threads in the driver layer that read data packets may include only the first driver thread and the second driver thread described below. The first driver thread performs the steps performed by the first driver thread below, and the second driver thread performs the steps performed by the second driver thread below. In this case, the kernel thread reads data packets from the second driver queue to the kernel buffer of the kernel layer.

In some embodiments, in addition to the first driver thread, the second driver thread, and the third driver thread described below, the threads in the driver layer that read data packets also include other threads. If the other threads come after the first driver thread, their functions may be similar to those of the second driver thread or the third driver thread. For the specific process, reference may be made to the relevant descriptions of the steps performed by the second driver thread or the third driver thread, which are not repeated here. If the other threads come before the first driver thread, their functions may be similar to those of the first driver thread described below. For the specific process, reference may be made to the relevant descriptions of the steps performed by the first driver thread below, which are not repeated here.

It should be understood that the first driver thread, the second driver thread, and the third driver thread may be the same thread or different threads, which is not limited in the embodiments of the present application.

For example, Fig. 3 is a schematic diagram of the packet transmission path according to an embodiment of the present application. As shown in Fig. 3, the hardware layer includes a hardware buffer. The threads corresponding to the driver layer include three threads, for example a first driver thread, a second driver thread, and a third driver thread. The kernel layer includes a kernel buffer, and the thread corresponding to the kernel layer includes a kernel thread. The first driver thread is used to read data packets in the hardware buffer into the first driver queue of the driver layer, the second driver thread is used to read data packets in the first driver queue into the second driver queue of the driver layer, and the third driver thread is used to read data packets in the second driver queue into the third driver queue. The kernel thread is used to read data packets in the third driver queue into the kernel buffer of the kernel layer.
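As a rough illustration only, the queue layout of Fig. 3 can be sketched with bounded queues. The names (hw_buffer, drv_q1, drv_q2, drv_q3, kernel_buf), the queue sizes, and the helper function move() are assumptions introduced here, not details from the patent; move() expresses the min{stored packets, remaining space} rule used throughout the embodiments, counted in packets rather than bytes for simplicity, and it assumes a single reader and a single writer per queue.

```python
import queue

hw_buffer  = queue.Queue()            # packets received from the peer device
drv_q1     = queue.Queue(maxsize=64)  # first driver queue
drv_q2     = queue.Queue(maxsize=64)  # second driver queue
drv_q3     = queue.Queue(maxsize=64)  # third driver queue
kernel_buf = queue.Queue(maxsize=64)  # kernel-layer buffer

def move(src, dst, limit=None):
    """Move at most min(packets stored in src, free space in dst) packets."""
    budget = min(src.qsize(), dst.maxsize - dst.qsize())
    if limit is not None:
        budget = min(budget, limit)
    for _ in range(budget):
        dst.put_nowait(src.get_nowait())
    return budget
```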
In the following, the steps performed by each thread in the data packet transmission method provided in the embodiments of the present application are described with reference to Fig. 4 to Fig. 6.

Fig. 4 is a schematic flowchart of the steps performed by the first driver thread during packet transmission according to an embodiment of the present application. For example, as shown in Fig. 4, the steps performed by the first driver thread include S210 to S230. S210 to S230 are described in detail below.

S210: Before the first driver thread reads the data packets in the hardware buffer from the hardware buffer into the first driver queue, the first driver thread determines whether the second driver queue is full.

In some embodiments, the first driver thread may determine whether the second driver queue is full by means of another thread. For example, before S210, the second driver thread has already performed S310, that is, the second driver thread already knows whether the second driver queue is full; in this case, the second driver thread may inform the first driver thread whether the second driver queue is full. In other embodiments, the first driver thread may itself determine whether the second driver queue is full.

If the second driver queue is not full, S220 is performed. If the second driver queue is full, S230 is performed.

In some embodiments, S210 may also be replaced by S210'.

S210': Before the first driver thread reads the data packets in the hardware buffer from the hardware buffer into the first driver queue, the first driver thread determines whether the first driver queue is full.

If the first driver queue is not full, S220 is performed. If the first driver queue is full, S230 is performed; or, the first driver thread does not read data packets, that is, the first driver thread does not read the data packets in the hardware buffer into the first driver queue.

S220: The first driver thread reads p1 data packets from the hardware buffer into the first driver queue.

For example, Q = min{Q1, Q2}, where Q is the size of the p1 data packets, Q1 is the size of the data packets stored in the hardware buffer, and Q2 is the size of the remaining space of the first driver queue.
For example, if the size of the data packets stored in the hardware buffer is 20k (Q1 = 20k) and the size of the remaining space of the first driver queue is 10k (Q2 = 10k), then Q = min{20k, 10k} = 10k. That is, the first driver thread can read at most 10k of data packets from the hardware buffer into the first driver queue. In this case, the size of the p1 data packets in S220 is 10k.
S230: The first driver thread reads n data packets from the hardware buffer into the first driver queue, where n is less than or equal to a preset value.

In some embodiments, in order to trigger a subsequent thread (for example, the second driver thread) to perform the corresponding task, the preset value may be equal to 1.
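Under the same assumptions as the sketch above, and reusing its queues and move() helper, one iteration of the logic in S210 to S230 might look like the following; n_max stands for the preset value n and its default of 1 is an illustrative choice, not a value fixed by the patent.

```python
def first_driver_thread_step(n_max=1):
    # S210: check whether the second driver queue is full
    if drv_q2.full():
        # S230: read at most n (<= preset value) packets into the first queue
        move(hw_buffer, drv_q1, limit=n_max)
    else:
        # S220: read min(Q1 stored in hardware buffer, Q2 free space) packets
        move(hw_buffer, drv_q1)
```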
Fig. 5 is a schematic flowchart of the steps performed by the second driver thread during packet transmission according to an embodiment of the present application. For example, as shown in Fig. 5, the steps performed by the second driver thread include S310 to S330. S310 to S330 are described in detail below.

S310: Before the second driver thread reads the data packets in the first driver queue from the first driver queue into the second driver queue, the second driver thread determines whether the second driver queue is full.

In some embodiments, the second driver thread may determine whether the second driver queue is full by means of another thread. For example, before S310, the first driver thread has already performed S210, that is, the first driver thread already knows whether the second driver queue is full; in this case, the first driver thread may inform the second driver thread whether the second driver queue is full. In other embodiments, the second driver thread may itself determine whether the second driver queue is full.

If the second driver queue is not full, S320 is performed. If the second driver queue is full, S330 is performed.

S320: The second driver thread reads p2 data packets from the first driver queue into the second driver queue.

For example, W = min{W1, W2}, where W is the size of the p2 data packets, W1 is the size of the data packets stored in the first driver queue, and W2 is the size of the remaining space of the second driver queue.
For example, if the size of the data packets stored in the first driver queue is 20k (W1 = 20k) and the size of the remaining space of the second driver queue is 20k (W2 = 20k), then W = min{20k, 20k} = 20k. That is, the second driver thread can read all the data packets stored in the first driver queue (20k of data packets) into the second driver queue. In this case, the size of the p2 data packets in S320 is 20k.
S330: The second driver thread does not read data packets. That is, the second driver thread does not read the data packets in the first driver queue into the second driver queue.
Fig. 6 is a schematic flowchart of the steps performed by the third driver thread during packet transmission according to an embodiment of the present application. For example, as shown in Fig. 6, the steps performed by the third driver thread include S410 to S430. S410 to S430 are described in detail below.

S410: Before the third driver thread forwards (reads from the second queue to the third queue) the data packets in the second driver queue into the third driver queue, the third driver thread determines whether the third driver queue is full.

If the third driver queue is not full, S420 is performed. If the third driver queue is full, S430 is performed.

S420: The third driver thread reads p3 data packets from the second driver queue into the third driver queue.

In some embodiments, L = min{L1, L2}, where L is the size of the p3 data packets, L1 is the size of the data packets stored in the second driver queue, and L2 is the size of the remaining space of the third driver queue.

For example, if the size of the data packets stored in the second driver queue is 10k (L1 = 10k) and the size of the remaining space of the third driver queue is 20k (L2 = 20k), then L = min{10k, 20k} = 10k. That is, the third driver thread can read all the data packets in the second driver queue (10k of data packets) into the third driver queue. In this case, the size of the p3 data packets in S420 is 10k.

S430: The third driver thread does not read data packets. That is, the third driver thread does not read the data packets in the second driver queue into the third driver queue.
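Continuing the same illustrative sketch (and reusing the queues and move() helper assumed above), the second and third driver threads of S310 to S330 and S410 to S430 follow one shared pattern: each reads nothing while its downstream queue is full, which is what lets packets back up toward the hardware buffer.

```python
def second_driver_thread_step():
    if drv_q2.full():        # S310 -> S330: second driver queue full, read nothing
        return
    move(drv_q1, drv_q2)     # S320: read min(W1, W2) packets

def third_driver_thread_step():
    if drv_q3.full():        # S410 -> S430: third driver queue full, read nothing
        return
    move(drv_q2, drv_q3)     # S420: read min(L1, L2) packets
```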
The data packet transmission method provided in the embodiments of the present application, implemented through the steps performed by the threads described above (the first driver thread, the second driver thread, and the third driver thread), abandons the original chimney-style packet transfer method and adopts a packet transfer method with a feedback mechanism. That is, when the third driver queue starts to fill up with data packets, the third driver thread stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which it reads data packets from the hardware buffer into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, when at least one of the first driver queue, the second driver queue, and the third driver queue of the electronic device is saturated, the electronic device starts a flow control mechanism such as a sliding window mechanism. Because at least one queue of the electronic device is saturated, there are cases in which, in a first target window, the electronic device does not feed back feedback information for the corresponding data packets to the sending end (the other electronic device); for example, the feedback information includes negative acknowledgement (NACK) information or acknowledgement (ACK) information. In this case, the other electronic device also does not detect feedback information for the corresponding data packets in a second target window, so the other electronic device slows down the rate at which it sends data packets to the electronic device, which reduces the load on the processor of the other electronic device and saves power. At the same time, the rate at which the electronic device receives data packets also slows down accordingly, and therefore the rate at which the electronic device stores data packets into the hardware buffer also slows down. In this way, the rate at which the other electronic device sends data packets and the rate at which the electronic device receives data packets are kept in a state of dynamic balance, which reduces the packet loss rate of the electronic device and improves the quality of data packet transmission. Furthermore, for an electronic device using the UDP protocol, the packet loss rate can be reduced and the user experience improved; for an electronic device using the TCP protocol, the number of retransmissions can be reduced, which lowers the power consumption of the electronic device and avoids a reduction in the actual communication bandwidth.
In addition, the kernel thread of the kernel layer may read p4 data packets in the third driver queue into the kernel buffer of the kernel layer.

In some embodiments, K = min{K1, K2}, where K is the size of the p4 data packets, K1 is the size of the data packets stored in the third driver queue, and K2 is the size of the remaining space of the kernel buffer allocated by the kernel layer in the kernel layer.
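A corresponding sketch of the kernel thread's drain step, again reusing the illustrative queues and move() helper assumed above, is given below; the per-round budget and arrival rate are arbitrary assumptions used only to show how a slow kernel-side drain makes the driver queues fill up and the hardware-side intake back up.

```python
def kernel_thread_step(budget=2):
    # Reads min(K1 stored in the third driver queue, K2 free kernel space)
    # packets; the extra cap models a deliberately slow kernel-layer consumer.
    return move(drv_q3, kernel_buf, limit=budget)

for _ in range(50):
    for _ in range(8):                   # 8 packets arrive from the peer per round
        hw_buffer.put_nowait(object())
    first_driver_thread_step()
    second_driver_thread_step()
    third_driver_thread_step()
    kernel_thread_step()
print("left waiting in hardware buffer:", hw_buffer.qsize())
```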
Further, the kernel layer may also send data read request information to the application layer through the socket between the kernel layer and the application layer. In this way, after receiving the data read request information, the application layer reads the data packets from the kernel layer through the corresponding socket.

In some embodiments, the application layer may also process the read data packets and display them on the electronic device.

Through the above data packet transmission method, when the rate at which the kernel layer reads data packets from the driver layer is slower than the rate at which the third driver queue in the driver layer accumulates data packets, and the difference between the two rates reaches a target preset value, the third driver queue starts to fill up with data packets. The third driver thread then stops reading data packets from the second driver queue, so data packets accumulate in the second driver queue. When the second driver queue reaches its storage limit (that is, the second driver queue is full), the first driver thread slows down the rate at which it reads data packets from the hardware buffer into the first driver queue, and the second driver thread stops reading data packets from the first driver queue into the second driver queue. In this way, a balance can be dynamically maintained between the rate at which the kernel layer reads data packets from the hardware layer and the rate at which the driver layer reads data packets into the third driver queue. In addition, for an electronic device using the UDP protocol, the packet loss rate can be reduced and the user experience improved; for an electronic device using the TCP protocol, the number of retransmissions can be reduced, which lowers the power consumption of the electronic device and avoids a reduction in the actual communication bandwidth.
Fig. 7 is a schematic flowchart of a data packet transmission method according to an embodiment of the present application. The method can be applied to an electronic device.

For example, as shown in Fig. 7, the method may include:

S510: Before a first driver thread reads the data packets in a hardware buffer from the hardware buffer, the first driver thread determines that a second driver queue is full, where the data packets stored in the hardware buffer are data packets received by the electronic device from another electronic device.

S520: The first driver thread reads n data packets in the hardware buffer into a first driver queue, where n is less than or equal to a preset value.

S530: Before a second driver thread reads the data packets in the first driver queue from the first driver queue, the second driver thread determines that the second driver queue is full.

S540: The second driver thread does not read the data packets in the first driver queue into the second driver queue.

S550: Before a third driver thread reads the data packets in the second driver queue from the second driver queue, the third driver thread determines that a third driver queue is full.

S560: The third driver thread does not read the data packets in the second driver queue into the third driver queue.
It should be understood that, among S510 to S560, S520 is performed after S510, S540 is performed after S530, and S560 is performed after S550; there is no fixed execution order among the other steps, and the other steps may also be performed simultaneously.
The data packet transmission method provided in the present application has been described in detail above with reference to Fig. 3 and Fig. 7. An embodiment of the electronic device provided in the present application is described in detail below with reference to Fig. 8.

Fig. 8 is a schematic structural diagram of an example electronic device according to an embodiment of the present application.

For example, as shown in Fig. 8, the electronic device 600 includes at least one processor 610 and at least one memory 620. The at least one memory 620 is configured to store a program, and the at least one processor 610 is configured to run the program, so as to implement the data packet transmission method described above.

For example, the memory 620 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 620 may store a program. When the program stored in the memory 620 is executed by the processor 610, the processor 610 may be configured to perform the steps of the data packet transmission method provided in the embodiments of the present application. In other words, the processor 610 may obtain the stored instructions from the memory 620 to perform the steps of the data packet transmission method provided in the embodiments of the present application.

For example, the processor 610 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, configured to execute related programs so as to implement the functions performed by the threads in the electronic device 600 provided in the embodiments of the present application.

For example, the processor 610 may also be an integrated circuit chip with signal processing capability. During implementation, the steps of the data packet transmission method provided in the embodiments of the present application may be completed by hardware integrated logic circuits in the processor or by instructions in the form of software.

For example, the processor 610 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the steps of the data packet transmission method provided in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The steps of the data packet transmission method provided in the embodiments of the present application may be directly embodied as being performed and completed by a hardware decoding processor, or performed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and, in combination with its hardware, completes the functions to be performed by the units included in any implementable manner of the electronic device 600 in the embodiments of the present application, or performs the steps of the data packet transmission method provided in the embodiments of the present application.

For example, in some embodiments, the electronic device 600 further includes an interface circuit. The interface circuit may use a transceiver apparatus such as, but not limited to, a transceiver to implement communication between the electronic device and other devices or communication networks. The interface circuit may also be, for example, a communication interface.
The descriptions of the procedures corresponding to the above drawings each have their own emphasis. For a part of a procedure that is not described in detail, reference may be made to the relevant descriptions of the other procedures.

An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium has program instructions, and when the program instructions are executed directly or indirectly, the data packet transmission method described above is implemented.

An embodiment of the present application further provides a chip, including at least one processor and an interface circuit, where the interface circuit is configured to provide program instructions or data for the at least one processor, and the at least one processor is configured to execute the program instructions, so that the data packet transmission method described above is implemented.

An embodiment of the present application further provides a computer program product containing instructions which, when run on a computing device, cause the computing device to perform the data packet transmission method described above, or cause the computing device to implement the functions of the electronic device described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the procedures or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless (for example, infrared, microwave, etc.) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or a data center containing one or more sets of available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium. The semiconductor medium may be a solid-state drive.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present application.

A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.

In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.

The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, optical discs, and other media that can store program code.

The above descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

  1. A data packet transmission method, characterized in that the method is applied to an electronic device, and the method comprises:
    before a first driver thread reads the data packets in a hardware buffer from the hardware buffer, determining, by the first driver thread, that a second driver queue is full, wherein the data packets stored in the hardware buffer are data packets received by the electronic device from another electronic device;
    reading, by the first driver thread, n data packets in the hardware buffer into a first driver queue, wherein n is less than or equal to a preset value;
    before a second driver thread reads the data packets in the first driver queue from the first driver queue, determining, by the second driver thread, that the second driver queue is full;
    not reading, by the second driver thread, the data packets in the first driver queue into the second driver queue;
    before a third driver thread reads the data packets in the second driver queue from the second driver queue, determining, by the third driver thread, that a third driver queue is full;
    not reading, by the third driver thread, the data packets in the second driver queue into the third driver queue.
  2. The method according to claim 1, characterized in that the preset value is 1.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    before the first driver thread reads the data packets in the hardware buffer from the hardware buffer, determining, by the first driver thread, that the second driver queue is not full;
    reading, by the first driver thread, p1 data packets in the hardware buffer into the first driver queue;
    wherein a size of the p1 data packets is Q, Q = min{Q1, Q2}, Q1 is a size of the data packets stored in the hardware buffer, and Q2 is a size of remaining space of the first driver queue.
  4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    before the second driver thread reads the data packets in the first driver queue from the first driver queue, determining, by the second driver thread, that the first driver queue is not full;
    reading, by the second driver thread, p2 data packets in the first driver queue into the second driver queue;
    wherein a size of the p2 data packets is W, W = min{W1, W2}, W1 is a size of the data packets stored in the first driver queue, and W2 is a size of remaining space of the second driver queue.
  5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
    before the third driver thread reads the data packets in the second driver queue from the second driver queue, determining, by the third driver thread, that the third driver queue is not full;
    reading, by the third driver thread, p3 data packets in the second driver queue into the third driver queue;
    wherein a size of the p3 data packets is L, L = min{L1, L2}, L1 is a size of the data packets stored in the second driver queue, and L2 is a size of remaining space of the third driver queue.
  6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    reading, by a kernel thread, the data packets in the third driver queue to a kernel layer.
  7. An electronic device, characterized in that the electronic device comprises:
    a first driver thread, configured to determine, before the first driver thread reads the data packets in a hardware buffer from the hardware buffer, that a second driver queue is full, wherein the data packets stored in the hardware buffer are data packets received by the electronic device from another electronic device;
    the first driver thread being further configured to read n data packets in the hardware buffer into a first driver queue, wherein n is less than or equal to a preset value;
    a second driver thread, configured to determine, before the second driver thread reads the data packets in the first driver queue from the first driver queue, that the second driver queue is full;
    the second driver thread being further configured not to read the data packets in the first driver queue into the second driver queue;
    a third driver thread, configured to determine, before the third driver thread reads the data packets in the second driver queue from the second driver queue, that a third driver queue is full;
    the third driver thread being further configured not to read the data packets in the second driver queue into the third driver queue.
  8. The electronic device according to claim 7, characterized in that the preset value is 1.
  9. The electronic device according to claim 7 or 8, characterized in that the first driver thread is further configured to:
    before the first driver thread reads the data packets in the hardware buffer from the hardware buffer, determine that the second driver queue is not full;
    read p1 data packets in the hardware buffer into the first driver queue;
    wherein a size of the p1 data packets is Q, Q = min{Q1, Q2}, Q1 is a size of the data packets stored in the hardware buffer, and Q2 is a size of remaining space of the first driver queue.
  10. The electronic device according to any one of claims 7 to 9, characterized in that the second driver thread is further configured to:
    before the second driver thread reads the data packets in the first driver queue from the first driver queue, determine that the first driver queue is not full;
    read p2 data packets in the first driver queue into the second driver queue;
    wherein a size of the p2 data packets is W, W = min{W1, W2}, W1 is a size of the data packets stored in the first driver queue, and W2 is a size of remaining space of the second driver queue.
  11. The electronic device according to any one of claims 7 to 10, characterized in that the third driver thread is further configured to:
    before the third driver thread reads the data packets in the second driver queue from the second driver queue, determine that the third driver queue is not full;
    read p3 data packets in the second driver queue into the third driver queue;
    wherein a size of the p3 data packets is L, L = min{L1, L2}, L1 is a size of the data packets stored in the second driver queue, and L2 is a size of remaining space of the third driver queue.
  12. The electronic device according to any one of claims 7 to 11, characterized in that the electronic device further comprises:
    a kernel thread, configured to read the data packets in the third driver queue to a kernel layer.
  13. An electronic device, characterized in that the electronic device comprises at least one memory and at least one processor, wherein the at least one memory is configured to store a program, and the at least one processor is configured to run the program, so as to implement the data packet transmission method according to any one of claims 1 to 6.
  14. A computer-readable storage medium, characterized by comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the data packet transmission method according to any one of claims 1 to 6.
  15. A chip, characterized by comprising at least one processor and an interface circuit, wherein the interface circuit is configured to provide program instructions or data for the at least one processor, and the at least one processor is configured to execute the program instructions, so as to implement the data packet transmission method according to any one of claims 1 to 6.
PCT/CN2022/111210 2021-10-20 2022-08-09 数据传输的方法及电子设备 WO2023065782A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
RU2021130496 2021-10-20
RU2021130496 2021-10-20

Publications (1)

Publication Number Publication Date
WO2023065782A1 true WO2023065782A1 (zh) 2023-04-27

Family

ID=86058785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/111210 WO2023065782A1 (zh) 2021-10-20 2022-08-09 数据传输的方法及电子设备

Country Status (1)

Country Link
WO (1) WO2023065782A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102723099A (zh) * 2011-03-28 2012-10-10 西部数据技术公司 包括用于处理多命令描述符块以便利用并发性的主机接口的闪存装置
CN109787759A (zh) * 2019-01-23 2019-05-21 郑州云海信息技术有限公司 一种数据传输方法、系统、装置及计算机可读存储介质
CN111639043A (zh) * 2020-06-05 2020-09-08 展讯通信(上海)有限公司 一种通信装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102723099A (zh) * 2011-03-28 2012-10-10 西部数据技术公司 包括用于处理多命令描述符块以便利用并发性的主机接口的闪存装置
CN109787759A (zh) * 2019-01-23 2019-05-21 郑州云海信息技术有限公司 一种数据传输方法、系统、装置及计算机可读存储介质
CN111639043A (zh) * 2020-06-05 2020-09-08 展讯通信(上海)有限公司 一种通信装置

Similar Documents

Publication Publication Date Title
WO2020244623A1 (zh) 一种空鼠模式实现方法及相关设备
WO2021052482A1 (zh) 切换sim卡的方法、装置及电子设备
US20230072048A1 (en) Electronic device and method for electronic device processing received data packet
WO2022048371A1 (zh) 跨设备音频播放方法、移动终端、电子设备及存储介质
US11341981B2 (en) Method for processing audio data and electronic device therefor
US10973039B2 (en) Method for data transmission and terminal
CN116795753A (zh) 音频数据的传输处理的方法及电子设备
CN112788694B (zh) 用于缩短呼叫连接时间的方法及其电子装置
WO2023065782A1 (zh) 数据传输的方法及电子设备
WO2024037025A1 (zh) 无线通信电路、蓝牙通信切换方法和电子设备
WO2022228015A1 (zh) 一种数据传输方法及设备
WO2022184157A1 (zh) 多路径聚合调度方法及电子设备
WO2022068646A1 (zh) 一种数据传输方法及电子设备
WO2021135713A1 (zh) 一种文本转语音的处理方法、终端及服务器
WO2021169369A1 (zh) 一种数据传输方法、装置及系统
WO2020140186A1 (zh) 无线音频系统、音频通讯方法及设备
WO2023001044A1 (zh) 数据处理方法及电子设备
WO2023039890A1 (zh) 视频传输的方法和电子设备
WO2022042202A1 (zh) 媒体文件传输的方法及装置
WO2022228065A1 (zh) 功能跳转方法及电子设备
WO2022228005A1 (zh) 一种消息提醒方法及终端设备
CN116709432B (zh) 一种缓存队列调整方法及电子设备
WO2021244160A1 (zh) 一种通信方法及装置
US20220232647A1 (en) Electronic device for transmitting data in bluetooth network environment, and method therefor
CN116684036B (zh) 数据处理方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882403

Country of ref document: EP

Kind code of ref document: A1