CN107623926B - Communication method, server and base station equipment - Google Patents


Info

Publication number
CN107623926B
Authority
CN
China
Prior art keywords
processing
packet
physical layer
server
base station
Prior art date
Legal status
Active
Application number
CN201610560269.4A
Other languages
Chinese (zh)
Other versions
CN107623926A (en)
Inventor
王澄
龚朝华
李栋
万燕
Current Assignee
Nokia Shanghai Bell Co Ltd
Original Assignee
Nokia Shanghai Bell Co Ltd
Priority date
Filing date
Publication date
Application filed by Nokia Shanghai Bell Co Ltd filed Critical Nokia Shanghai Bell Co Ltd
Priority to CN201610560269.4A
Publication of CN107623926A
Application granted
Publication of CN107623926B

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of the present disclosure relate to a communication method, a server, and a base station apparatus. A centralized processing scheme based on a high-speed processing device and a general-purpose processor is proposed. According to the scheme, part of the physical layer processing is implemented by the high-speed processing device and the rest by the general-purpose processor, which improves computational performance and reduces latency. In addition, a tag-based base station migration scheme is provided. According to this scheme, a tag is added to each wireless data packet and is retained throughout physical layer processing, which facilitates base station migration and improves the efficiency of centralized processing.

Description

Communication method, server and base station equipment
Technical Field
Embodiments of the present disclosure relate to the field of communications, and more particularly, to a communication method, a server, and a base station apparatus.
Background
The base station centralized processing technology is an important wireless communication technology that can provide low-cost, high-bandwidth, low-latency communication. For example, in a wireless communication network with a cloud Radio Access Network (RAN) architecture, radio frequency devices such as Remote Radio Heads (RRHs) and physical layer (layer 1) devices may be located at remote ends, which in turn are connected to a baseband unit (BBU) pool by cables, optical fibers, and the like. Through the BBU pool, the BBUs of different base stations can be managed centrally. On the one hand, this considerably reduces the maintenance cost of base station equipment. On the other hand, within the BBU pool, high-bandwidth, low-latency data transmission can take place between the BBUs of different base stations, which brings gains in bandwidth and latency.
A centralized pool of computing resources, such as a BBU pool, may include multiple machines; each machine may have multiple processors, and each processor may have multiple cores. To reduce the implementation cost of base stations, base station virtualization techniques have also been proposed.
Disclosure of Invention
In general, embodiments of the present disclosure propose a physical layer processing method and corresponding device based on a high-speed processing device and a general-purpose processor, and a method and device for label-based resource management.
In a first aspect, embodiments of the present disclosure provide a communication method. The communication method comprises the following steps: receiving, at a server cluster associated with a base station, a packet from the base station; performing, by a high-speed processing device of the server cluster, the processing of successive processing stages in a physical layer processing chain for the packet; and sending the processed packet to a general-purpose processor of the server cluster for further processing by the general-purpose processor.
In a second aspect, embodiments of the present disclosure provide a method of communication. The communication method comprises the following steps: receiving a packet from a general purpose processor of a server cluster associated with a base station, wherein the general purpose processor performs processing of successive processing stages in a physical layer processing chain for the packet; performing, by a high-speed processing device of the server cluster, processing of remaining successive processing stages in the physical layer processing chain for the packet; and transmitting the processed packet to the base station.
In a third aspect, embodiments of the present disclosure provide a communication method. The communication method comprises the following steps: receiving, at a server cluster associated with a base station, a packet from the base station, the packet including a tag indicating one of a plurality of processing blocks included by the server cluster; distributing the packet to the processing block indicated by the tag; and performing physical layer processing on the packet by the processing block.
In a fourth aspect, embodiments of the present disclosure provide a method of communication. The communication method comprises the following steps: receiving, at a server cluster associated with a base station, a packet from a general purpose processor included in the server cluster; adding a label to the packet, the label indicating a cell to which the packet is to be sent; performing, by a high-speed processing device, physical layer processing on the packet; and transmitting the packet to the base station for transmission to the terminal device.
In a fifth aspect, embodiments of the present disclosure provide a method of communication. The communication method comprises the following steps: receiving, at a base station, first data from a terminal device; adding a tag to the first data to form a first packet, the tag indicating a target server in a server cluster associated with the base station and a first processing block in the target server to which the first packet is to be sent; and sending the first packet to the first processing block.
In a sixth aspect, embodiments of the present disclosure provide a server. The server includes at least one high-speed processing device configured to: receiving a packet from a base station; performing processing of successive processing stages in a physical layer processing chain for the packet; and sending the processed packet to a general purpose processor of the server for further processing by the general purpose processor.
In a seventh aspect, embodiments of the present disclosure provide a server. The server includes at least one high-speed processing device configured to: receiving a packet from a general purpose processor of a server, wherein the general purpose processor performs processing of successive processing stages in a physical layer processing chain for the packet; performing processing of remaining successive processing stages in the physical layer processing chain for the packet; and transmitting the processed packet to the base station.
In an eighth aspect, embodiments of the present disclosure provide a server. The server includes: at least one high-speed processing device configured to: receiving a packet from a base station, the packet including a tag indicating one of a plurality of processing blocks included by the at least one high-speed processing device; distributing the packet to the processing block indicated by the tag; and performing physical layer processing on the packet by the processing block.
In a ninth aspect, embodiments of the present disclosure provide a server. The server includes: at least one high speed processing device configured to: receiving a packet from a general-purpose processor included in a server; adding a label to the packet, the label indicating a cell to which the packet is to be sent; performing physical layer processing on the packet; and transmitting the packet to the base station for transmission to the terminal device.
In a tenth aspect, embodiments of the present disclosure provide a base station apparatus. The base station apparatus includes: a radio frequency module configured to: receiving first data from a terminal device; adding a tag to the first data to form a first packet, the tag indicating a target server, in a server cluster associated with the base station apparatus, to which the first packet is to be sent, and a first processing block in the target server; and sending the first packet to the first processing block.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
FIG. 1 illustrates an example environment in which embodiments of the present disclosure may be implemented;
fig. 2 shows a block diagram of a Long Term Evolution (LTE) uplink processing flow according to the prior art;
fig. 3 shows a block diagram of an LTE downlink processing flow according to the prior art;
FIG. 4 illustrates a block diagram of a centralized processing architecture based on high-speed processing devices and general purpose processors, according to an embodiment of the present disclosure;
fig. 5 shows a block diagram of an uplink processing flow according to an embodiment of the present disclosure;
fig. 6 shows a flow chart of a communication method according to an embodiment of the present disclosure;
fig. 7 shows a flow chart of a communication method according to an embodiment of the present disclosure;
FIG. 8 illustrates a block diagram of a server cluster for resource management, according to the prior art;
FIG. 9 illustrates a frame format of a tag-based communication protocol according to an embodiment of the present disclosure;
FIG. 10 shows a block diagram of the basic architecture of an FPGA according to an embodiment of the present disclosure;
fig. 11 shows a block diagram of a packet switched architecture according to an embodiment of the present disclosure;
fig. 12 shows a flow chart of a communication method according to an embodiment of the present disclosure;
fig. 13 shows a flow chart of a communication method according to an embodiment of the present disclosure;
fig. 14 shows a flow chart of a communication method according to an embodiment of the present disclosure;
FIG. 15 shows a block diagram of a server according to an embodiment of the present disclosure;
FIG. 16 shows a block diagram of a server according to an embodiment of the present disclosure;
FIG. 17 shows a block diagram of a server according to an embodiment of the present disclosure;
FIG. 18 shows a block diagram of a server according to an embodiment of the present disclosure; and
fig. 19 shows a block diagram of a base station apparatus according to an embodiment of the present disclosure.
Detailed Description
The principles of the present disclosure will now be described with reference to a number of exemplary embodiments. It should be understood that these embodiments are described only to enable those skilled in the art to better understand and implement the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
The term "terminal device" as used herein refers to any terminal device capable of communicating with the base station. As an example, the terminal device may include a Mobile Terminal (MT), a Subscriber Station (SS), a Portable Subscriber Station (PSS), a Mobile Station (MS), or an Access Terminal (AT).
The terms "include" and variations thereof as used herein are inclusive and open-ended, i.e., "including but not limited to. The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment". Relevant definitions for other terms will be given in the following description.
Fig. 1 illustrates a communication network 100 in which embodiments of the present disclosure may be implemented. The communication network 100 shown in fig. 1 may include a cluster of servers 102 and a base station 104. The base station 104 includes an antenna 106 and a radio frequency module 108. It should be understood that the number of base stations and servers shown in fig. 1 is for illustration purposes only and is not intended to be limiting. In practical applications, there may be any suitable number of base stations and servers.
Communication between base station 104 and terminal devices (not shown) may be implemented in accordance with any suitable communication protocol, including, but not limited to, first generation (1G), second generation (2G), 2.5G, third generation (3G), fourth generation (4G), and fifth generation (5G) communication protocols, and/or any other protocol now known or later developed.
Base station 104 and terminal devices (not shown) may use any suitable wireless communication technology, including, but not limited to, Code Division Multiple Access (CDMA), Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Frequency Division Duplex (FDD), Time Division Duplex (TDD), Multiple Input Multiple Output (MIMO), Orthogonal Frequency Division Multiplexing (OFDM), and/or any other technology now known or later developed.
As shown in fig. 1, a base station 104 is associated with a server cluster 102. For example, the base station 104 may be communicatively connected to the server cluster 102 by cables, optical fibers, and the like. In the centralized processing architecture, the base station 104 may also be referred to as a Remote Radio Unit (RRU), and the server cluster 102 may also be referred to as a baseband unit (BBU) pool. Communication between the base station 104 and the server cluster 102 may be referred to as fronthaul. It will be understood that the base station 104 and the server cluster 102 may collectively implement the functionality of a conventional base station. The term "legacy base station" as used herein may refer to a Node B (NodeB or NB), an evolved Node B (eNodeB or eNB), or a low-power node such as a pico or femto base station. Thus, it should be understood that the terms "server" and "machine" as used herein may be used interchangeably, as may "BBU pool" and "server cluster", without departing from the principles of the present disclosure.
In recent years, general-purpose processors have developed very rapidly. Advances in technologies such as multiple CPUs, multiple cores, Single Instruction Multiple Data (SIMD), and large on-chip caches have made it possible to process multiple radio stacks on one general-purpose processor server. One advantage of a general-purpose processor is backward compatibility, which ensures that software can run on new generations of processors without any changes. This is very beneficial for smooth upgrades of the radio stack.
Another advantage of general-purpose processors is their support for virtualization techniques. Virtualization technology allows multiple Virtual Machines (VMs) to run simultaneously on the same physical machine. Thus, virtual base stations are isolated from each other and can easily support multi-standard operation on an open platform. Depending on the context, the term "base station" may also refer to such a virtual base station.
Due to their architecture, general-purpose processors are very good at tasks such as scheduling, but less good at intensive computation. In contrast, a dedicated hardware processing device such as a Field Programmable Gate Array (FPGA) is adept at computation and has low power consumption. Therefore, combining an FPGA with a general-purpose processor so that the respective advantages of each are fully exploited can significantly improve performance, reduce system power, and reduce communication latency.
In radio access networks, many physical layer processing algorithms are computationally intensive. Taking LTE as an example, Turbo decoding and FFT/IFFT processing are suitable for FPGAs, while other functions, such as MAC scheduling algorithms, are particularly suitable for general-purpose processors. Figs. 2 and 3 show the respective processing stages of an LTE uplink processing flow 200 and an LTE downlink processing flow 300.
As shown in fig. 2, in the LTE uplink, packets from the radio frequency module 108 pass through physical layer processing to the MAC layer 220 and are further processed by the MAC layer 220. The physical layer processing has a chain structure and is called a physical layer processing chain or signal processing chain, or simply a processing chain. As shown in fig. 2, the physical layer processing chain for the LTE uplink may include Fast Fourier Transform (FFT) 206, channel estimation and signal-to-noise ratio (SNR) estimation 208, equalization 210, Inverse Discrete Fourier Transform (IDFT) 212, demapping 214, Turbo decoding 216, and Cyclic Redundancy Check (CRC) 218. The details of these processing stages are well known in the art and will not be described further. Of these processing stages, FFT 206, IDFT 212, and Turbo decoding 216 are the computationally intensive ones.
As shown in fig. 3, in the LTE downlink, a packet from the MAC layer 220 passes through physical layer processing to the radio frequency module 108 for transmission to the terminal device. As shown in fig. 3, the physical layer processing chain of the LTE downlink may include CRC 306, Turbo coding 308, interleaving 310, scrambling 312, modulation 314, precoding 316, resource element mapping and reference signal insertion (RE-mapping and RS insertion) 318, and Inverse Fast Fourier Transform with Cyclic Prefix insertion (IFFT/CP) 320. The details of these processing stages are well known in the art and will not be described further. Of these processing stages, only the IFFT/CP 320 is computationally intensive.
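For reference, the two processing chains can be summarized as ordered lists of stages. The following Python sketch is purely illustrative (not part of the claimed apparatus); the stage names follow figs. 2 and 3, and the compute-intensive flags reflect the classification given above.

```python
# Minimal model of the LTE physical layer processing chains of figs. 2 and 3.
# Each stage is (name, compute_intensive).
UPLINK_CHAIN = [
    ("FFT", True),
    ("channel/SNR estimation", False),
    ("equalization", False),
    ("IDFT", True),
    ("demapping", False),
    ("Turbo decoding", True),
    ("CRC", False),
]
DOWNLINK_CHAIN = [
    ("CRC", False),
    ("Turbo coding", False),
    ("interleaving", False),
    ("scrambling", False),
    ("modulation", False),
    ("precoding", False),
    ("RE-mapping/RS insertion", False),
    ("IFFT/CP insertion", True),
]

if __name__ == "__main__":
    for name, heavy in UPLINK_CHAIN + DOWNLINK_CHAIN:
        print(f"{name:26s} {'compute-intensive' if heavy else ''}")
```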
It should be appreciated that although the processing chain is described herein primarily in connection with LTE, the inventive concepts of the present disclosure may be applied to other communication protocols, now existing or later developed, and the disclosure is not limited in this respect.
Because FPGAs deliver higher performance per unit of energy consumed, the combination of a general-purpose processor and an FPGA can achieve higher performance, lower power consumption, and better compatibility. In a practical system, the computationally intensive physical layer processing may be performed by the FPGA, while the MAC layer and the like may be performed by the general-purpose processor. This is explained in detail below.
Existing FPGA chips can support physical layer processing for up to 6 LTE cells, and one x86 CPU core can support MAC layer processing for two cells. Thus, an x86 server with multiple CPU cores and an FPGA card can support the radio stacks of multiple cells. As more FPGA cards are inserted, one server can support more cells.
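As a worked example of this dimensioning (6 cells per FPGA card and 2 cells per x86 CPU core, the nominal figures quoted above), a server's cell capacity is bounded by whichever resource runs out first. The helper below is hypothetical, written only to illustrate the arithmetic.

```python
def cells_supported(cpu_cores: int, fpga_cards: int,
                    cells_per_core: int = 2, cells_per_fpga: int = 6) -> int:
    """Cells one server can host, limited by the scarcer resource."""
    return min(cpu_cores * cells_per_core, fpga_cards * cells_per_fpga)

# An 8-core server with one FPGA card is FPGA-bound: min(16, 6) = 6 cells.
# Inserting a second card raises the limit to min(16, 12) = 12 cells.
print(cells_supported(8, 1), cells_supported(8, 2))
```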
It should be noted that although embodiments of the present disclosure are primarily described in conjunction with FPGAs, it should be understood that this is merely exemplary and is not intended to limit the scope of the present disclosure in any way. Any device having the same or similar processing characteristics as an FPGA may be used in place of or in conjunction with an FPGA to implement embodiments of the present disclosure. In embodiments of the present disclosure, these devices may be referred to as "high-speed processing devices".
Fig. 4 shows a schematic diagram of an FPGA- and general-purpose-processor-based radio access network architecture 400 according to an embodiment of the present disclosure. The architecture 400 includes a radio frequency module 402 that communicates with a core network 420 through a server 418. It should be understood that although only one server 418 is shown in fig. 4, the number shown is for illustration purposes only and is not intended to be limiting. In the server cluster 102, there may be any suitable number of servers 418. Any suitable number of processors may be mounted on a motherboard (not shown) of the server 418, each processor may have any suitable number of cores, and each core may have any suitable number of threads. In the example of fig. 4, for simplicity, only one processor 414 is shown, including 8 cores, core-0 through core-7.
In addition, the server 418 includes an FPGA card 404 that is connected to a Root Complex (RC) 412 of the server 418 via a PCI Express (PCIe) interface 410. The FPGA card 404 is also connected to the radio frequency module 402 via an NGFI interface 406. All or part of the uplink and downlink physical layer processing may be done in the FPGA. The RC 412 is connected to the core network 420 through a Network Interface Card (NIC) 416. Under this architecture 400, the FPGA card 404 competes with the NIC 416 for PCIe bandwidth; therefore, reducing the PCIe bandwidth requirement of the FPGA card 404 is important.
FPGA card 404 may also include a plurality of processing blocks (processing blocks) 408. Fig. 4 shows 6 processing blocks. In some embodiments, a processing block may be a module or unit in an FPGA or other device. As described above, one FPGA card 404 may support physical layer processing for six cells, and thus the FPGA card 404 may be divided into six processing blocks, each of which supports processing for one cell. It should be understood that the numbers are for illustration purposes only and are not intended to be limiting, and that the FPGA card may be divided into any suitable number of processing blocks.
Since both general-purpose processors and FPGAs can be used for signal processing, deciding which physical layer functions should be implemented by the FPGA is a technical problem to be solved. Taking LTE as an example, in the uplink processing chain shown in fig. 2, FFT 206, IDFT 212, and Turbo decoding 216 are suitable for processing by an FPGA, while the other physical layer processing is suitable for a general-purpose processor. If implemented exactly this way, however, the signal would toggle back and forth between the FPGA and the general-purpose processor, introducing high latency and degrading system performance, which runs counter to the low-latency requirements of modern communication systems.
To this end, embodiments of the present disclosure provide a solution to this problem, shown in detail in figs. 5-7. The uplink solution is described below in connection with figs. 5 and 6.
Fig. 5 shows a schematic diagram of an uplink communication flow 500 according to an embodiment of the present disclosure. Compared with the prior-art communication flow of fig. 2, a PCIe card 204 is added to support the FPGA; the other blocks are the same as in fig. 2 and are not described again. Fig. 6 shows a flow chart of an uplink communication method 600 according to an embodiment of the disclosure. At step 602, at the server cluster 102, a packet is received from the base station 104. As described with respect to fig. 1, the base station 104 is associated with the server cluster 102, e.g., communicatively coupled by cables, optical fibers, and the like. At step 604, physical layer processing is performed on the packet by a high-speed processing device in the server cluster 102. In some embodiments, the high-speed processing device may be an FPGA, which may be the FPGA card 404 within the server 418 shown in fig. 4. The physical layer processing may include the processing of successive processing stages in a physical layer processing chain as shown in fig. 5. At step 606, the processed packet is sent to the general-purpose processor for further processing, such as the remaining physical layer processing and MAC layer processing. With the approach shown in fig. 6, only one data transfer occurs between the high-speed processing device (e.g., the FPGA) and the general-purpose processor, and the above-described problem of switching back and forth between different physical layer processing stages does not arise, thereby significantly reducing the PCIe bandwidth requirement.
As described above, high speed processing devices such as FPGAs have greater computational performance, and thus, processing at the computationally intensive processing stages of the physical layer processing chain, e.g., one or more of FFT 206, IDFT 212, and Turbo decoding 216, may be performed by the FPGA at step 604.
In addition, as shown in fig. 5, the input and output bandwidths of the various processing stages differ, and this characteristic may be used in deciding how to split the processing chain. If the hand-off from the FPGA to the general-purpose processor occurs at a point where the processed data rate is much less than the in-phase/quadrature (I/Q) data rate, PCIe bandwidth is no longer a bottleneck.
For example, as shown in fig. 5, if three consecutive processing stages, MAC layer 220, CRC 218, and Turbo decoding 216, are implemented in the general-purpose processor while the remaining stages are implemented in the FPGA, the PCIe bandwidth required for migrating data from the FPGA to the general-purpose processor's memory is 1.818 Gbps, as determined by the input bandwidth of Turbo decoding 216 (i.e., the output bandwidth of demapping 214). If only the MAC layer 220 is implemented in the general-purpose processor, the PCIe bandwidth required for migrating the decoded data from the FPGA to the general-purpose processor's memory is only 75.376 Mbps, as determined by the input bandwidth of the MAC layer (i.e., the output bandwidth of CRC 218). Excessively high PCIe I/O throughput would impact the real-time performance of the general-purpose processor and the FPGA and would compete for PCIe bandwidth with other PCIe devices, such as the Ethernet NIC 416 shown in fig. 4. Therefore, PCIe bandwidth needs to be taken into account in determining which physical layer functions to implement in the FPGA. In other words, the FPGA can perform the processing of successive processing stages in the physical layer processing chain that terminate in a processing stage with a small output bandwidth. Still further, in some embodiments, the FPGA may perform the processing of all processing stages in the physical layer processing chain. In this way, the processing stages handled by the FPGA have a small output bandwidth overall and include all the computationally intensive processing stages.
It will be understood that the specific values of the input-output bandwidths above are provided to illustrate the principles of the present disclosure, and not to limit the scope thereof. In different scenarios, each processing stage may have a different input-output bandwidth.
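The choice of split point can be phrased as a small optimization: among the cut points that keep every computationally intensive stage on the FPGA side, pick the one whose boundary output bandwidth (the PCIe load) is lowest. In the sketch below, the 1818.0 Mbps and 75.376 Mbps figures are the two values quoted above; all other per-stage bandwidths are placeholders invented for illustration.

```python
# Uplink chain of fig. 5: (name, compute_intensive, output_bandwidth_mbps).
# Only the 1818.0 and 75.376 values come from the text; the rest are
# illustrative placeholders.
STAGES = [
    ("FFT", True, 9830.0),
    ("channel/SNR estimation", False, 9830.0),
    ("equalization", False, 4915.0),
    ("IDFT", True, 4915.0),
    ("demapping", False, 1818.0),   # input bandwidth of Turbo decoding
    ("Turbo decoding", True, 300.0),
    ("CRC", False, 75.376),         # input bandwidth of the MAC layer
]

def best_cut(stages):
    """Return i such that stages[:i+1] run on the FPGA and the rest on the
    general-purpose processor. All compute-intensive stages must stay on
    the FPGA side; among the remaining cut points, minimize the bandwidth
    crossing PCIe (the output bandwidth of stage i)."""
    last_heavy = max(i for i, (_, heavy, _) in enumerate(stages) if heavy)
    return min(range(last_heavy, len(stages)), key=lambda i: stages[i][2])

i = best_cut(STAGES)
print(f"cut after '{STAGES[i][0]}': {STAGES[i][2]} Mbps across PCIe")
# -> cut after 'CRC': the whole physical layer stays on the FPGA and only
#    75.376 Mbps crosses PCIe, matching the conclusion above.
```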
A flow chart of a downlink communication method 700 according to an embodiment of the disclosure is described below with reference to fig. 7. At step 702, a packet from a general-purpose processor is received by a high-speed processing device in the server cluster 102. For example, the high-speed processing device may be the FPGA card 404 shown in fig. 4, and the general-purpose processor may be the general-purpose processor 414 shown in fig. 4. At this point, the received packet may already have been processed by the general-purpose processor 414 at successive processing stages of the physical layer processing chain. It should be understood that although the expression "successive processing stages" is used here, the general-purpose processor may in fact perform no physical layer processing at all. At step 704, the processing of the remaining successive processing stages in the physical layer processing chain is performed on the packet by the high-speed processing device. In other words, as in the uplink, the two parts of the physical layer processing are handled separately by a high-speed processing device such as an FPGA and by a general-purpose processor, so that only one data transfer occurs between the high-speed processing device and the general-purpose processor, without toggling back and forth. At step 706, the processed packet is transmitted to the base station 104.
Returning to fig. 3, the processing chain for the LTE downlink is shown. In the downlink, IFFT is the only computationally intensive processing task. Similar to the uplink processing, at step 704 a computationally intensive processing stage (e.g., IFFT) in the physical layer processing chain may be performed. Alternatively or additionally, the processing of successive processing stages that, taken as a whole, start from a processing stage with a small input bandwidth may be performed. Since downlink processing basically mirrors uplink processing, further details are omitted.
In some embodiments, the processing of all processing stages of the physical layer processing chain is performed by a high-speed processing device, such as an FPGA, in the event that the performance of the general-purpose processor is below a predetermined threshold. This may be the case, for example, where the general-purpose processor has few cores and insufficient computational power, so that the FPGA has to take on a correspondingly larger share of the work.
In other examples, the processing of only a portion of the processing stages of the physical layer processing chain is performed by the high-speed processing device in the event that the performance of the general-purpose processor is above another predetermined threshold. In this case, since the general-purpose processor is powerful, it can take over part of the processing and so relieve the high-speed processing device. For example, since the downlink has only one computationally intensive processing stage, IFFT, the high-speed processing device may perform only the IFFT processing stage.
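The two threshold cases above amount to a simple partitioning policy. Below is a sketch for the downlink chain of fig. 3; the threshold values and the middle-ground default are placeholders, not values from this disclosure.

```python
DOWNLINK = ["CRC", "Turbo coding", "interleaving", "scrambling", "modulation",
            "precoding", "RE-mapping/RS insertion", "IFFT/CP insertion"]

def fpga_stages(gpp_performance: float, low: float = 0.3, high: float = 0.7):
    """Stages assigned to the high-speed device for the downlink.

    Below `low` the FPGA runs the whole chain (weak general-purpose
    processor); above `high` it runs only the computationally intensive
    IFFT/CP stage (strong general-purpose processor)."""
    if gpp_performance < low:
        return DOWNLINK            # FPGA performs all processing stages
    if gpp_performance > high:
        return DOWNLINK[-1:]       # FPGA performs only IFFT/CP insertion
    return DOWNLINK[-2:]           # middle ground, chosen for illustration

print(fpga_stages(0.2))
print(fpga_stages(0.9))
```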
A label-based packet switching scheme will be described below in conjunction with fig. 8-14. As previously described, a typical FPGA can support physical layer processing for six typical cells. Thus, the FPGA resources may be divided into multiple processing blocks that may operate in parallel and each processing block may be able to take on the peak traffic of one cell.
One characteristic of mobile traffic is periodicity. Mobile traffic exhibits a 24-hour periodicity driven by working hours, off hours, and the like. Typically, the processing load in a server cluster is high from 9 a.m. to 11 p.m. and low during the night. During idle time (e.g., at night), if all base stations hosted by one machine can be migrated to another machine, the idle machine can be shut down.
The Next Generation Fronthaul Interface (NGFI) is a protocol under development. The NGFI protocol is planned to support wireless packet routing, i.e., an NGFI device may redirect a wireless data stream to a new destination, such as an FPGA, based on the destination NGFI address. A fronthaul switch may be used to implement this functionality, causing wireless I/Q data destined for a source BBU to be redirected to a target BBU. The NGFI interface has been widely discussed in the industry as a way to provide flexible routing between the BBU pool and the RRUs and to turn the fronthaul network from point-to-point connections into a many-to-many network using packet switching protocols.
The NGFI has three logical layers and introduces a special packet header to transport wireless I/Q data over a packet-switched fronthaul network. However, after packet switching is performed by a switch, headers are generally removed, so that various attributes of the data can no longer be determined and appropriate analysis and scheduling of the data cannot be provided. Accordingly, a tag to be included in the wireless I/Q data packet is presented herein. The tag may indicate a cell, an antenna, a target machine, and a target FPGA processing block, and the tag is not removed during physical layer processing.
Fig. 8 shows a schematic diagram of base station migration according to the prior art. As shown in fig. 8, BBU pool 800 includes a plurality of machines, namely machines 802, 804, 806, and 808. The machine 802 includes a pool resource manager 812 for pooling and scheduling the resources of the BBU pool 800. Machines 804, 806, and 808 include respective local resource managers 814, 816, and 818. The local resource managers 814, 816, and 818 monitor the operating conditions of their machines and periodically report each machine's status to the pool resource manager 812 of machine 802. The pool resource manager may make decisions based on the operating conditions of the various machines. Since the technique shown in fig. 8 is known in the art, it is not described in detail.
Fig. 9 illustrates a tag format that may be used with the NGFI protocol. The NGFI protocol is still under development, and the header formats of the NGFI PHY carrier layer, the NGFI data adaptation layer, and the NGFI data layer are still undetermined. As shown in fig. 9, an NGFI data layer Service Data Unit (SDU) includes a tag and wireless I/Q data. The tag may include five fields: cell ID, antenna ID, target machine ID, target FPGA ID, and target processing block ID. The cell ID indicates the cell associated with the wireless data packet, e.g., the cell to which the packet belongs. The antenna ID indicates the antenna associated with the wireless data packet, i.e., the antenna from which the packet originates or for which it is destined. The target machine ID indicates the target machine to which the source base station is migrated. In the case where one machine includes multiple FPGAs, the target FPGA ID indicates the FPGA to which the packet is going. The target processing block ID indicates which processing block of the FPGA processes the packet.
It should be noted that although five fields are mainly shown here, in practical applications, not all of these fields are necessarily required, and some fields may be omitted, and some other fields may be added. Possible applications of these fields will be described below in connection with the scenario of base station migration. However, it should be understood that the application of these tags may not be limited to the following example scenarios.
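As a concrete, purely illustrative encoding of the five-field tag of fig. 9, the sketch below assumes one byte per field; the disclosure does not fix the field widths.

```python
import struct

# Five-field tag of fig. 9: cell ID, antenna ID, target machine ID,
# target FPGA ID, target processing block ID. One byte per field is an
# assumption made for this sketch.
TAG_FORMAT = "!5B"

def pack_tag(cell, antenna, machine, fpga, block) -> bytes:
    return struct.pack(TAG_FORMAT, cell, antenna, machine, fpga, block)

def unpack_sdu(sdu: bytes):
    """Split an NGFI data layer SDU into its tag fields and I/Q payload."""
    n = struct.calcsize(TAG_FORMAT)
    return struct.unpack(TAG_FORMAT, sdu[:n]), sdu[n:]

sdu = pack_tag(cell=3, antenna=1, machine=0, fpga=0, block=2) + b"<I/Q data>"
print(unpack_sdu(sdu))   # ((3, 1, 0, 0, 2), b'<I/Q data>')
```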
Fig. 10 shows a block diagram of an architecture of FPGA 404 in accordance with an embodiment of the present disclosure. The FPGA 404 may include an NGFI interface 406, a distributor/multiplexer 1024, processing blocks 408, a tag remover/packet assembler 1026, and a PCIe interface 410.
The NGFI interface 406 interfaces with a radio frequency module (e.g., the radio frequency module 108 of fig. 1) to receive NGFI packets. In the uplink, the distributor/multiplexer 1024 may distribute the packets, whose headers have been removed, to the appropriate processing blocks. In the downlink, the distributor/multiplexer 1024 multiplexes packets from the different processing blocks and then sends them to the NGFI interface 406.
For simplicity, three processing blocks 418, 428, and 438 are shown in FIG. 10. It is to be understood that this number is for illustrative purposes only and is not intended to be limiting. In fig. 10, each processing block 418, 428, and 438 performs uplink and downlink processing, respectively, for one cell.
In the uplink, the tag remover/packet assembler 1026 may remove a packet's tag and send the packet to the general-purpose processor via the PCIe interface 410 for MAC layer processing. In the downlink, the tag remover/packet assembler 1026 may add tags and the like to the packets.
A method 1200 for tag-based communication in the uplink is described below in conjunction with fig. 12. As shown in fig. 12, at the server cluster 102, a packet is received from the base station 104 at step 1202, the packet including a tag indicating one of a plurality of processing blocks included by the server cluster 102. At step 1204, the packet is distributed to the processing block indicated by the label. This step may be implemented by distributor/multiplexer 1024 shown in fig. 10. In step 1206, physical layer processing is performed on the packet by the processing block. The packet may be processed, for example, by the method shown in fig. 6.
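The distribution step can be sketched as follows, with the one-byte-per-field tag layout assumed earlier and with plain callables standing in for the FPGA processing blocks (both assumptions are for illustration only).

```python
import struct

TAG_FORMAT = "!5B"  # cell, antenna, machine, FPGA, processing block (assumed)

def make_distributor(processing_blocks):
    """Step 1204: route each received packet by its tag's block field."""
    def distribute(sdu: bytes):
        cell, antenna, machine, fpga, block = struct.unpack(TAG_FORMAT, sdu[:5])
        processing_blocks[block](cell, sdu[5:])   # step 1206: PHY processing
    return distribute

def processing_block(name):
    def process(cell, iq):
        print(f"{name}: cell {cell}, {len(iq)} bytes of I/Q data")
    return process

distribute = make_distributor([processing_block(f"block-{i}") for i in range(3)])
distribute(struct.pack(TAG_FORMAT, 3, 1, 0, 0, 2) + b"<I/Q data>")
```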
In some embodiments, the plurality of processing blocks are included in an FPGA of the server cluster 102. For example, as shown in fig. 10, a plurality of processing blocks 418, 428, and 438 are included in FPGA 404.
In some embodiments, the tag may also indicate the cell associated with the packet, e.g., the tag may also include a cell ID as shown in fig. 9. The communications method 1200 may also include removing the tag from the packet and storing the packet in a buffer corresponding to the cell for transmission to a general purpose processor for further processing. Access between the buffers of the FPGA and the memory of the general purpose processor may be accomplished using existing Direct Memory Access (DMA) methods.
Although only two fields of a tag are described above in connection with fig. 12, those skilled in the art will appreciate that the communication method 1200 may also utilize all five fields as shown in fig. 9.
A method 1300 of tag-based communication in the downlink is described below in conjunction with fig. 13. As shown in FIG. 13, at step 1302, a packet is received at the server cluster 102 from a general purpose processor of the server cluster 102. At step 1304, a label is added to the packet, which may indicate the cell to which the packet is to be sent, e.g., the cell ID as shown in fig. 9. At step 1306, physical layer processing is performed on the packet. For example, the method shown in fig. 7 may be used to perform physical layer processing on a packet. In step 1308, the packet is sent to base station 104 for transmission to the terminal device.
In some embodiments, an indication of the antenna to which the packet is to be sent, such as the antenna ID as shown in fig. 9, may also be added to the tag in step 1304. For example, an antenna ID may be added to the tag of a packet after the packet is precoded (e.g., as implemented by precoding stage 316 shown in fig. 3).
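A sketch of this downlink tagging, again under the assumed one-byte-per-field layout: the cell field is filled at step 1304, and the antenna field is filled once precoding has fixed the antenna mapping. The helper names are hypothetical.

```python
import struct

TAG_FORMAT = "!5B"  # cell, antenna, machine, FPGA, block (widths assumed)

def add_cell_label(payload: bytes, cell_id: int) -> bytes:
    """Step 1304: prepend a tag with the destination cell; other fields 0."""
    return struct.pack(TAG_FORMAT, cell_id, 0, 0, 0, 0) + payload

def set_antenna(packet: bytes, antenna_id: int) -> bytes:
    """After precoding (stage 316), record the destination antenna."""
    tag = bytearray(packet[:5])
    tag[1] = antenna_id
    return bytes(tag) + packet[5:]

pkt = set_antenna(add_cell_label(b"<MAC SDU>", cell_id=3), antenna_id=1)
print(struct.unpack(TAG_FORMAT, pkt[:5]))   # (3, 1, 0, 0, 0)
```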
Although only two fields of the tag are described above in connection with fig. 13, those skilled in the art will appreciate that the communication method 1300 may also utilize all five fields shown in fig. 9.
A method 1400 for tag-based communication in the uplink is described below in conjunction with fig. 14. At step 1402, data from a terminal device, referred to as first data for purposes of distinction, is received at the base station 104. At step 1404, a tag is added to the first data to form a first packet, the tag indicating a target server in the server cluster 102 associated with the base station 104, and a first processing block in the target server, to which the first packet is to be sent. At step 1406, the first packet is sent to the first processing block.
In some embodiments, the tag of the first packet further indicates a first cell associated with the first packet. If the load of the first processing block falls below a first threshold (e.g., at night), then when second data associated with the first cell is received, a tag is added to the second data to form a second packet. This tag may indicate the first cell and a second processing block different from the first processing block. Load monitoring may be implemented by the scheme described above in connection with fig. 8, to track how the load of the first processing block varies and to find a second processing block capable of accommodating the load of the first processing block.
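A sketch of this retargeting rule at the radio frequency module: it keeps a current target block per cell and, when notified that the block's load fell below the first threshold, switches subsequent packets of that cell to the second block chosen via the fig. 8 scheme. The class and method names are hypothetical, and a reduced two-field label is used.

```python
class RadioPacketTagger:
    """Per-cell tagging at the radio frequency module (a sketch of method
    1400 with a reduced two-field label: cell ID, target block ID)."""

    def __init__(self, target_block_per_cell):
        self.target = dict(target_block_per_cell)   # cell_id -> block_id

    def retarget(self, cell_id, new_block_id):
        """Invoked when the current block's load falls below the first
        threshold and the cell is migrated to another block."""
        self.target[cell_id] = new_block_id

    def tag(self, cell_id, data: bytes) -> bytes:
        return bytes([cell_id, self.target[cell_id]]) + data

tagger = RadioPacketTagger({3: 0})
print(tagger.tag(3, b"<data>")[:2])   # cell 3 -> block 0
tagger.retarget(3, 2)                 # e.g., nighttime consolidation
print(tagger.tag(3, b"<data>")[:2])   # cell 3 -> block 2
```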
Alternatively or additionally, the pool resource manager 812 shown in fig. 8 can monitor packet conditions based on the machine ID and FPGA ID included in the tag. As an example, if the pool resource manager 812 determines, based on the tags of packets, that the number of packets destined for a machine (indicated by the tag's machine ID) is below a certain threshold, it determines that all of that machine's load should be offloaded so that the machine can be idled. Similarly, if a machine includes multiple FPGAs, the different FPGAs can be distinguished by the FPGA ID. In this case, if the pool resource manager 812 determines, based on the tags of packets, that the number of packets destined for a certain FPGA (indicated by the tag's FPGA ID) is below a certain threshold, it determines that all of that FPGA's load should be offloaded so that the FPGA can be idled. Load migration between machines and between FPGAs can be achieved by routing on the destination address of the NGFI protocol. The details of this scheme are described later with reference to fig. 11 and are not repeated here.
In some embodiments, the first processing block processes the first packet and a third packet different from the first packet in parallel. The tags of the first and third packets further indicate the first and second cells associated with the first and third packets, respectively. In this case, if the load of the first processing block increases above a second threshold (e.g., in the early morning), then upon receiving third data associated with one of the first and second cells, a tag may be added to the third data to form a third packet, the tag indicating that cell and a third processing block. The third processing block may be an idle or lightly loaded processing block able to take on the processing of subsequent packets associated with the first cell or the second cell. The third processing block may likewise be determined by the method illustrated in fig. 8.
Alternatively or additionally, the pool resource manager 812 shown in fig. 8 can monitor packet conditions based on the machine ID and FPGA ID included in the tag. As an example, if the pool resource manager 812 determines, based on the tags of packets, that the number of packets destined for a machine (indicated by the tag's machine ID) is above a certain threshold (e.g., the machine is too heavily loaded and latency has increased), it determines that the partial load associated with certain cells of that machine should be offloaded to reduce the machine's load. Similarly, if a machine includes multiple FPGAs, the different FPGAs can be distinguished by the FPGA ID. In this case, if the pool resource manager 812 determines, based on the tags of packets, that the number of packets destined for a certain FPGA (indicated by the tag's FPGA ID) is above a certain threshold, it may determine that part of that FPGA's load should be offloaded, so that the FPGA's load, and hence latency, is reduced.
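Both the low-load and high-load rules reduce to threshold tests on per-destination packet counts. A sketch follows; the counts and thresholds are illustrative, and the same test applies per FPGA using the target FPGA ID.

```python
def migration_decisions(packets_per_machine, low=1_000, high=100_000):
    """Classify destinations by tagged-packet counts seen in an interval.

    Below `low`: offload everything so the machine/FPGA can be idled.
    Above `high`: offload part of the cells to reduce load and latency."""
    decisions = {}
    for dest, count in packets_per_machine.items():
        if count < low:
            decisions[dest] = "offload all, then idle"
        elif count > high:
            decisions[dest] = "offload some cells"
        else:
            decisions[dest] = "leave as is"
    return decisions

print(migration_decisions({"machine-0": 120, "machine-1": 45_000,
                           "machine-2": 250_000}))
```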
In some embodiments, the tag may also indicate the target FPGA to which the packet is to be sent, e.g., including the target FPGA ID as shown in fig. 9. In this case, the target server includes a plurality of FPGAs including the target FPGA, and the first processing block is one of the plurality of processing blocks of the target FPGA. Alternatively or additionally, the tag may also indicate the antenna associated with the packet, including, for example, the antenna ID as shown in fig. 9.
For completeness, a block diagram of a packet switched architecture is described below in connection with fig. 11. The packet-switched architecture is primarily based on the routing functionality of the NGFI protocol, i.e., the NGFI device may redirect the wireless data stream to a new destination address based on the destination NGFI address. The NGFI protocol has its own address space, and each NGFI device is identified by a unique NGFI address.
If the pool resource manager 812 decides to migrate a base station based on the method described above in connection with fig. 8, the target machine and/or target FPGA is determined first. This information is then added to the destination NGFI address in the NGFI protocol header. Accordingly, the NGFI switch 1108 routes the packet to the corresponding machine and/or FPGA based on the destination NGFI address.
Fig. 11 shows an integrated view 1100 of base station migration in accordance with an embodiment of the disclosure. The view 1100 includes a cluster of servers 102, of which only two, servers 418 and 1118 are shown for simplicity. The server 1118 is configured substantially the same as the server 418, and its internal modules are described in detail above in connection with fig. 4 and thus will not be described again. The pool resource manager (not shown) periodically collects resource usage information for the respective servers and/or respective FPGAs sent by the local resource manager (not shown). If the resource usage of one server (e.g., server 418 as shown) is below a predetermined threshold, all base stations hosted by that server 418 will be allocated to one or more other servers, such as server 1118. The pool resource manager will find the appropriate target server and/or target FPGA and target processing block based on the collected resource usage information. The pool resource manager then triggers a tag update event and sends the target FPGA's NGFI address and the target processing block ID to a wireless packet assembler (not shown) at the radio frequency module 108. The wireless packet assembler encapsulates the corresponding wireless packet with the new destination NGFI address and the new label. The wireless packet may be redirected to the appropriate FPGA processing unit through switch 1108.
However, since the header of the NGFI protocol is removed after passing through the NGFI switch 1108, only the data layer SDU, comprising the tag and the I/Q data as shown in fig. 9, remains. In this case, the destination address in the NGFI protocol header can no longer address the processing block, and only the processing block ID in the tag can be relied upon. Tag-based migration between processing blocks is described in detail above and is not repeated here.
Then, data forwarding between the source base station and the target base station is started. The target base station is a newly instantiated base station on the target server that takes over all tasks of the source base station. For example, as shown in fig. 11, all source base stations hosted by server 418 may be handed over to target base stations instantiated on server 1118. The S1 downlink path from the core network 420 to the NIC 416 of the server 418 is then switched to the S1 downlink path from the core network 420 to the NIC 416 of the server 1118, thereby completing the migration from the source base station to the target base station.
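A sketch of the tag update event in this migration flow: the pool resource manager pushes the new destination NGFI address and the target processing block ID to the wireless packet assembler, which encapsulates subsequent wireless packets accordingly. All names and the address notation below are hypothetical.

```python
class WirelessPacketAssembler:
    """At the radio frequency module: encapsulates I/Q data with the current
    destination NGFI address and tag (a sketch; the real NGFI header and
    address formats are still undetermined)."""

    def __init__(self, ngfi_address: str, block_id: int):
        self.ngfi_address, self.block_id = ngfi_address, block_id

    def on_tag_update(self, ngfi_address: str, block_id: int):
        """Tag update event triggered by the pool resource manager."""
        self.ngfi_address, self.block_id = ngfi_address, block_id

    def encapsulate(self, iq: bytes) -> bytes:
        header = self.ngfi_address.encode() + b"|"   # stand-in NGFI header
        return header + bytes([self.block_id]) + iq  # tag: target block ID

assembler = WirelessPacketAssembler("ngfi://server-418/fpga-0", block_id=1)
assembler.on_tag_update("ngfi://server-1118/fpga-0", block_id=4)  # migration
print(assembler.encapsulate(b"<I/Q>"))
```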
The tag-based communication method introduced herein allows tags to be retained throughout physical layer processing, thereby facilitating base station migration and improving the efficiency of centralized processing.
Fig. 15 shows a block diagram of a server 418 according to an embodiment of the present disclosure.
As shown in fig. 15, the server 418 includes at least one high-speed processing device. The high-speed processing device includes a receiver 1502 configured to receive packets from a base station. For example, the receiver 1502 may be the NGFI interface 406 shown in fig. 10. The high-speed processing device further comprises a processing block 408 configured to perform the processing of successive processing stages in the physical layer processing chain for the packet. The high-speed processing device may also include a transmitter 1506 configured to transmit the processed packet to the general-purpose processor 414 of the server 418 for further processing. For example, the transmitter 1506 may be the PCIe interface 410 shown in fig. 10.
In some embodiments, performing processing of successive processing stages in a physical layer processing chain comprises: processing at the compute-intensive processing stages in the physical layer processing chain is performed. In some embodiments, performing processing of successive processing stages in a physical layer processing chain comprises: processing of successive processing stages in the physical layer processing chain terminating in a processing stage having a small output bandwidth is performed. In some embodiments, performing processing of successive processing stages in a physical layer processing chain comprises: processing of all processing stages in the physical layer processing chain is performed. In some embodiments, the high speed processing device may be a Field Programmable Gate Array (FPGA), for example as shown in fig. 10.
Fig. 16 shows a block diagram of a server 418 according to an embodiment of the present disclosure.
As shown in fig. 16, the server 418 includes at least one high-speed processing device. The high-speed processing device includes a receiver 1602 configured to receive packets from a general-purpose processor of the server, where the general-purpose processor performs the processing of successive processing stages in a physical layer processing chain for the packet. For example, the receiver 1602 may be the PCIe interface 410 shown in fig. 10. The high-speed processing device further comprises a processing block 408 configured to perform the processing of the remaining successive processing stages in the physical layer processing chain for the packet. As shown in fig. 16, the high-speed processing device can also include a transmitter 1606 configured to transmit the processed packet to a base station. For example, the transmitter 1606 may be the NGFI interface 406 shown in fig. 10.
In some embodiments, performing processing of the remaining consecutive processing stages in the physical layer processing chain comprises: the compute-intensive processing stages in the physical layer processing chain are executed. In some embodiments, performing processing of the remaining consecutive processing stages in the physical layer processing chain comprises: the processing of successive processing stages in the physical layer processing chain starting from a processing stage with a small input bandwidth is performed. In some embodiments, performing processing of the remaining consecutive processing stages in the physical layer processing chain may comprise performing processing of all processing stages of the physical layer processing chain if the performance of the general purpose processor is below a first predetermined threshold. Alternatively or additionally, the processing of the Inverse Fast Fourier Transform (IFFT) processing stage of the physical layer processing chain is performed in case the performance of the general purpose processor is above a second predetermined threshold.
In some embodiments, the high speed processing device may be a Field Programmable Gate Array (FPGA), for example as shown in fig. 10.
Fig. 17 shows a block diagram of a server 418 according to an embodiment of the present disclosure.
As shown in FIG. 17, server 418 includes at least one high-speed processing device. The high speed processing device includes a receiver 1702 configured to receive a packet from a base station, the packet including a tag indicating one of a plurality of processing blocks included by at least one high speed processing device. The high-speed processing device may also include a distributor 1024 configured to distribute the packets to the processing blocks indicated by the tags. As shown in fig. 17, the high speed processing device may also include a processing block 408 configured to perform physical layer processing on the packet. In some embodiments, the high speed processing device may be a Field Programmable Gate Array (FPGA), for example as shown in fig. 10.
In some embodiments, the tag further indicates a cell associated with the packet, and the at least one high speed processing device is further configured to: removing the tag from the packet; and storing the packet in a buffer corresponding to the cell for transmission to a general purpose processor for further processing.
Fig. 18 shows a block diagram of a server 418 according to an embodiment of the present disclosure.
As shown in FIG. 18, server 418 includes at least one high-speed processing device. The high speed processing device includes a receiver 1802 configured to receive packets from a general purpose processor included with the server. The receiver 1802 may be the PCIe interface 410 as shown in fig. 10. The high speed processing device also includes an assembler 1026 configured to add a label to the packet, the label indicating the cell to which the packet is to be sent. As shown in fig. 18, the high speed processing device further comprises a processing block 408 configured to perform physical layer processing on the packet; and a transmitter 1808 configured to transmit the packet to a base station for transmission to a terminal device. For example, the transmitter 1808 may be the NGFI interface 406 as shown in fig. 10.
In some embodiments, performing physical layer processing on the packet comprises: an indication of the antenna to which the packet is to be sent is added to the tag. In some embodiments, the high speed processing device may be a Field Programmable Gate Array (FPGA), for example as shown in fig. 10.
Fig. 19 shows a block diagram of a base station apparatus 104 according to an embodiment of the present disclosure. The base station equipment comprises a radio frequency module. The radio frequency module includes a receiver 1902 configured to receive first data from a terminal device; an adder 1904 configured to add a label to the first data to form a first packet, the label indicating a target server in a server cluster associated with the base station apparatus to which the first packet is to be sent and a first processing block in the target server; and a transmitter 1906 configured to transmit the first packet to the first processing block.
In some embodiments, the tag of the first packet further indicates a first cell associated with the first packet, the radio frequency module further configured to: receiving second data associated with the first cell in response to the load of the first processing block decreasing below a first threshold; and adding a tag to the second data to form a second packet, the tag indicating the first cell and a second processing block different from the first processing block.
In some embodiments, the first processing block processes the first packet and a third packet different from the first packet in parallel, the labels of the first packet and the third packet further indicating a first cell and a second cell associated with the first packet and the third packet, respectively, the radio frequency module further configured to: receiving third data associated with one of the first cell and the second cell in response to the load of the first processing block increasing above a second threshold; and adding a label to the third data to form a third packet, the label indicating that cell and a third processing block.
In some embodiments, the tag further indicates a target high speed processing device to which the packet is to be sent, wherein the target server includes a plurality of high speed processing devices including the target high speed processing device, and the first processing block is one of the plurality of processing blocks of the target high speed processing device. Alternatively or additionally, the tag also indicates the antenna associated with the packet. In some embodiments, the high speed processing device comprises a Field Programmable Gate Array (FPGA).
Although the principles of the present disclosure are described herein primarily in connection with the NGFI protocol, those skilled in the art will readily appreciate that these concepts are not so limited and may be applied to other communication protocols, whether now existing or later developed. It should also be understood that although the label-based communication method is described above in connection with an FPGA, the method may also be adapted to other devices without departing from the principles of the present disclosure.
The modules included in the server 418 and the base station apparatus 104 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof, without departing from the principles of the present disclosure. In one embodiment, one or more modules may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to, or in the alternative to, machine-executable instructions, some or all of the modules in the server 418 and the base station apparatus 104 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
In general, the various example embodiments of this disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While aspects of embodiments of the disclosure have been illustrated or described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
By way of example, embodiments of the disclosure may be described in the context of machine-executable instructions, such as those included in program modules, executed on a target real or virtual processor in a device. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. In various embodiments, the functionality of the program modules may be combined or divided between program modules as desired. Machine-executable instructions for program modules may be executed within a local device or within distributed devices. In a distributed deployment, program modules may be located in both local and remote storage media.
Computer program code for implementing the methods of the present disclosure may be written in one or more programming languages. Such program code may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the computer or other programmable data processing apparatus, causes the functions/acts specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on a computer, partly on a computer, as a stand-alone software package, partly on a computer and partly on a remote computer, or entirely on a remote computer or server.
In the context of this disclosure, a machine-readable medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More specific examples of a machine-readable storage medium include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical storage device, a magnetic storage device, or any suitable combination thereof.
Additionally, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking or parallel processing may be beneficial. Likewise, while the above discussion contains certain specific implementation details, this should not be construed as limiting the scope of any invention or claims, but rather as describing particular embodiments that may be directed to particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (8)

1. A method of communication, comprising:
receiving, at a server cluster associated with a base station, a packet from the base station;
performing, by a high-speed processing device of the server cluster, processing of successive processing stages in a physical layer processing chain for the packet, wherein the successive processing stages start at a starting processing stage in the physical layer processing chain and end at a processing stage having a small output bandwidth; and
sending the processed packet to a general purpose processor of the server cluster for further processing by the general purpose processor, the further processing including remaining processing stages in the physical layer processing chain after the processing stage having the small output bandwidth.
2. The communication method of claim 1, wherein performing processing of successive processing stages in a physical layer processing chain comprises:
performing processing of the compute-intensive processing stages in the physical layer processing chain.
3. The communication method of claim 1, wherein performing processing of successive processing stages in a physical layer processing chain comprises:
performing processing of all processing stages in the physical layer processing chain.
4. The communication method of claim 1, wherein the high-speed processing device comprises a Field Programmable Gate Array (FPGA).
5. A server, comprising:
at least one high speed processing device configured to:
receiving a packet from a base station;
performing processing of successive processing stages in a physical layer processing chain for the packet, the successive processing stages starting at a starting processing stage in the physical layer processing chain and ending at a processing stage having a small output bandwidth; and
sending the processed packet to a general purpose processor of the server for further processing by the general purpose processor, the further processing including remaining processing stages in the physical layer processing chain after the processing stage having the small output bandwidth.
6. The server of claim 5, wherein performing processing of successive processing stages in a physical layer processing chain comprises:
performing processing of the compute-intensive processing stages in the physical layer processing chain.
7. The server of claim 5, wherein performing processing of successive processing stages in a physical layer processing chain comprises:
performing processing of all processing stages in the physical layer processing chain.
8. The server of claim 5, wherein the at least one high-speed processing device comprises a Field Programmable Gate Array (FPGA).
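To make the division of labor in claim 1 concrete, the following sketch shows the claimed rule under invented numbers: the stage names and bandwidth figures are hypothetical, and in practice the first segment runs on an FPGA while the remainder runs on a general-purpose processor.

```python
# Hypothetical uplink physical layer chain; the second element of each pair
# is an invented output bandwidth in Gbit/s. Per claim 1, the high-speed
# device runs successive stages from the start of the chain up to and
# including the first stage whose output bandwidth is small, so only a
# thin data stream crosses over to the general-purpose processor.
CHAIN = [
    ("fft", 100.0),
    ("channel_estimation", 60.0),
    ("equalization", 40.0),
    ("demodulation", 2.0),  # output bandwidth drops sharply here
    ("descrambling", 2.0),
    ("decoding", 1.0),
]

def split_chain(chain, small_bandwidth=5.0):
    """Return (stages for the high-speed device, stages for the processor)."""
    for i, (_, out_bw) in enumerate(chain):
        if out_bw <= small_bandwidth:
            return chain[: i + 1], chain[i + 1 :]
    return chain, []  # covers claim 3: all stages on the high-speed device

fpga_stages, cpu_stages = split_chain(CHAIN)
print([name for name, _ in fpga_stages])
# ['fft', 'channel_estimation', 'equalization', 'demodulation']
print([name for name, _ in cpu_stages])
# ['descrambling', 'decoding']
```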
CN201610560269.4A 2016-07-15 2016-07-15 Communication method, server and base station equipment Active CN107623926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610560269.4A CN107623926B (en) 2016-07-15 2016-07-15 Communication method, server and base station equipment

Publications (2)

Publication Number Publication Date
CN107623926A CN107623926A (en) 2018-01-23
CN107623926B (en) 2023-01-31

Family

ID=61087604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610560269.4A Active CN107623926B (en) 2016-07-15 2016-07-15 Communication method, server and base station equipment

Country Status (1)

Country Link
CN (1) CN107623926B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112996070B (en) * 2021-03-04 2023-03-24 网络通信与安全紫金山实验室 Data transmission method and system based on distributed non-cellular network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7986672B2 (en) * 2002-02-25 2011-07-26 Qualcomm Incorporated Method and apparatus for channel quality feedback in a wireless communication
CA2609794C (en) * 2005-05-12 2013-12-03 Qualcomm Incorporated Apparatus and method for channel interleaving in communications system
CN101594707B (en) * 2008-05-29 2012-08-08 国际商业机器公司 Receiving and transmitting unit and data processing system for communication base station
CN103067218B (en) * 2012-12-14 2016-03-02 华中科技大学 A kind of express network packet content analytical equipment
CN103970708B (en) * 2014-03-18 2017-01-04 中国航天科工信息技术研究院 Communication means between a kind of FPGA and general processor and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cloud-RAN-based cooperative radio access network architecture; Xu Zhan et al.; Journal of Electronic Measurement and Instrumentation; 2015-07-31; Vol. 29, No. 7; pp. 1-8 *
Research on standard use cases for mobile base station virtualization; Zhang Ke et al.; Telecom Engineering Technics and Standardization; 2016-06-30; Vol. 29, No. 6; full text *

Also Published As

Publication number Publication date
CN107623926A (en) 2018-01-23

Similar Documents

Publication Title
CN113934660B (en) Accelerating network packet processing
US11249779B2 (en) Accelerator interconnect assignments for virtual environments
US11301275B2 (en) Cross-function virtualization of a telecom core network
CN107852413B (en) Network device, method and storage medium for offloading network packet processing to a GPU
US11296807B2 (en) Techniques to operate a time division multiplexing(TDM) media access control (MAC)
US10085302B2 (en) Access node architecture for 5G radio and other access networks
RU2643626C1 (en) Method of distributing acceptable packages, queue selector, package processing device and information media
US10135599B2 (en) Frequency domain compression for fronthaul interface
US20150009823A1 (en) Credit flow control for ethernet
US10080215B2 (en) Transportation of user plane data across a split fronthaul interface
US20230319520A1 (en) Wireless network access to wireless network slices over a common radio channel
CN107623926B (en) Communication method, server and base station equipment
EP3462690A1 (en) Packet sequence batch processing
JP6415556B2 (en) Method, apparatus, and computer program for allocating computing elements within a data receiving link (computing element allocation within a data receiving link)
EP4231148A1 (en) Method and apparatus for allocating gpu to software package
US11201829B2 (en) Technologies for pacing network packet transmissions
CN103701717A (en) Method, device and system for processing user data in cloud base station
US11050682B2 (en) Reordering of data for parallel processing
TWI491295B (en) A method and apparatus for synchronizing resource allocation instructions in a wireless network
US9319327B2 (en) Packet transmission method, packet transmission apparatus, and storage medium
US11711728B2 (en) Wireless access network element status reporting
CN109996129B (en) Service data processing method and device
US20240114398A1 (en) Adaptive resource allocation for a wireless telecommunication network fronthaul link
US20220256391A1 (en) Layer one execution control
WO2023235016A1 (en) Load estimation and balancing in virtualized radio access networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220112

Address after: 7/F, 388 Ningqiao Road, Pudong New Area Pilot Free Trade Zone, Shanghai, 201206

Applicant after: Shanghai Nokia Bell Software Co., Ltd.

Address before: No. 388, Ningqiao Road, Jinqiao, Pudong New Area, Shanghai, 201206

Applicant before: NOKIA SHANGHAI BELL Co., Ltd.

GR01 Patent grant