WO2020035043A1 - A method and apparatus for layer 1 acceleration in c-ran - Google Patents

Info

Publication number
WO2020035043A1
Authority
WO
WIPO (PCT)
Prior art keywords
acceleration
worker
layer
queues
subtask
Application number
PCT/CN2019/100937
Other languages
French (fr)
Inventor
Bi WANG
Original Assignee
Nokia Shanghai Bell Co., Ltd.
Application filed by Nokia Shanghai Bell Co., Ltd. filed Critical Nokia Shanghai Bell Co., Ltd.
Publication of WO2020035043A1 publication Critical patent/WO2020035043A1/en

Classifications

    • H04L 47/6295: Queue scheduling characterised by scheduling criteria, using multiple queues, one for each individual QoS, connection, flow or priority
    • H04L 47/6215: Queue scheduling characterised by scheduling criteria, with an individual queue per QoS, rate or priority
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria, for service slots or service orders based on priority
    • H04W 52/0203: Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H04W 8/04: Registration at HLR or HSS [Home Subscriber Server]

Definitions

  • the present disclosure relates to the field of communication technologies, in particular relates to a technique for Layer 1 acceleration in C-RAN.
  • C-RAN Cloud-Radio Access Network
  • For the Layer 1 user plane, since operations such as FFT (Fast Fourier Transform) /iFFT (Inverse Fast Fourier Transform) are computing intensive, specialized SOC (System on Chip) /DSP (Digital Signal Processing) chips are usually employed in traditional base stations for acceleration. There are no corresponding acceleration instructions on general-purpose processors such as x86, and therefore Layer 1 user plane data operations cannot be carried out efficiently.
  • FFT Fast Fourier Transform
  • iFFT Inverse Fast Fourier Transform
  • SOC System On Chip
  • DSP Digital Signal Processing
  • In some existing deployments, the Layer 1 user plane is still located in the RAP (Radio Access Point) , and uses DSP chips for acceleration of Layer 1 processing.
  • RAP Radio Access Point
  • VNF Virtualized network function
  • SIMD Single Instruction Multiple Data
  • One existing approach employs DSP chips to fabricate acceleration boards, which connect to cloud servers through PCIe (Peripheral Component Interconnect Express, a bus and interface standard) interfaces.
  • PCIe Peripheral Component Interconnect Express, a bus and interface standard
  • the objective of the present disclosure is to provide a method and apparatus for Layer 1 acceleration in C-RAN.
  • a method for Layer 1 acceleration in C-RAN comprising:
  • said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
  • SOC system on chip
  • DSP digital signal processing
  • said next level target includes at least one of the following:
  • RRU remote radio unit
  • said next level target includes at least one of said queues
  • the execution result obtained includes generation of at least one new task
  • said step of outputting said execution result to a next level target comprises:
  • the method further comprises:
  • the method further comprises:
  • the method further comprises:
  • an acceleration apparatus for Layer 1 acceleration in C-RAN wherein, the acceleration apparatus comprises:
  • a receiving device for receiving a Layer 1 acceleration cloud task sent by a user;
  • an assigning device for dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting;
  • a triggering device for triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result;
  • an outputting device for outputting said execution result to a next level target.
  • said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
  • SOC system on chip
  • DSP digital signal processing
  • said next level target includes at least one of the following:
  • RRU remote radio unit
  • said next level target includes at least one of said queues
  • the execution result obtained includes generation of at least one new task
  • said outputting device is for:
  • the acceleration apparatus further comprises:
  • a parsing device for parsing a data packet corresponding to said Layer 1 acceleration cloud task and determining said priority setting.
  • the acceleration apparatus further comprises:
  • an adjusting device for adjusting said at least one worker’s execution support towards said queues according to assignment conditions of the subtasks in said queues and in connection with load conditions of said at least one worker.
  • the acceleration apparatus further comprises:
  • a judging device for acquiring load conditions of said at least one worker, and if the load of said worker is lower than a predetermined threshold, putting said worker to sleep or shutting it down.
  • the present disclosure carries out the following operations: receiving a Layer 1 acceleration cloud task sent by a user; dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting; triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result; and outputting said execution result to a next level target.
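The four operations above can be sketched end to end with Python's standard library. This is a minimal illustration only; the names (divide, assign, worker) and the dictionary-based task format are assumptions, not taken from the disclosure.

```python
import queue

NUM_QUEUES = 3
queues = [queue.Queue() for _ in range(NUM_QUEUES)]  # index 0 = highest priority

def divide(cloud_task):
    """Divide a Layer 1 acceleration cloud task into stateless subtasks."""
    return cloud_task["subtasks"]

def assign(subtasks):
    """Assign each subtask to a queue according to its priority setting."""
    for st in subtasks:
        queues[st["priority"]].put(st)

def worker(work_setting):
    """Acquire subtasks from the queues named in the work setting and execute them."""
    results = []
    for qi in work_setting:
        while not queues[qi].empty():
            st = queues[qi].get()
            results.append(f"done:{st['name']}")  # stand-in for the execution result
    return results

cloud_task = {"subtasks": [
    {"name": "crc", "priority": 0},
    {"name": "encode", "priority": 1},
    {"name": "ifft", "priority": 2},
]}
assign(divide(cloud_task))
out = worker([0, 1, 2])  # `out` would then go to a next level target
```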
  • Layer 1 processing is accelerated in a unified manner, providing a cloudized, flexible and economical solution for Layer 1 processing.
  • the present disclosure can establish a Layer 1 processing resource pool.
  • Layer 1 processing can be divided into multiple small stateless jobs, such as CRC (Cyclic Redundancy Check) , encoding, decoding, iFFT (Inverse Fast Fourier Transform) , FFT (Fast Fourier Transform) and so on.
  • CRC Cyclic Redundancy Check
  • encoding: e.g., arithmetic coding
  • decoding: e.g., arithmetic decoding
  • iFFT Inverse Fast Fourier Transform
  • FFT Fast Fourier Transform
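As one concrete example of a small stateless job, attaching and checking a CRC can be sketched as follows. CRC-32 from Python's zlib is used purely for illustration; actual Layer 1 processing uses other CRC polynomials (e.g., CRC-24/CRC-16 in LTE/NR), and the function names are hypothetical.

```python
import zlib

def attach_crc(payload: bytes) -> bytes:
    """Append a CRC-32 checksum to a transport block (illustrative polynomial)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(block: bytes) -> bool:
    """Recompute the CRC over the payload and compare it with the attached value."""
    payload, crc = block[:-4], int.from_bytes(block[-4:], "big")
    return zlib.crc32(payload) == crc

block = attach_crc(b"transport block")
```

Because each call depends only on its input bytes, such a job carries no state between invocations and can run on any free worker.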
  • Some converters, such as CPRI (Common Public Radio Interface) converters, are introduced in order to assign antenna data to different RRUs (Remote Radio Units) .
  • CPRI Common Public Radio Interface
  • RRU Remote Radio Unit
  • Fig. 1 illustrates the flowchart of a method for Layer 1 acceleration in C-RAN in accordance with an aspect of the present disclosure
  • Fig. 2 illustrates the architectural diagram of Layer 1 acceleration in C-RAN in accordance with a preferred embodiment of the present disclosure
  • Figs. 3 to 6 illustrate the schematic diagrams of Layer 1 acceleration in C-RAN in accordance with another preferred embodiment of the present disclosure
  • Fig. 7 illustrates the schematic diagram of Layer 1 acceleration in C-RAN in accordance with yet another preferred embodiment of the present disclosure
  • Fig. 8 illustrates the schematic diagram of Layer 1 acceleration in C-RAN in accordance with yet another preferred embodiment of the present disclosure
  • Fig. 9 illustrates the schematic diagram of Layer 1 acceleration in C-RAN in accordance with yet another preferred embodiment of the present disclosure.
  • the term “base station” used here may be regarded as synonymous with, and is sometimes referred to below as, the following: Node B, evolved Node B, eNodeB, eNB, base transceiver station (BTS) , RNC, etc., and may describe a transceiver that communicates with mobile terminals and provides wireless resources in a wireless communication network spanning a plurality of technology generations. Besides the capability of implementing the methods discussed above, the base station as discussed may have all functions associated with conventional, well-known base stations.
  • the methods discussed infra may be implemented through hardware, software, firmware, middleware, microcode, hardware description language or any combination thereof.
  • the program code or code segment for executing essential tasks may be stored in a machine or a computer readable medium (e.g., storage medium) .
  • processors may implement essential tasks.
  • Fig. 1 illustrates the flowchart of a method for Layer 1 acceleration in C-RAN in accordance with an aspect of the present disclosure.
  • the method includes steps S101, S102, S103 and S104.
  • In step S101, acceleration apparatus 1 receives a Layer 1 acceleration cloud task sent by a user.
  • Fig. 2 illustrates the architectural diagram of Layer 1 acceleration in C-RAN in accordance with a preferred embodiment of the present disclosure.
  • the architecture includes one job entrance, a number N of queues, and N workers corresponding to the N queues.
  • the outputs of the workers can go to antennas, antenna data converters, receivers, or go back into the queues.
  • acceleration apparatus 1 receives the Layer 1 acceleration cloud task sent by the user through an agreed communication method.
  • the Layer 1 acceleration cloud task can be a simple job, such as attaching CRC (Cyclic Redundancy Check) , or can be a large job, for example including all the processing from attaching CRC to OFDM (Orthogonal Frequency Division Multiplexing) signal generation.
  • the Layer 1 acceleration cloud task can further include encoding, decoding, iFFT (Inverse Fast Fourier Transform) , FFT (Fast Fourier Transform) and other jobs.
  • the Layer 1 acceleration cloud task can be, for example, an execution result or a data packet from Layer 2.
  • Layer 1 acceleration cloud task is merely for example and not intended to be in fact limiting of the present disclosure, and other Layer 1 acceleration cloud tasks that are now existing or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and incorporated herein by reference.
  • acceleration apparatus 1 divides said Layer 1 acceleration cloud task into at least one subtask, and assigns said at least one subtask to different queues according to priority setting.
  • acceleration apparatus 1 divides the Layer 1 acceleration cloud task received from the user into at least one subtask.
  • the Layer 1 acceleration cloud task is divided into multiple small stateless jobs, such as CRC (Cyclic Redundancy Check) , encoding, decoding, iFFT (Inverse Fast Fourier Transform) , FFT (Fast Fourier Transform) and other subtasks.
  • CRC Cyclic Redundancy Check
  • iFFT Inverse Fast Fourier Transform
  • FFT Fast Fourier Transform
  • Layer 1 operations can be unrelated to the user, and related only to the encoding, resource mapping and description in the Layer 1 acceleration cloud task.
  • acceleration apparatus 1 assigns said at least one subtask to different queues according to priority setting. For example, assuming that the Layer 1 acceleration cloud task received by acceleration apparatus 1 from the job entrance is divided into 6 subtasks, acceleration apparatus 1 assigns the 6 subtasks to 6 different queues according to priority setting. Alternatively, acceleration apparatus 1 can assign the 6 subtasks to 4 different queues according to priority setting, wherein 2 of the queues receive 2 subtasks each.
  • one queue can be assigned with multiple subtasks, or can be assigned with only one subtask.
  • acceleration apparatus 1 assigns a subtask of high priority to a queue of high priority, or assigns the subtask of high priority to a corresponding queue with a high-performance worker, or assigns the subtask of high priority to a corresponding queue with a greater number of workers, or carries out full mapping on the subtask of high priority.
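The assignment strategies just listed can be combined into a simple selection rule. The scoring below (first prefer a queue whose priority matches the subtask, then prefer the queue served by more workers) is one hypothetical reading of the text, not the disclosed algorithm.

```python
def pick_queue(subtask_priority, queue_info):
    """Pick a queue index for a subtask: prefer a queue whose priority matches
    the subtask's priority, then prefer the queue served by more workers."""
    return max(
        range(len(queue_info)),
        key=lambda i: (queue_info[i]["priority"] == subtask_priority,
                       queue_info[i]["workers"]),
    )

queue_info = [
    {"priority": 1, "workers": 1},
    {"priority": 0, "workers": 2},
    {"priority": 0, "workers": 3},  # matching priority and the most workers
]
chosen = pick_queue(0, queue_info)  # a priority-0 subtask lands on queue 2
```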
  • the method further includes a step S105 (not shown) .
  • In step S105, acceleration apparatus 1 parses the data packet corresponding to said Layer 1 acceleration cloud task, and determines said priority setting. Thereafter, in step S102, acceleration apparatus 1 assigns the at least one subtask to different queues according to the priority setting determined in step S105.
  • acceleration apparatus 1 parses the data packet corresponding to the Layer 1 acceleration cloud task.
  • the priority setting of the Layer 1 acceleration cloud task is defined in the description file of the data packet, and acceleration apparatus 1 determines the priorities of the respective subtasks corresponding to the Layer 1 acceleration cloud task through parsing the data packet, i.e. determining which subtasks will be assigned to which queues.
  • the Layer 1 acceleration cloud task includes the data packet as execution result from Layer 2.
  • acceleration apparatus 1 parses the data packet corresponding to the Layer 1 acceleration cloud task, and obtains information about priority setting through the description file of the data packet.
  • acceleration apparatus 1 receives a Layer 1 acceleration cloud task from the job entrance, wherein the Layer 1 acceleration cloud task contains for example 6 subtasks; in step S105, acceleration apparatus 1 parses the data packet corresponding to the Layer 1 acceleration cloud task and determines the related priority setting, for example, the priority setting indicates that the 6 subtasks contained in the Layer 1 acceleration cloud task have the same priority without one of the subtasks having higher or lower priority; then in step S102, acceleration apparatus 1 divides the Layer 1 acceleration cloud task into 6 subtasks and assigns the 6 subtasks to 6 different queues according to the priority setting, wherein the 6 different queues can have the same priority.
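A minimal sketch of the parsing step, assuming the description file carried in the packet is JSON; the disclosure does not fix a concrete format, so the field names here are illustrative.

```python
import json

def parse_priority_setting(packet: bytes) -> dict:
    """Parse the description file of a task packet and return per-subtask priorities."""
    desc = json.loads(packet.decode())
    # subtasks without an explicit priority default to 0 (an assumption)
    return {st["name"]: st.get("priority", 0) for st in desc["subtasks"]}

packet = json.dumps(
    {"subtasks": [{"name": "crc"}, {"name": "fft", "priority": 2}]}
).encode()
priorities = parse_priority_setting(packet)
```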
  • acceleration apparatus 1 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
  • acceleration apparatus 1 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute according to work setting.
  • the work setting of worker 1 is to acquire tasks from queue 1, queue 2 and queue 3.
  • acceleration apparatus 1 triggers worker 1 to acquire the corresponding subtasks from queue 1, queue 2 and queue 3 and execute them according to the work setting, thereby obtaining execution results.
  • one worker can be configured to acquire tasks from one or more queues.
  • the respective queues can correspond to different priorities. For example, the abovementioned queue 1, queue 2 and queue 3 are ranked in priority from high to low. Then worker 1 can also be configured to acquire the corresponding subtasks in turn from queue 1, queue 2 and queue 3 according to priority from high to low.
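Acquiring subtasks in turn from high-priority to low-priority queues, as described for worker 1, can be sketched as follows; the function name and string subtasks are illustrative.

```python
import queue

def next_subtask(qs):
    """Scan a worker's queues from highest to lowest priority and
    return the first subtask found, or None if all queues are empty."""
    for q in qs:
        try:
            return q.get_nowait()
        except queue.Empty:
            continue
    return None

qs = [queue.Queue() for _ in range(3)]  # qs[0] is the highest-priority queue
qs[1].put("encode")
qs[2].put("ifft")
first = next_subtask(qs)  # queue 0 is empty, so the queue-1 subtask is served first
```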
  • said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
  • SOC system on chip
  • DSP digital signal processing
  • In step S103, acceleration apparatus 1 triggers at least one of these workers to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
  • In step S104, acceleration apparatus 1 outputs said execution result to a next level target.
  • acceleration apparatus 1 outputs the execution result to a next level target, or triggers the abovementioned respective workers to output their execution results to a next level target.
  • the output result of a worker is outputted to an antenna or antenna data converter.
  • their execution results are outputted to an antenna data converter, and then outputted from the antenna data converter to antennas.
  • the output result of a worker is outputted to a receiver monitoring the execution result.
  • worker N in Fig. 2 its execution result can be outputted to an antenna, or can also be outputted to a receiver.
  • said next level target includes at least one of the following:
  • RRU remote radio unit
  • next level target is merely for example and not intended to be in fact limiting of the present disclosure, and other next level targets that are now existing or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and incorporated herein by reference.
  • Figs. 3 to 6 illustrate the schematic diagrams of Layer 1 acceleration in C-RAN in accordance with another preferred embodiment of the present disclosure.
  • the Layer 1 acceleration cloud task sent by the user includes 6 subtasks.
  • the Layer 1 acceleration cloud task is divided into 6 subtasks, wherein subtask 1 and subtask 2 are assigned to queue 1, subtask 3 is assigned to queue 2, subtask 4 is assigned to queue 3, and subtask 5 and subtask 6 are assigned to queue N.
  • worker 1 acquires subtask 1 from queue 1 and executes according to work setting
  • worker 2 acquires subtask 3 from queue 2 and executes according to work setting
  • worker N acquires subtask 5 from queue N and executes.
  • worker 1 outputs execution result 1 obtained through executing subtask 1 via the antenna data converter to an antenna, and worker 1 continues to acquire subtask 2 from queue 1 and execute; worker 2 outputs execution result 3 obtained through executing subtask 3 via the antenna data converter to another antenna, and worker 2 continues to acquire subtask 4 from queue 3 and execute; worker N outputs execution result 5 obtained through executing subtask 5 to another antenna, and worker N continues to acquire subtask 6 from queue N and execute.
  • the execution result obtained includes generation of at least one new task.
  • acceleration apparatus 1 assigns said at least one new task to at least one of said queues.
  • In step S103, the worker acquires the corresponding subtask from the queue and executes it, and the execution result obtained can be the generation of at least one new task; in step S104, acceleration apparatus 1 assigns the at least one new task to at least one of the queues, and at least one of the workers then acquires the new task from the queue and executes it.
  • worker 1 acquires subtask 2 from queue 1 and executes, the execution result obtained being a new task 2’ , and the new task 2’ is assigned to queue 1.
  • the new task 2’ later can continue to be acquired and executed by one of the workers, and of course the new task 2’ can also be assigned to other queues, for example can also be assigned to queue 2.
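Re-enqueueing a generated task, as with new task 2’ above, can be sketched as follows; the "crc spawns encode" chain is a hypothetical example of one subtask producing a follow-up task rather than a final output.

```python
import queue

q1 = queue.Queue()

def execute(subtask):
    """Execute a subtask; some results are new tasks rather than final outputs."""
    if subtask == "crc":
        return {"new_task": "encode"}  # hypothetical follow-up stage
    return {"result": f"done:{subtask}"}

q1.put("crc")
outcome = execute(q1.get())
if "new_task" in outcome:
    q1.put(outcome["new_task"])  # assign the new task back to a queue
```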
  • Other workers such as worker 2, acquire subtask 4 from queue 3 and execute, and output execution result 4 via the antenna data converter to an antenna.
  • Worker N acquires subtask 6 from queue N and executes, and outputs execution result 6 to a receiver monitoring the execution result.
  • the method further includes a step S106 (not shown) .
  • In step S106, acceleration apparatus 1 adjusts said at least one worker’s execution support towards said queues according to the assignment conditions of the subtasks in said queues, in connection with the load conditions of said at least one worker.
  • acceleration apparatus 1 can learn the assignment conditions of the subtasks in the respective queues and the load conditions of the respective workers. For example, acceleration apparatus 1 learns the following conditions: how many subtasks are assigned to the respective queues, the priorities of the respective queues, from which queues will the respective workers acquire subtasks and execute according to work setting, etc. Acceleration apparatus 1 adjusts the respective workers’ execution support towards the queues according to the assignment conditions of the subtasks in these queues and in connection with the load conditions of the respective workers. For example, when some queue is assigned with multiple subtasks and some worker is in idle state, acceleration apparatus 1 can trigger that worker to acquire the subtasks in that queue and execute, thereby relieving the load pressure on other workers and balancing the loads among the respective workers.
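The rebalancing described above can be sketched as a greedy rule that points each idle worker at the currently longest queue. The busy threshold and the one-subtask drain per worker are illustrative assumptions, not values from the disclosure.

```python
def rebalance(queue_lengths, worker_loads, busy_threshold=1):
    """Return a mapping {worker index: queue index} that directs each idle
    worker (load below busy_threshold) to the currently longest queue."""
    assignment = {}
    lengths = list(queue_lengths)
    for w, load in enumerate(worker_loads):
        if load < busy_threshold and max(lengths) > 0:
            qi = lengths.index(max(lengths))
            assignment[w] = qi
            lengths[qi] -= 1  # assume the worker drains one subtask
    return assignment

# queue 1 holds 5 subtasks; workers 0 and 2 are idle, worker 1 is busy
plan = rebalance(queue_lengths=[0, 5, 1], worker_loads=[0, 2, 0])
```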
  • subtasks 1 to 6 are all assigned to queue 2.
  • worker 1 and worker 2 are configured to acquire the subtasks in queue 2 and execute.
  • the method further includes a step S107 (not shown) .
  • In step S107, acceleration apparatus 1 acquires the load conditions of said at least one worker, and puts a worker to sleep or shuts it down if its load is lower than a predetermined threshold.
  • acceleration apparatus 1 can acquire the load conditions of the respective workers, and if the load of some worker is lower than a predetermined threshold, acceleration apparatus 1 can put that worker to sleep or shut it down.
  • the predetermined threshold can be preset by the system to judge the load conditions of the worker, or can also be adjusted according to actual conditions.
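The threshold check in step S107 amounts to a small power-state policy. The two thresholds below, and the separate shutdown level, are illustrative values chosen for the sketch, not from the disclosure.

```python
def power_state(load, sleep_threshold=0.2, shutdown_threshold=0.05):
    """Decide a worker's power state from its current load fraction."""
    if load < shutdown_threshold:
        return "shutdown"  # load is negligible: power the worker off
    if load < sleep_threshold:
        return "sleep"     # load is low: put the worker to sleep
    return "active"
```

In practice the thresholds could be preset by the system or adjusted at run time, as the surrounding text notes.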
  • an acceleration apparatus for Layer 1 acceleration in C-RAN comprising: a receiving device 201 (not shown) , for receiving a Layer 1 acceleration cloud task sent by a user; an assigning device 202 (not shown) , for dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting; a triggering device 203 (not shown) , for triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result; an outputting device 204 (not shown) , for outputting said execution result to a next level target.
  • Receiving device 201 receives a Layer 1 acceleration cloud task sent by a user.
  • Fig. 2 illustrates the architectural diagram of Layer 1 acceleration in C-RAN in accordance with a preferred embodiment of the present disclosure.
  • the architecture includes one job entrance, a number N of queues, and N workers corresponding to the N queues.
  • the outputs of the workers can go to antennas, antenna data converters, receivers, or go back into the queues.
  • the Layer 1 acceleration cloud task can be a simple job, such as attaching CRC (Cyclic Redundancy Check) , or can be a large job, for example including all the processing from attaching CRC to OFDM (Orthogonal Frequency Division Multiplexing) signal generation.
  • the Layer 1 acceleration cloud task can further include encoding, decoding, iFFT (Inverse Fast Fourier Transform) , FFT (Fast Fourier Transform) and other jobs.
  • the Layer 1 acceleration cloud task can be, for example, an execution result or a data packet from Layer 2.
  • Layer 1 acceleration cloud task is merely for example and not intended to be in fact limiting of the present disclosure, and other Layer 1 acceleration cloud tasks that are now existing or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and incorporated herein by reference.
  • Assigning device 202 divides said Layer 1 acceleration cloud task into at least one subtask, and assigns said at least one subtask to different queues according to priority setting.
  • assigning device 202 divides the Layer 1 acceleration cloud task received from the user into at least one subtask.
  • the Layer 1 acceleration cloud task is divided into multiple small stateless jobs, such as CRC (Cyclic Redundancy Check) , encoding, decoding, iFFT (Inverse Fast Fourier Transform) , FFT (Fast Fourier Transform) and other subtasks.
  • CRC Cyclic Redundancy Check
  • iFFT Inverse Fast Fourier Transform
  • FFT Fast Fourier Transform
  • Layer 1 operations can be unrelated to the user, and related only to the encoding, resource mapping and description in the Layer 1 acceleration cloud task.
  • assigning device 202 assigns said at least one subtask to different queues according to priority setting. For example, assuming that the Layer 1 acceleration cloud task received by receiving device 201 from the job entrance is divided into 6 subtasks, assigning device 202 assigns the 6 subtasks to 6 different queues according to priority setting. Alternatively, assigning device 202 can assign the 6 subtasks to 4 different queues according to priority setting, wherein 2 of the queues receive 2 subtasks each.
  • one queue can be assigned with multiple subtasks, or can be assigned with only one subtask.
  • assigning device 202 assigns a subtask of high priority to a queue of high priority, or assigns the subtask of high priority to a corresponding queue with a high-performance worker, or assigns the subtask of high priority to a corresponding queue with a greater number of workers, or carries out full mapping on the subtask of high priority.
  • acceleration apparatus 1 further comprises a parsing device 205 (not shown) .
  • Parsing device 205 parses the data packet corresponding to said Layer 1 acceleration cloud task, and determines said priority setting. Thereafter, assigning device 202 assigns the at least one subtask to different queues according to the priority setting determined by parsing device 205.
  • parsing device 205 parses the data packet corresponding to the Layer 1 acceleration cloud task.
  • the priority setting of the Layer 1 acceleration cloud task is defined in the description file of the data packet, and parsing device 205 determines the priorities of the respective subtasks corresponding to the Layer 1 acceleration cloud task through parsing the data packet, i.e. determining which subtasks will be assigned to which queues.
  • the Layer 1 acceleration cloud task includes the data packet as execution result from Layer 2.
  • Parsing device 205 parses the data packet corresponding to the Layer 1 acceleration cloud task, and obtains information about priority setting through the description file of the data packet.
  • receiving device 201 receives a Layer 1 acceleration cloud task from the job entrance, wherein the Layer 1 acceleration cloud task contains for example 6 subtasks; parsing device 205 parses the data packet corresponding to the Layer 1 acceleration cloud task and determines the related priority setting, for example, the priority setting indicates that the 6 subtasks contained in the Layer 1 acceleration cloud task have the same priority without one of the subtasks having higher or lower priority; then assigning device 202 divides the Layer 1 acceleration cloud task into 6 subtasks and assigns the 6 subtasks to 6 different queues according to the priority setting, wherein the 6 different queues can have the same priority.
  • Triggering device 203 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
  • triggering device 203 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute according to work setting.
  • the work setting of worker 1 is to acquire tasks from queue 1, queue 2 and queue 3.
  • triggering device 203 triggers worker 1 to acquire the corresponding subtasks from queue 1, queue 2 and queue 3 and execute them according to the work setting, thereby obtaining execution results.
  • one worker can be configured to acquire tasks from one or more queues.
  • the respective queues can correspond to different priorities. For example, the abovementioned queue 1, queue 2 and queue 3 are ranked in priority from high to low. Then worker 1 can also be configured to acquire the corresponding subtasks in turn from queue 1, queue 2 and queue 3 according to priority from high to low.
  • said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
  • SOC system on chip
  • DSP digital signal processing
  • Triggering device 203 triggers at least one of these workers to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
  • Outputting device 204 outputs said execution result to a next level target.
  • outputting device 204 outputs the execution result to a next level target, or triggers the abovementioned respective workers to output their execution results to a next level target.
  • the output result of a worker is outputted to an antenna or antenna data converter.
  • their execution results are outputted to an antenna data converter, and then outputted from the antenna data converter to antennas.
  • the output result of a worker is outputted to a receiver monitoring the execution result.
  • worker N in Fig. 2 its execution result can be outputted to an antenna, or can also be outputted to a receiver.
  • said next level target includes at least one of the following:
  • RRU remote radio unit
  • next level target is merely for example and not intended to be in fact limiting of the present disclosure, and other next level targets that are now existing or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and incorporated herein by reference.
  • Figs. 3 to 6 illustrate the schematic diagrams of Layer 1 acceleration in C-RAN in accordance with another preferred embodiment of the present disclosure.
  • the Layer 1 acceleration cloud task sent by the user includes 6 subtasks.
  • the Layer 1 acceleration cloud task is divided into 6 subtasks, wherein subtask 1 and subtask 2 are assigned to queue 1, subtask 3 is assigned to queue 2, subtask 4 is assigned to queue 3, and subtask 5 and subtask 6 are assigned to queue N.
  • worker 1 acquires subtask 1 from queue 1 and executes according to work setting
  • worker 2 acquires subtask 3 from queue 2 and executes according to work setting
  • worker N acquires subtask 5 from queue N and executes.
  • worker 1 outputs execution result 1 obtained through executing subtask 1 via the antenna data converter to an antenna, and worker 1 continues to acquire subtask 2 from queue 1 and execute; worker 2 outputs execution result 3 obtained through executing subtask 3 via the antenna data converter to another antenna, and worker 2 continues to acquire subtask 4 from queue 3 and execute; worker N outputs execution result 5 obtained through executing subtask 5 to another antenna, and worker N continues to acquire subtask 6 from queue N and execute.
  • the execution result obtained includes generation of at least one new task.
  • Outputting device 204 assigns said at least one new task to at least one of said queues.
  • triggering device 203 triggers the worker to acquire the corresponding subtask from the queue and execute it, and the execution result obtained can be generation of at least one new task; outputting device 204 assigns the at least one new task to at least one of the queues, and at least one of the workers then acquires the new task from the queue and executes it.
  • worker 1 acquires subtask 2 from queue 1 and executes it, the execution result obtained being a new task 2’, and the new task 2’ is assigned to queue 1.
  • the new task 2’ can later be acquired and executed by one of the workers; of course, the new task 2’ can also be assigned to other queues, for example queue 2.
  • Other workers, such as worker 2, acquire subtask 4 from queue 3 and execute it, and output execution result 4 via the antenna data converter to an antenna.
  • Worker N acquires subtask 6 from queue N and executes, and outputs execution result 6 to a receiver monitoring the execution result.
  • acceleration apparatus 1 further comprises an adjusting device 206 (not shown) .
  • Adjusting device 206 adjusts said at least one worker’s execution support towards said queues according to the assignment conditions of the subtasks in said queues and in connection with the load conditions of said at least one worker.
  • adjusting device 206 can learn the assignment conditions of the subtasks in the respective queues and the load conditions of the respective workers. For example, adjusting device 206 learns the following: how many subtasks are assigned to the respective queues, the priorities of the respective queues, from which queues the respective workers will acquire subtasks and execute according to work setting, etc. Adjusting device 206 adjusts the respective workers’ execution support towards the queues according to the assignment conditions of the subtasks in these queues and in connection with the load conditions of the respective workers. For example, when a queue is assigned multiple subtasks and a worker is idle, adjusting device 206 can trigger that worker to acquire the subtasks in that queue and execute them, thereby relieving the load pressure on the other workers and balancing the loads among the respective workers.
  • subtasks 1 to 6 are all assigned to queue 2.
  • worker 1 and worker 2 are configured to acquire the subtasks in queue 2 and execute.
  • acceleration apparatus 1 further comprises a judging device 207 (not shown) .
  • Judging device 207 acquires the load conditions of said at least one worker, and puts said worker to sleep or shuts it down if the load of said worker is lower than a predetermined threshold.
  • judging device 207 can acquire the load conditions of the respective workers, and if the load of some worker is lower than a predetermined threshold, judging device 207 can put that worker to sleep or shut it down.
  • the predetermined threshold can be preset by the system to judge the load conditions of the worker, or can also be adjusted according to actual conditions.
  • the present disclosure may be implemented in software or a combination of software and hardware; for example, it may be implemented by an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device.
  • the software program of the present disclosure may be executed by a processor so as to implement the above steps or functions.
  • the software program of the present disclosure (including relevant data structures) may be stored in a computer readable recording medium, for example, a RAM memory, a magnetic or optical drive, a floppy disk, or similar devices.
  • some steps or functions of the present disclosure may be implemented by hardware, for example, a circuit cooperating with the processor to execute various functions or steps.
  • a part of the present invention may be implemented as a computer program product, for example, computer program instructions which, when executed by a computer, may invoke or provide the method and/or technical solution of the present invention through the operation of the computer.
  • the program instructions invoking the method of the present invention may be stored in a fixed or removable recording medium, and/or transmitted through a data stream in broadcast or other signal carrier medium, and/or stored in a working memory of a computer device running according to the program instructions.
  • one embodiment according to the present invention comprises an apparatus that includes a memory for storing computer program instructions and a processor for executing program instructions, wherein when the computer program instructions are executed by the processor, the apparatus is triggered to run the methods and/or technical solutions based on the previously mentioned multiple embodiments of the present invention.


Abstract

The objective of the present disclosure is to provide a method and apparatus for Layer 1 acceleration in C-RAN. Compared with the prior art, the present disclosure carries out the following operations: receiving a Layer 1 acceleration cloud task sent by a user; dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting; triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result; and outputting said execution result to a next level target. Layer 1 processing is thus accelerated in a unified manner that is cloudized, flexible and economical.

Description

A Method and Apparatus for Layer 1 Acceleration in C-RAN Technical Field
The present disclosure relates to the field of communication technologies, in particular relates to a technique for Layer 1 acceleration in C-RAN.
Background Art
The goal of C-RAN (Cloud Radio Access Network) is to move all baseband processing to a cloud computing environment. Part of the baseband processing, such as the control plane and management plane, can easily be moved to the cloud computing environment, because it has no strict timing requirements and does not involve specialized operations.
However, regarding the Layer 1 user plane, since operations such as FFT (Fast Fourier Transform) and iFFT (Inverse Fast Fourier Transform) are computing intensive, specialized SOC (System on Chip) /DSP (Digital Signal Processing) chips are usually employed in traditional base stations for acceleration. There are no corresponding acceleration instructions on general-purpose processors such as x86, and therefore Layer 1 user plane data operations cannot be carried out effectively.
In current C-RAN, the Layer 1 user plane is still located in the RAP (Radio Access Point), which uses DSP chips for acceleration of Layer 1 processing. However, the RAP is dedicated to a specified antenna and a specified VNF (Virtualized Network Function), and therefore cannot be shared by all users and cannot be scaled in volume. Layer 1 processing is not cloudized.
There are some ideas in the prior art about Layer 1 acceleration in a cloud environment, such as:
employing SIMD (Single Instruction Multiple Data) instructions in Xeon CPUs to optimize code;
employing Xeon Phi’s pure software to carry out Layer 1 computing;
employing DSP chips to fabricate acceleration boards and employing PCIe (Peripheral Component Interconnect Express, a bus and interface standard) interfaces to connect to cloud servers.
However, the abovementioned ideas have disadvantages in both efficiency and flexibility.
Summary of the Disclosure
The objective of the present disclosure is to provide a method and apparatus for Layer 1 acceleration in C-RAN.
According to one aspect of the present disclosure, there is provided a method for Layer 1 acceleration in C-RAN, wherein, the method comprises:
receiving a Layer 1 acceleration cloud task sent by a user;
dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting;
triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result;
outputting said execution result to a next level target.
Preferably, said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
Preferably, said next level target includes at least any one of the following:
an antenna;
an antenna data converter;
a remote radio unit (RRU) ;
a receiver monitoring said execution result;
at least one of said queues.
More preferably, said next level target includes at least one of said queues, the execution result obtained includes generation of at least one new task, and said step of outputting said execution result to a next level target comprises:
assigning said at least one new task to at least one of said queues.
Preferably, the method further comprises:
parsing the data packet corresponding to said Layer 1 acceleration cloud task and determining said priority setting.
Preferably, the method further comprises:
adjusting said at least one worker’s execution support towards said queues according to assignment conditions of the subtasks in said queues and in connection with load conditions of said at least one worker.
Preferably, the method further comprises:
acquiring load conditions of said at least one worker, and if the load of said worker is lower than a predetermined threshold, putting said worker to sleep or shutting it down.
According to another aspect of the present disclosure, there is provided an acceleration apparatus for Layer 1 acceleration in C-RAN, wherein, the acceleration apparatus comprises:
a receiving device, for receiving a Layer 1 acceleration cloud task sent by a user;
an assigning device, for dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting;
a triggering device, for triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result;
an outputting device, for outputting said execution result to a next level target.
Preferably, said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
Preferably, said next level target includes at least any one of the following:
an antenna;
an antenna data converter;
a remote radio unit (RRU) ;
a receiver monitoring said execution result;
at least one of said queues.
More preferably, said next level target includes at least one of said queues, the execution result obtained includes generation of at least one new task, and said outputting device is for:
assigning said at least one new task to at least one of said queues.
Preferably, the acceleration apparatus further comprises:
a parsing device, for parsing the data packet corresponding to said Layer 1 acceleration cloud task and determining said priority setting.
Preferably, the acceleration apparatus further comprises:
an adjusting device, for adjusting said at least one worker’s execution support towards said queues according to assignment conditions of the subtasks in said queues and in connection with load conditions of said at least one worker.
Preferably, the acceleration apparatus further comprises:
a judging device, for acquiring load conditions of said at least one worker, and if the load of said worker is lower than a predetermined threshold, putting said worker to sleep or shutting it down.
Compared with the prior art, the present disclosure carries out the following operations: receiving a Layer 1 acceleration cloud task sent by a user; dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting; triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result; and outputting said execution result to a next level target. Layer 1 processing is thus accelerated in a unified manner that is cloudized, flexible and economical.
Furthermore, through integrating current SOC/DSP chips together as a heterogeneous cloud environment, the present disclosure can establish a Layer 1 processing resource pool.
For users, a unified stateless Layer 1 processing service interface is provided, wherein Layer 1 processing can be divided into multiple small stateless jobs, such as CRC (Cyclic Redundancy Check) attachment, encoding, decoding, iFFT (Inverse Fast Fourier Transform), FFT (Fast Fourier Transform) and so on.
For antenna data, some converters are introduced, such as CPRI (Common Public Radio Interface) converters, in order to assign antenna data to different RRUs (Remote Radio Units).
Description of the Drawings
Through reading the following detailed description of the non-limiting embodiments with reference to the following drawings, other features, objectives and advantages of the present disclosure will become more obvious:
Fig. 1 illustrates the flowchart of a method for Layer 1 acceleration in C-RAN in accordance with an aspect of the present disclosure;
Fig. 2 illustrates the architectural diagram of Layer 1 acceleration in C-RAN in accordance with a preferred embodiment of the present disclosure;
Figs. 3 to 6 illustrate the schematic diagrams of Layer 1 acceleration in C-RAN in accordance with another preferred embodiment of the present disclosure;
Fig. 7 illustrates the schematic diagram of Layer 1 acceleration in C-RAN in accordance with yet another preferred embodiment of the present disclosure;
Fig. 8 illustrates the schematic diagram of Layer 1 acceleration in C-RAN in accordance with yet another preferred embodiment of the present disclosure;
Fig. 9 illustrates the schematic diagram of Layer 1 acceleration in C-RAN in accordance with yet another preferred embodiment of the present disclosure.
The same or similar reference numbers in the drawings represent the same or similar parts.
Detailed Description of the Embodiments
The present disclosure will be described in more detail in the following with reference to the drawings.
The term “base station” used here may be regarded as synonymous with, and is sometimes referred to hereinafter as, the following: Node B, evolved Node B, eNodeB, eNB, base transceiver station (BTS), RNC, etc., and may describe a transceiver communicating with a mobile terminal and providing wireless resources in a wireless communication network spanning a plurality of technical generations. Apart from the capability of implementing the method discussed above, the base station as discussed may have all functions associated with traditional well-known base stations.
The methods discussed infra may be implemented through hardware, software, firmware, middleware, microcode, hardware description language or any combination thereof. When they are implemented with software, firmware, middleware or microcode, the program code or code segment for executing essential tasks may be stored in a machine or a computer readable medium (e.g., storage medium) . (One or more) processors may implement essential tasks.
The specific structures and function details disclosed here are only representative, for the purpose of describing the exemplary embodiments of the present invention; the present invention may be specifically implemented through many alternative embodiments. Therefore, it should not be understood that the present invention is limited only to the embodiments illustrated here. It should be understood that although the terms “first” and “second” might be used here to describe respective units, these units should not be limited by these terms; use of these terms is only for distinguishing one unit from another. For example, without departing from the scope of the exemplary embodiments, the first unit may be referred to as the second unit, and similarly the second unit may be referred to as the first unit. The term “and/or” used here includes any and all combinations of one or more associated listed items.
It should be understood that when one unit is “connected” or “coupled” to a further unit, it may be directly connected or coupled to the further unit, or an intermediate unit may exist. In contrast, when a unit is “directly connected” or “directly coupled” to a further unit, no intermediate unit exists. Other terms describing a relationship between units (e.g., “disposed between” vs. “directly disposed between,” “adjacent to” vs. “immediately adjacent to,” and the like) should be interpreted in a similar manner.
The terms used here are only for describing preferred embodiments, not intended to limit exemplary embodiments. Unless otherwise indicated, singular forms “a” or “one” used here further intends to include plural forms. It should also be appreciated that the terms “comprise” and/or “include” used here prescribe existence of features, integers, steps, operations, units and/or components as stated, but do not exclude existence or addition of one or more other features, integers, steps, operations, units, components, and/or a combination thereof.
It should also be noted that in some alternative embodiments, the functions/actions as mentioned may occur in an order different from what is indicated in the drawings. For example, dependent on the functions/actions involved, two successively illustrated diagrams may be executed substantially simultaneously or in a reverse order sometimes.
Unless otherwise defined, all terms (including technical and scientific terms) used here have meanings identical to those generally understood by those skilled in the art within the field of the exemplary embodiments. It should also be understood that unless explicitly defined here, terms defined in commonly-used dictionaries should be interpreted as having meanings consistent with their context in the relevant fields, and should not be interpreted according to idealized or overly formal meanings.
Fig. 1 illustrates the flowchart of a method for Layer 1 acceleration in C-RAN in accordance with an aspect of the present disclosure.
The method includes steps S101, S102, S103 and S104.
In step S101, acceleration apparatus 1 receives a Layer 1 acceleration cloud task sent by a user.
In particular, the user sends the Layer 1 acceleration cloud task through the job entrance as shown in Fig. 2. Herein, Fig. 2 illustrates the architectural diagram of Layer 1 acceleration in C-RAN in accordance with a preferred embodiment of the present disclosure. The architecture includes therein one job entrance, a number N of queues and a number N of workers corresponding to the N queues. In addition, the outputs of the workers can go to antennas, antenna data converters, receivers, or go back into the queues.
In step S101, acceleration apparatus 1 receives the Layer 1 acceleration cloud task sent by the user through an agreed way of communication. The Layer 1 acceleration cloud task can be a simple job, such as attaching CRC (Cyclic Redundancy Check), or can be a large job, for example including all the processing from attaching CRC to OFDM (Orthogonal Frequency Division Multiplexing) signal generation. The Layer 1 acceleration cloud task can further include encoding, decoding, iFFT (Inverse Fast Fourier Transform), FFT (Fast Fourier Transform) and other jobs. The Layer 1 acceleration cloud task can be, for example, an execution result or data packet from Layer 2.
Those skilled in the art should be able to understand that the abovementioned Layer 1 acceleration cloud task is merely exemplary and not intended to limit the present disclosure; other Layer 1 acceleration cloud tasks that now exist or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and are incorporated herein by reference.
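As a concrete illustration of one such small stateless Layer 1 job, the sketch below attaches a checksum to a transport block. This is a toy example: it uses CRC-32 from the Python standard library purely as a stand-in for the CRC polynomials actually specified for Layer 1 transport blocks.

```python
import binascii

def attach_crc(payload: bytes) -> bytes:
    """Append a 32-bit CRC to a transport block.
    (CRC-32 is a stand-in for the Layer 1 CRC actually used.)"""
    crc = binascii.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_crc(block: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the appended one."""
    payload, crc = block[:-4], int.from_bytes(block[-4:], "big")
    return binascii.crc32(payload) == crc
```

Because the job depends only on the bytes passed in, not on any per-user context, it can be executed by whichever worker dequeues it.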
In step S102, acceleration apparatus 1 divides said Layer 1 acceleration cloud task into at least one subtask, and assigns said at least one subtask to different queues according to priority setting.
In particular, acceleration apparatus 1 divides the Layer 1 acceleration cloud task received from the user into at least one subtask. For example, the Layer 1 acceleration cloud task is divided into multiple small stateless jobs, such as CRC (Cyclic Redundancy Check) , encoding, decoding, iFFT (Inverse Fast Fourier Transform) , FFT (Fast Fourier Transform) and other subtasks.
The term “stateless” here means that there is no dependence on sequential order, and no dependence on whether the user has established a context. Layer 1 operations can be unrelated to the user, and related only to the encoding, resource mapping and description in the Layer 1 acceleration cloud task.
Thereafter, acceleration apparatus 1 assigns said at least one subtask to different queues according to priority setting. For example, assuming that the Layer 1 acceleration cloud task received by acceleration apparatus 1 from the job entrance is divided into 6 subtasks, acceleration apparatus 1 assigns the 6 subtasks to 6 different queues according to priority setting. Or acceleration apparatus 1 can assign the 6 subtasks to 4 different queues according to priority setting, wherein 2 of the queues hold 2 subtasks each. Herein, one queue can be assigned multiple subtasks, or only one subtask.
Herein, acceleration apparatus 1 assigns a subtask of high priority to a queue of high priority, or assigns the subtask of high priority to a corresponding queue with a high-performance worker, or assigns the subtask of high priority to a corresponding queue with a greater number of workers, or carries out full mapping on the subtask of high priority.
Those skilled in the art should be able to understand that the abovementioned divided subtasks and the way they are assigned are merely exemplary and not intended to limit the present disclosure; other subtasks and ways of assigning that now exist or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and are incorporated herein by reference.
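The division and priority-based queue assignment described above can be sketched as follows. The job names, queue count and priority rule are illustrative assumptions, not taken from the disclosure; Python deques stand in for the queues of Fig. 2.

```python
from collections import deque

# Hypothetical stateless Layer 1 jobs one cloud task might be split into.
PIPELINE = ["crc", "encode", "rate_match", "modulate", "map", "ifft"]

def divide_task(task_id: str) -> list:
    """Divide one Layer 1 acceleration cloud task into stateless subtasks."""
    return [{"task": task_id, "job": job} for job in PIPELINE]

def assign(subtasks, queues, priority_of):
    """Assign each subtask to a queue according to a priority setting.
    `priority_of` maps a job name to a queue index (0 = highest priority)."""
    for st in subtasks:
        queues[priority_of(st["job"])].append(st)

queues = [deque() for _ in range(3)]  # queue 0 has the highest priority
subtasks = divide_task("ul-slot-42")
# Assumed priority setting: CRC and iFFT jobs are urgent, the rest are not.
assign(subtasks, queues, lambda job: 0 if job in ("crc", "ifft") else 1)
```

A queue may thus end up holding several subtasks or only one, matching the 6-subtasks-into-4-queues example above.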
Preferably, the method further includes a step S105 (not shown) . In step S105, acceleration apparatus 1 parses the data packet corresponding to said Layer 1 acceleration cloud task, and determines said priority setting. Thereafter, in step S102, acceleration apparatus 1 assigns the at least one subtask to different queues according to the priority setting determined in step S105.
In particular, in step S105, acceleration apparatus 1 parses the data packet corresponding to the Layer 1 acceleration cloud task. The priority setting of the Layer 1 acceleration cloud task is defined in the description file of the data packet, and acceleration apparatus 1 determines the priorities of the respective subtasks corresponding to the Layer 1 acceleration cloud task through parsing the data packet, i.e. determining which subtasks will be assigned to which queues.
For example, the Layer 1 acceleration cloud task includes the data packet as execution result from Layer 2. In step S105, acceleration apparatus 1 parses the data packet corresponding to the Layer 1 acceleration cloud task, and obtains information about priority setting through the description file of the data packet.
For example, assuming that in step S101, acceleration apparatus 1 receives a Layer 1 acceleration cloud task from the job entrance, wherein the Layer 1 acceleration cloud task contains for example 6 subtasks; in step S105, acceleration apparatus 1 parses the data packet corresponding to the Layer 1 acceleration cloud task and determines the related priority setting, for example, the priority setting indicates that the 6 subtasks contained in the Layer 1 acceleration cloud task have the same priority without one of the subtasks having higher or lower priority; then in step S102, acceleration apparatus 1 divides the Layer 1 acceleration cloud task into 6 subtasks and assigns the 6 subtasks to 6 different queues according to the priority setting,  wherein the 6 different queues can have the same priority.
In step S103, acceleration apparatus 1 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
In particular, in step S103, acceleration apparatus 1 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting. For example, the work setting of worker 1 is to acquire tasks from queue 1, queue 2 and queue 3. Then in step S103, acceleration apparatus 1 triggers worker 1 to acquire the corresponding subtasks from queue 1, queue 2 and queue 3 and execute them according to the work setting, thereby obtaining execution results. Herein, one worker can be configured to acquire tasks from one or more queues.
The respective queues can correspond to different priorities. For example, the abovementioned queue 1, queue 2 and queue 3 are ranked in priority from high to low. Then worker 1 can also be configured to acquire the corresponding subtasks in turn from queue 1, queue 2 and queue 3 according to priority from high to low.
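A minimal sketch of such a work setting is shown below: one worker polls its assigned queues from highest to lowest priority and executes the first subtask found. The single-threaded loop and the subtask/result names are simplifying assumptions.

```python
from collections import deque

def worker_step(worker_queues, execute):
    """One scheduling step: scan this worker's queues in priority order
    (high to low) and execute the first available subtask, if any."""
    for q in worker_queues:
        if q:
            return execute(q.popleft())
    return None  # all queues assigned to this worker are empty

# Worker 1's work setting: acquire from queue 1, then queue 2, then queue 3.
q1, q2, q3 = deque(["sub-A"]), deque(["sub-B"]), deque()
results = []
while (r := worker_step([q1, q2, q3], lambda s: f"done:{s}")) is not None:
    results.append(r)
```

Because the higher-priority queue is always scanned first, subtasks in queue 1 drain before worker 1 turns to queue 2.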
Preferably, said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
Herein, through integrating current SOC and/or DSP chips together as a heterogeneous cloud environment, a Layer 1 processing resource pool can be established. From the user’s perspective, there is no need to distinguish between these SOC or DSP chips; these pieces of hardware can simply be viewed as individual workers. In step S103, acceleration apparatus 1 triggers at least one of these workers to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
In step S104, acceleration apparatus 1 outputs said execution result to a next level target.
In particular, in step S104, acceleration apparatus 1 outputs execution result to a next level target, or triggers the abovementioned respective workers to output their execution results to a next level target. For example,  the output result of a worker is outputted to an antenna or antenna data converter. As shown with worker 1 and worker 2 in Fig. 2, their execution results are outputted to an antenna data converter, and then outputted from the antenna data converter to antennas. Or the output result of a worker is outputted to a receiver monitoring the execution result. As shown with worker N in Fig. 2, its execution result can be outputted to an antenna, or can also be outputted to a receiver.
Preferably, said next level target includes at least any one of the following:
an antenna;
an antenna data converter;
a remote radio unit (RRU) ;
a receiver monitoring said execution result;
at least one of said queues.
Those skilled in the art should be able to understand that the abovementioned next level target is merely exemplary and not intended to limit the present disclosure; other next level targets that now exist or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and are incorporated herein by reference.
Figs. 3 to 6 illustrate the schematic diagrams of Layer 1 acceleration in C-RAN in accordance with another preferred embodiment of the present disclosure.
In Fig. 3, the Layer 1 acceleration cloud task sent by the user includes 6 subtasks.
In Fig. 4, the Layer 1 acceleration cloud task is divided into 6 subtasks, wherein subtask 1 and subtask 2 are assigned to queue 1, subtask 3 is assigned to queue 2, subtask 4 is assigned to queue 3, and subtask 5 and subtask 6 are assigned to queue N.
In Fig. 5, worker 1 acquires subtask 1 from queue 1 and executes according to work setting; worker 2 acquires subtask 3 from queue 2 and executes according to work setting; worker N acquires subtask 5 from queue  N and executes.
In Fig. 6, worker 1 outputs execution result 1 obtained through executing subtask 1 via the antenna data converter to an antenna, and worker 1 continues to acquire subtask 2 from queue 1 and execute; worker 2 outputs execution result 3 obtained through executing subtask 3 via the antenna data converter to another antenna, and worker 2 continues to acquire subtask 4 from queue 3 and execute; worker N outputs execution result 5 obtained through executing subtask 5 to another antenna, and worker N continues to acquire subtask 6 from queue N and execute.
Preferably, when said next level target includes at least one of said queues, the execution result obtained includes generation of at least one new task. In step S104, acceleration apparatus 1 assigns said at least one new task to at least one of said queues.
In particular, in step S103, the worker acquires the corresponding subtask from the queue and executes, and the execution result obtained can be generation of at least one new task; in step S104, acceleration apparatus 1 assigns the at least one new task to at least one of the queues, and at least one of the workers then acquires the new task from the queue and executes.
For example, as shown in Fig. 7, worker 1 acquires subtask 2 from queue 1 and executes, the execution result obtained being a new task 2’ , and the new task 2’ is assigned to queue 1. The new task 2’ later can continue to be acquired and executed by one of the workers, and of course the new task 2’ can also be assigned to other queues, for example can also be assigned to queue 2. Other workers, such as worker 2, acquire subtask 4 from queue 3 and execute, and output execution result 4 via the antenna data converter to an antenna. Worker N acquires subtask 6 from queue N and executes, and outputs execution result 6 to a receiver monitoring the execution result.
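The feedback path in which an execution result is itself a new task can be sketched as below. The rule that an "encode" subtask produces a follow-on "modulate" task is a made-up example of such chaining; in the disclosure, any subtask may generate new tasks.

```python
from collections import deque

def execute(subtask):
    """Toy execution: 'encode' yields a new task instead of a final result."""
    if subtask == "encode":
        return {"new_task": "modulate"}
    return {"result": f"{subtask}-out"}

def run_one(queue):
    """Pop, execute, and route: a new task goes back into a queue (the
    next level target is the queue itself); a final result is returned."""
    out = execute(queue.popleft())
    if "new_task" in out:
        queue.append(out["new_task"])
        return None
    return out["result"]

q = deque(["encode"])
antenna = []  # stand-in for the antenna data converter output
while q:
    r = run_one(q)
    if r is not None:
        antenna.append(r)
```

Here the new task is re-enqueued on the same queue, as with new task 2’ in Fig. 7, but it could equally be assigned to a different queue.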
Preferably, the method further includes a step S106 (not shown) . In step S106, acceleration apparatus 1 adjusts said at least one worker’s execution support towards said queues according to the assignment conditions of the subtasks in said queues and in connection with the load conditions of said at least one worker.
In particular, in step S106, acceleration apparatus 1 can learn the assignment conditions of the subtasks in the respective queues and the load conditions of the respective workers. For example, acceleration apparatus 1 learns the following: how many subtasks are assigned to the respective queues, the priorities of the respective queues, from which queues the respective workers will acquire subtasks and execute according to work setting, etc. Acceleration apparatus 1 adjusts the respective workers’ execution support towards the queues according to the assignment conditions of the subtasks in these queues and in connection with the load conditions of the respective workers. For example, when a queue is assigned multiple subtasks and a worker is idle, acceleration apparatus 1 can trigger that worker to acquire the subtasks in that queue and execute them, thereby relieving the load pressure on the other workers and balancing the loads among the respective workers.
For example, as shown in Fig. 8, subtasks 1 to 6 are all assigned to queue 2. According to the load conditions of the respective workers, worker 1 and worker 2 are configured to acquire the subtasks in queue 2 and execute.
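The rebalancing decision of step S106 can be sketched as below, assuming a simple model in which each worker reports a numeric load and each queue reports its depth; the dictionaries and the `load == 0` idleness test are illustrative simplifications, not part of the disclosure.

```python
def rebalance(queue_depths, worker_loads):
    """Step S106 sketch: point each idle worker at the currently deepest
    queue, relieving pressure on the busy workers."""
    additions = {}
    for worker, load in worker_loads.items():
        if load == 0 and queue_depths:            # worker is idle
            busiest = max(queue_depths, key=queue_depths.get)
            additions[worker] = busiest           # extend its work setting
    return additions

# Fig. 8 scenario: all six subtasks sit in queue 2; workers 1 and 2 are idle.
depths = {"queue1": 0, "queue2": 6, "queueN": 0}
loads = {"worker1": 0, "worker2": 0, "workerN": 3}
assert rebalance(depths, loads) == {"worker1": "queue2", "worker2": "queue2"}
```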
Preferably, the method further includes a step S107 (not shown). In step S107, acceleration apparatus 1 acquires the load conditions of said at least one worker, and puts said worker to sleep or shuts it down if the load of said worker is lower than a predetermined threshold.
In particular, in step S107, acceleration apparatus 1 can acquire the load conditions of the respective workers, and if the load of a worker is lower than a predetermined threshold, acceleration apparatus 1 can put that worker to sleep or shut it down. Herein, the predetermined threshold, used to judge the load condition of a worker, can be preset by the system or adjusted according to actual conditions.
For example, as shown in Fig. 9, when the load of worker N is lower than the predetermined threshold, worker N is put to sleep or shut down.
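The threshold test of step S107 amounts to a per-worker power decision; a minimal sketch, with illustrative load values and action labels:

```python
def manage_power(worker_loads, threshold):
    """Step S107 sketch: workers whose load falls below the predetermined
    threshold are put to sleep (or could be shut down); others keep running."""
    return {worker: ("sleep" if load < threshold else "run")
            for worker, load in worker_loads.items()}

# Fig. 9 scenario: worker N's load drops below the threshold.
actions = manage_power({"worker1": 5, "worker2": 4, "workerN": 1}, threshold=2)
assert actions["workerN"] == "sleep"
assert actions["worker1"] == "run"
```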
According to another aspect of the present disclosure, there is provided an acceleration apparatus for Layer 1 acceleration in C-RAN, wherein the acceleration apparatus 1 comprises: a receiving device 201 (not shown), for receiving a Layer 1 acceleration cloud task sent by a user; an assigning device 202 (not shown), for dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting; a triggering device 203 (not shown), for triggering at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result; and an outputting device 204 (not shown), for outputting said execution result to a next level target.
Receiving device 201 receives a Layer 1 acceleration cloud task sent by a user.
In particular, the user sends the Layer 1 acceleration cloud task through the job entrance shown in Fig. 2. Herein, Fig. 2 illustrates the architectural diagram of Layer 1 acceleration in C-RAN in accordance with a preferred embodiment of the present disclosure. The architecture includes one job entrance, a number N of queues, and N workers corresponding to the N queues. In addition, the outputs of the workers can go to antennas, antenna data converters, or receivers, or go back into the queues.
Receiving device 201 receives the Layer 1 acceleration cloud task sent by the user through an agreed way of communication. The Layer 1 acceleration cloud task can be a simple job, such as attaching a CRC (Cyclic Redundancy Check), or a large job, for example including all the processing from attaching a CRC to OFDM (Orthogonal Frequency Division Multiplexing) signal generation. The Layer 1 acceleration cloud task can further include encoding, decoding, iFFT (Inverse Fast Fourier Transform), FFT (Fast Fourier Transform) and other jobs. The input to the Layer 1 acceleration cloud task can be, for example, an execution result or a data packet from Layer 2.
Those skilled in the art should understand that the abovementioned Layer 1 acceleration cloud task is merely exemplary and not intended to limit the present disclosure, and that other Layer 1 acceleration cloud tasks that now exist or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and are incorporated herein by reference.
Assigning device 202 divides said Layer 1 acceleration cloud task into at least one subtask, and assigns said at least one subtask to different queues according to priority setting.
In particular, assigning device 202 divides the Layer 1 acceleration cloud task received from the user into at least one subtask. For example, the Layer 1 acceleration cloud task is divided into multiple small stateless jobs, such as CRC (Cyclic Redundancy Check), encoding, decoding, iFFT (Inverse Fast Fourier Transform), FFT (Fast Fourier Transform) and other subtasks.
The term “stateless” here means that there is no dependence on sequential order, and no dependence on whether the user has established a context. Layer 1 operations can be unrelated to the user, and related only to the encoding, resource mapping and description in the Layer 1 acceleration cloud task.
Thereafter, assigning device 202 assigns said at least one subtask to different queues according to priority setting. For example, assuming that the Layer 1 acceleration cloud task received by receiving device 201 from the job entrance is divided into 6 subtasks, assigning device 202 can assign the 6 subtasks to 6 different queues according to priority setting. Alternatively, assigning device 202 can assign the 6 subtasks to 4 different queues according to priority setting, wherein 2 of the queues each receive 2 subtasks. Herein, one queue can be assigned multiple subtasks, or only one subtask.
Herein, assigning device 202 assigns a subtask of high priority to a queue of high priority, or assigns the subtask of high priority to a corresponding queue with a high-performance worker, or assigns the subtask of high priority to a corresponding queue with a greater number of workers, or carries out full mapping on the subtask of high priority.
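One way to realize the 6-subtasks-into-4-queues example above is a priority-ordered round-robin: both subtasks and queues are listed highest-priority first, and overflow wraps back to the front. The wrap-around policy is an illustrative choice for this sketch, not one fixed by the disclosure.

```python
def assign(subtasks, queues):
    """Assigning-device sketch: subtask i goes to queue i mod len(queues),
    so high-priority subtasks land in high-priority queues and overflow
    wraps around to the front of the queue list."""
    assignment = {q: [] for q in queues}
    for i, sub in enumerate(subtasks):
        assignment[queues[i % len(queues)]].append(sub)
    return assignment

subs = [f"subtask{i}" for i in range(1, 7)]
out = assign(subs, ["q1", "q2", "q3", "q4"])   # 6 subtasks into 4 queues
assert out["q1"] == ["subtask1", "subtask5"]   # two queues receive 2 subtasks
assert out["q2"] == ["subtask2", "subtask6"]
assert out["q3"] == ["subtask3"]
```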
Those skilled in the art should understand that the abovementioned divided subtasks and the ways of assigning them are merely exemplary and not intended to limit the present disclosure, and that other subtasks and ways of assigning that now exist or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and are incorporated herein by reference.
Preferably, acceleration apparatus 1 further comprises a parsing device 205 (not shown). Parsing device 205 parses the data packet corresponding to said Layer 1 acceleration cloud task, and determines said priority setting. Thereafter, assigning device 202 assigns the at least one subtask to different queues according to the priority setting determined by parsing device 205.
In particular, parsing device 205 parses the data packet corresponding to the Layer 1 acceleration cloud task. The priority setting of the Layer 1 acceleration cloud task is defined in the description file of the data packet, and parsing device 205 determines the priorities of the respective subtasks corresponding to the Layer 1 acceleration cloud task through parsing the data packet, i.e. determining which subtasks will be assigned to which queues.
For example, the Layer 1 acceleration cloud task includes the data packet as execution result from Layer 2. Parsing device 205 parses the data packet corresponding to the Layer 1 acceleration cloud task, and obtains information about priority setting through the description file of the data packet.
For example, assume that receiving device 201 receives a Layer 1 acceleration cloud task from the job entrance, and that the Layer 1 acceleration cloud task contains, for example, 6 subtasks. Parsing device 205 parses the data packet corresponding to the Layer 1 acceleration cloud task and determines the related priority setting; for example, the priority setting indicates that the 6 subtasks have the same priority, with no subtask having a higher or lower priority than another. Assigning device 202 then divides the Layer 1 acceleration cloud task into 6 subtasks and assigns them to 6 different queues according to the priority setting, wherein the 6 queues can have the same priority.
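If the description file is taken to be, say, a JSON document carried alongside the Layer 2 payload — an assumed format, since the disclosure does not fix one — the parsing step might look like this; the `description` key and subtask names are hypothetical.

```python
import json

def parse_priorities(packet):
    """Parsing-device sketch: read the description file carried in the
    data packet and return a subtask -> priority mapping."""
    description = json.loads(packet["description"])
    return dict(description)

# Six subtasks, all declared with the same priority (the example above).
packet = {"description": json.dumps({f"subtask{i}": 1 for i in range(1, 7)}),
          "payload": b"...layer-2 data..."}
priorities = parse_priorities(packet)
assert len(priorities) == 6
assert len(set(priorities.values())) == 1   # no subtask ranks above another
```

The assigning device would then map equal-priority subtasks to equal-priority queues, as described above.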
Triggering device 203 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
In particular, triggering device 203 triggers at least one worker to acquire the corresponding subtask from at least one queue and execute it according to work setting. For example, the work setting of worker 1 is to acquire tasks from queue 1, queue 2 and queue 3. Triggering device 203 then triggers worker 1 to acquire the corresponding subtasks from queue 1, queue 2 and queue 3 and execute them according to the work setting, thereby obtaining execution results. Herein, one worker can be configured to acquire tasks from one or more queues.
The respective queues can correspond to different priorities. For example, the abovementioned queue 1, queue 2 and queue 3 are ranked in priority from high to low. Worker 1 can then be configured to acquire the corresponding subtasks from queue 1, queue 2 and queue 3 in turn, in order of priority from high to low.
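The priority-ordered acquisition just described can be sketched as a worker polling its configured queues from highest to lowest priority and taking the first subtask it finds; the work setting is modeled simply as an ordered list of queues.

```python
import queue

def acquire_next(work_setting):
    """Work-setting sketch: poll the queues from highest to lowest
    priority and return the first subtask found, or None if all are empty."""
    for q in work_setting:           # list is ordered high -> low priority
        try:
            return q.get_nowait()
        except queue.Empty:
            continue
    return None

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
q2.put("subtask3")
q3.put("subtask4")
# Worker 1 serves q1 > q2 > q3: q1 is empty, so q2's subtask comes first.
assert acquire_next([q1, q2, q3]) == "subtask3"
assert acquire_next([q1, q2, q3]) == "subtask4"
assert acquire_next([q1, q2, q3]) is None
```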
Preferably, said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
Herein, by integrating current SOC and/or DSP chips together as a heterogeneous cloud environment, a Layer 1 processing resource pool can be established. From the user’s perspective, there is no need to distinguish between these SOC or DSP chips; these pieces of hardware can simply be viewed as individual workers. Triggering device 203 triggers at least one of these workers to acquire the corresponding subtask from at least one queue and execute it according to work setting, thereby obtaining an execution result.
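The pooling idea can be sketched as a uniform worker record over heterogeneous backends; the `backend` field and the names below are illustrative, and the point is that the user-facing scheduler never inspects which chip backs a worker.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Worker:
    name: str
    backend: str  # "soc" or "dsp" -- an internal detail, hidden from the user

def make_pool(chips):
    """Integrate heterogeneous SOC/DSP chips into one Layer 1 resource pool
    of uniformly addressable workers."""
    return [Worker(f"worker{i + 1}", kind) for i, kind in enumerate(chips)]

pool = make_pool(["soc", "dsp", "dsp"])
# The scheduler addresses workers uniformly, regardless of backing chip.
assert [w.name for w in pool] == ["worker1", "worker2", "worker3"]
```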
Outputting device 204 outputs said execution result to a next level target.
In particular, outputting device 204 outputs the execution result to a next level target, or triggers the abovementioned workers to output their execution results to a next level target. For example, the output result of a worker is outputted to an antenna or an antenna data converter. As shown with worker 1 and worker 2 in Fig. 2, their execution results are outputted to an antenna data converter, and then from the antenna data converter to antennas. Alternatively, the output result of a worker is outputted to a receiver monitoring the execution result. As shown with worker N in Fig. 2, its execution result can be outputted to an antenna, or to a receiver.
Preferably, said next level target includes at least any one of the following:
an antenna;
an antenna data converter;
a remote radio unit (RRU) ;
a receiver monitoring said execution result;
at least one of said queues.
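A dispatch over these next level targets can be sketched as follows; the target encodings (strings for the converter and receiver, a plain list standing in for a queue) are illustrative stand-ins for the real interfaces.

```python
def route_result(result, target):
    """Outputting-device sketch: deliver an execution result to its
    next level target (antenna data converter, receiver, or a queue)."""
    if target == "antenna_converter":
        return ("to_antenna", result)   # converter forwards on to an antenna
    if target == "receiver":
        return ("monitored", result)    # receiver monitors the result
    if isinstance(target, list):        # target is one of the queues
        target.append(result)           # re-enqueue as a new task
        return ("requeued", result)
    raise ValueError(f"unknown target: {target}")

q = []
assert route_result("result4", "antenna_converter") == ("to_antenna", "result4")
assert route_result("task2'", q) == ("requeued", "task2'")
assert q == ["task2'"]
```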
Those skilled in the art should understand that the abovementioned next level targets are merely exemplary and not intended to limit the present disclosure, and that other next level targets that now exist or might later come into being, if applicable to the present disclosure, should also be included within the protection scope of the present disclosure and are incorporated herein by reference.
Figs. 3 to 6 illustrate the schematic diagrams of Layer 1 acceleration in C-RAN in accordance with another preferred embodiment of the present disclosure.
In Fig. 3, the Layer 1 acceleration cloud task sent by the user includes 6 subtasks.
In Fig. 4, the Layer 1 acceleration cloud task is divided into 6 subtasks, wherein subtask 1 and subtask 2 are assigned to queue 1, subtask 3 is assigned to queue 2, subtask 4 is assigned to queue 3, and subtask 5 and subtask 6 are assigned to queue N.
In Fig. 5, worker 1 acquires subtask 1 from queue 1 and executes it according to work setting; worker 2 acquires subtask 3 from queue 2 and executes it according to work setting; worker N acquires subtask 5 from queue N and executes it.
In Fig. 6, worker 1 outputs execution result 1, obtained through executing subtask 1, via the antenna data converter to an antenna, and continues by acquiring subtask 2 from queue 1 and executing it; worker 2 outputs execution result 3, obtained through executing subtask 3, via the antenna data converter to another antenna, and continues by acquiring subtask 4 from queue 3 and executing it; worker N outputs execution result 5, obtained through executing subtask 5, to another antenna, and continues by acquiring subtask 6 from queue N and executing it.
Preferably, when said next level target includes at least one of said queues, the execution result obtained includes generation of at least one new task. Outputting device 204 assigns said at least one new task to at least one of said queues.
In particular, triggering device 203 triggers the worker to acquire the corresponding subtask from the queue and execute it, and the execution result obtained can be the generation of at least one new task; outputting device 204 assigns the at least one new task to at least one of the queues, and at least one of the workers then acquires the new task from the queue and executes it.
For example, as shown in Fig. 7, worker 1 acquires subtask 2 from queue 1 and executes it, the execution result obtained being a new task 2’, which is assigned to queue 1. The new task 2’ can later be acquired and executed by one of the workers; of course, the new task 2’ can also be assigned to another queue, for example queue 2. Other workers proceed as before: worker 2 acquires subtask 4 from queue 3, executes it, and outputs execution result 4 via the antenna data converter to an antenna; worker N acquires subtask 6 from queue N, executes it, and outputs execution result 6 to a receiver monitoring the execution result.
Preferably, acceleration apparatus 1 further comprises an adjusting device 206 (not shown). Adjusting device 206 adjusts said at least one worker’s execution support towards said queues according to the assignment conditions of the subtasks in said queues and in connection with the load conditions of said at least one worker.
In particular, adjusting device 206 can learn the assignment conditions of the subtasks in the respective queues and the load conditions of the respective workers. For example, adjusting device 206 learns the following: how many subtasks are assigned to the respective queues, the priorities of the respective queues, from which queues the respective workers will acquire subtasks according to their work settings, etc. Adjusting device 206 then adjusts the respective workers’ execution support towards the queues according to the assignment conditions of the subtasks in these queues and in connection with the load conditions of the respective workers. For example, when a queue is assigned multiple subtasks and a worker is idle, adjusting device 206 can trigger that worker to acquire and execute the subtasks in that queue, thereby relieving the load pressure on the other workers and balancing the load among them.
For example, as shown in Fig. 8, subtasks 1 to 6 are all assigned to queue 2. According to the load conditions of the respective workers, worker 1 and worker 2 are configured to acquire the subtasks in queue 2 and execute.
Preferably, acceleration apparatus 1 further comprises a judging device 207 (not shown). Judging device 207 acquires the load conditions of said at least one worker, and puts said worker to sleep or shuts it down if the load of said worker is lower than a predetermined threshold.
In particular, judging device 207 can acquire the load conditions of the respective workers, and if the load of a worker is lower than a predetermined threshold, judging device 207 can put that worker to sleep or shut it down. Herein, the predetermined threshold, used to judge the load condition of a worker, can be preset by the system or adjusted according to actual conditions.
For example, as shown in Fig. 9, when the load of worker N is lower than the predetermined threshold, worker N is put to sleep or shut down.
It should be noted that the present disclosure may be implemented in software or in a combination of software and hardware; for example, it may be implemented by an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In an embodiment, the software program of the present disclosure may be executed by a processor so as to implement the above steps or functions. Likewise, the software program of the present disclosure (including relevant data structures) may be stored in a computer readable recording medium, for example, a RAM memory, a magnetic or optical drive, a floppy disk, or similar devices. Besides, some steps or functions of the present disclosure may be implemented by hardware, for example, a circuit cooperating with the processor to execute various functions or steps.
Besides, a part of the present disclosure may be embodied as a computer program product, for example, computer program instructions which, when executed by a computer, may through the operation of the computer invoke or provide the method and/or technical solution of the present disclosure. The program instructions invoking the method of the present disclosure may be stored in a fixed or removable recording medium, and/or transmitted through a data stream in a broadcast or other signal carrier medium, and/or stored in a working memory of a computer device running according to the program instructions. Here, an embodiment according to the present disclosure comprises an apparatus that includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to run the methods and/or technical solutions of the multiple embodiments mentioned previously.
To those skilled in the art, it is apparent that the present disclosure is not limited to the details of the above exemplary embodiments, and that the present disclosure may be implemented in other forms without departing from its spirit or basic features. Thus, the embodiments should in every respect be regarded as exemplary rather than limitative; the scope of the present disclosure is defined by the appended claims rather than by the above description. All variations falling within the meaning and scope of equivalence of the claims are therefore intended to be covered by the present disclosure. No reference sign in the claims should be regarded as limiting the claim involved. Besides, it is apparent that the term “comprise/comprising/include/including” does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or means stated in the apparatus claims may also be implemented by a single unit or means through software or hardware. Terms such as “first” and “second” are used to indicate names and do not indicate any particular sequence.

Claims (14)

  1. A method for Layer 1 acceleration in C-RAN, wherein, the method comprises:
    receiving a Layer 1 acceleration cloud task sent by a user;
    dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting;
    triggering at least one worker to acquire the corresponding subtask from at least one queue and execute according to work setting and thereby obtaining execution result;
    outputting said execution result to a next level target.
  2. The method according to Claim 1, wherein, said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
  3. The method according to Claim 1 or 2, wherein, said next level target includes at least any one of the following:
    an antenna;
    an antenna data converter;
    a remote radio unit (RRU) ;
    a receiver monitoring said execution result;
    at least one of said queues.
  4. The method according to Claim 3, wherein, said next level target includes at least one of said queues, the execution result obtained includes generation of at least one new task, and said step of outputting said execution result to a next level target comprises:
    assigning said at least one new task to at least one of said queues.
  5. The method according to any one of Claims 1 to 4, wherein, the method further comprises:
    parsing data packet corresponding to said Layer 1 acceleration cloud task and determining said priority setting.
  6. The method according to any one of Claims 1 to 5, wherein, the method further comprises:
    adjusting said at least one worker’s execution support towards said queues according to assignment conditions of the subtasks in said queues and in connection with load conditions of said at least one worker.
  7. The method according to any one of Claims 1 to 6, wherein, the method further comprises:
    acquiring load conditions of said at least one worker, and if the load of said worker is lower than a predetermined threshold, putting said worker to sleep or shutting it down.
  8. An acceleration apparatus for Layer 1 acceleration in C-RAN, wherein, the acceleration apparatus comprises:
    a receiving device, for receiving a Layer 1 acceleration cloud task sent by a user;
    an assigning device, for dividing said Layer 1 acceleration cloud task into at least one subtask, and assigning said at least one subtask to different queues according to priority setting;
    a triggering device, for triggering at least one worker to acquire the corresponding subtask from at least one queue and execute according to work setting and thereby obtaining execution result;
    an outputting device, for outputting said execution result to a next level target.
  9. The acceleration apparatus according to Claim 8, wherein, said at least one worker is obtained through integrating system on chip (SOC) and/or digital signal processing (DSP) chips.
  10. The acceleration apparatus according to Claim 8 or 9, wherein, said next level target includes at least any one of the following:
    an antenna;
    an antenna data converter;
    a remote radio unit (RRU) ;
    a receiver monitoring said execution result;
    at least one of said queues.
  11. The acceleration apparatus according to Claim 10, wherein, said next level target includes at least one of said queues, the execution result obtained includes generation of at least one new task, and said outputting device is for:
    assigning said at least one new task to at least one of said queues.
  12. The acceleration apparatus according to any one of Claims 8 to 11, wherein, the acceleration apparatus further comprises:
    a parsing device, for parsing data packet corresponding to said Layer 1 acceleration cloud task and determining said priority setting.
  13. The acceleration apparatus according to any one of Claims 8 to 12, wherein, the acceleration apparatus further comprises:
    an adjusting device, for adjusting said at least one worker’s execution support towards said queues according to assignment conditions of the subtasks in said queues and in connection with load conditions of said at least one worker.
  14. The acceleration apparatus according to any one of Claims 8 to 13, wherein, the acceleration apparatus further comprises:
    a judging device, for acquiring load conditions of said at least one worker, and if the load of said worker is lower than a predetermined threshold, putting said worker to sleep or shutting it down.
PCT/CN2019/100937 2018-08-17 2019-08-16 A method and apparatus for layer 1 acceleration in c-ran WO2020035043A1 (en)

Applications Claiming Priority (2)

Priority: CN201810941435.4, filed 2018-08-17; published as CN110838990A, “Method and device for accelerating layer1 in C-RAN”.

Publication: WO2020035043A1, published 2020-02-20.


