CN104123185A - Resource scheduling method, device and system - Google Patents


Info

Publication number
CN104123185A
CN104123185A (application CN201310157089.8A)
Authority
CN
China
Prior art keywords
service
scheduling unit
sub
resources
environment
Prior art date
Legal status
Pending
Application number
CN201310157089.8A
Other languages
Chinese (zh)
Inventor
段然
陈奎林
黄金日
Current Assignee
China Mobile Communications Group Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201310157089.8A priority Critical patent/CN104123185A/en
Publication of CN104123185A publication Critical patent/CN104123185A/en
Pending legal-status Critical Current

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a resource scheduling method. The method includes: setting a service scheduling unit and an environment scheduling unit; dividing a service to be executed into one or more sub-services; and, in the process of executing the sub-services of the service, when a corresponding resource needs to be used, having the service scheduling unit notify the environment scheduling unit to allocate the needed resource. The invention further discloses a resource scheduling device and system. By means of the resource scheduling method, device and system, resources can be effectively and dynamically scheduled when real-time signals are processed.

Description

Resource scheduling method, device and system
Technical Field
The present invention relates to a technology for processing real-time signals by using General Purpose Processors (GPP), and in particular, to a resource scheduling method, apparatus, and system based on GPP.
Background
With the rise of energy and electricity prices in recent years, mobile network operators worldwide face increasingly severe cost pressure, and acquiring station sites and machine rooms is becoming more difficult. Most mainstream operators simultaneously own two or three networks of different communication standards, so a large number of base stations must be deployed to guarantee network coverage and service quality. The contradiction between the relative scarcity of site and machine-room resources and the growing number of base stations cannot be resolved in the near term, and has become an unavoidable problem for operators. Meanwhile, intense competition in the telecommunications market has caused the Average Revenue Per User (ARPU) to grow slowly or even decline, severely diminishing the profitability of mobile operators. Reduced operator income in turn compresses investment in network construction and equipment purchases, affecting the development of the whole industry. In view of this situation, and for the sake of sustained profit and long-term development of the industry, the mobile communication industry has proposed a new green radio access network architecture, C-RAN, to guide the development of future centralized baseband processing network technology.
The C-RAN system mainly comprises three parts: a distributed wireless network consisting of remote radio frequency units (RRUs) and antennas; an optical transmission network with high bandwidth and low delay; and a centralized baseband processing pool. The centralized baseband processing pool is composed of a plurality of baseband units (BBUs) concentrated at one physical site; all BBUs and RRUs are connected through the high-bandwidth, low-delay optical transmission network, and the BBUs within the pool are cross-connected with one another. Implementing the centralized baseband processing pool requires base station virtualization techniques, in particular the virtual allocation and combination of physical resources and computing power to support the pool.
This can be regarded as centralized, integrated processing of existing BBUs, which effectively realizes carrier load balancing and disaster-recovery backup while improving equipment utilization, reducing the number of base-station machine rooms, and lowering energy consumption.
In existing technical solutions for implementing the centralized baseband processing pool, a traditional architecture of Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs) is mostly adopted to process baseband signals, and additional switching equipment is used to implement In-phase/Quadrature-phase (I/Q) data exchange between multiple BBUs. However, a hardware architecture with a DSP as the baseband processing core inevitably imposes corresponding software architecture characteristics: since the operating system of a DSP is relatively simple, when multiple cores in the DSP cooperatively process signals, the operating system has difficulty allocating processing resources automatically, so DSP software architecture is often designed around a fixed timing diagram. Thus, when the system configuration changes, for example during a soft upgrade from Time Division-Synchronous Code Division Multiple Access (TD-SCDMA) to Time Division Duplex Long Term Evolution (TDD-LTE), the DSP software configuration must often be modified offline and the system restarted. Moreover, when the DSP software configuration combination is complex, the fixed timing design brings extremely high complexity to the software design, greatly increasing the cost of upgrading and maintaining DSP software.
In addition, when multiple DSPs cooperate to complete signal processing, signal interaction between DSPs and signal interaction between cores within a DSP are performed in different manners. In this case, maintaining the bottom-layer information interaction is relatively complicated, because the signal transmission delay and the operating state of the peer DSP must additionally be considered during cooperative processing.
In order to overcome the defects of the DSP + FPGA architecture, a technical scheme for realizing the centralized baseband processing pool based on GPP has been proposed, which supports multiple standards in a unified way through software radio technology. The adopted GPP has two advantages: first, good backward compatibility, which benefits smooth evolution of the network system; second, mature modes and methods exist for communication among multiple GPP cores, the operating system is suited to automated task allocation, and the requirements on the programming model are relatively simple.
However, at present, a problem remains in implementing the centralized baseband processing pool function: when processing real-time digital signals, the traditional mode of letting the operating system automatically allocate core resources degrades processing efficiency.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a method, an apparatus and a system for scheduling resources, which can effectively implement dynamic scheduling of resources when processing real-time signals.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the invention provides a resource scheduling method, which is provided with a service scheduling unit and an environment scheduling unit; the method further comprises the following steps:
dividing the service to be executed into more than one sub-service; and, in the process of executing each sub-service of the service, when a corresponding resource needs to be used, the service scheduling unit notifying the environment scheduling unit to allocate the needed resource.
In the above scheme, the dividing the service to be executed into more than one sub-services includes:
dividing the service into more than one sub-service capable of operating independently;
determining parameter information required by each sub-service;
and determining the association relation among the sub-services.
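The division step above (sub-services, their parameter information, and their association relations) can be sketched as a small dependency graph; the class and field names below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each sub-service records its required parameter
# information (configuration and input/output interface) and its association
# relations, modeled here as dependencies on other sub-services.
@dataclass
class SubService:
    name: str
    config: dict                     # configuration information
    io_interface: dict               # input/output interface information
    depends_on: list = field(default_factory=list)  # association relations

def execution_order(sub_services):
    """Topologically order sub-services so each runs after its dependencies."""
    order, done = [], set()

    def visit(s):
        if s.name in done:
            return
        for dep in s.depends_on:
            visit(dep)
        done.add(s.name)
        order.append(s.name)

    for s in sub_services:
        visit(s)
    return order

front_end = SubService("front_end", {"antennas": 2}, {"in": "iq", "out": "fft"})
symbol = SubService("symbol", {"rb": 40}, {"in": "fft", "out": "llr"}, [front_end])
bit = SubService("bit", {"cb": 3}, {"in": "llr", "out": "bits"}, [symbol])
print(execution_order([bit, symbol, front_end]))  # ['front_end', 'symbol', 'bit']
```

Modeling the association relations explicitly lets a scheduler derive a valid execution order instead of relying on a fixed timing diagram.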
In the above scheme, the dividing the service into more than one sub-services capable of operating independently is:
and dividing the service into more than one sub-service capable of operating independently according to a strategy that the coupling correlation between the divided sub-services is small.
In the above solution, before executing each sub-service of the service, the method further includes:
determining a trigger mechanism of an environment scheduling unit according to the starting condition and the processing time delay of each sub-service;
correspondingly, the service scheduling unit informs the environment scheduling unit of allocating the required resource according to the determined trigger mechanism of the environment scheduling unit.
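A minimal sketch of such a trigger mechanism, under illustrative assumptions: a trigger is built from a sub-service's start condition and its estimated processing delay, and fires a notification to the environment scheduling unit once the condition is met (the 14-symbols-received condition and all names here are made-up examples):

```python
# Illustrative sketch (names are assumptions, not from the patent): the service
# scheduling unit checks a sub-service's start condition and, when it is met,
# notifies the environment scheduling unit, passing the sub-service's estimated
# processing delay so resources can be allocated in time.
def make_trigger(start_condition, processing_delay_us):
    """Build a trigger that fires the environment scheduler when ready."""
    def trigger(state, notify):
        if start_condition(state):
            notify(processing_delay_us)
            return True
        return False
    return trigger

notified = []
trigger = make_trigger(lambda s: s["symbols_received"] == 14, 94)
trigger({"symbols_received": 7}, notified.append)    # condition not met yet
trigger({"symbols_received": 14}, notified.append)   # fires: notify env scheduler
print(notified)  # [94]
```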
In the above scheme, the method further comprises:
in the process of executing each sub-service of the service, the service scheduling unit performs interface adaptation with the algorithm module of the module resource to complete the processing of each sub-service.
In the foregoing solution, when allocating the required resource, the method further includes:
and the environment scheduling unit allocates the required resources according to the use condition of the resources and the use time slice of the required resources.
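One way this allocation rule could look, as a hedged sketch: each resource exposes the busy time already queued on it (its use condition), and the environment scheduling unit picks the least-loaded resource whose remaining time slice can still absorb the requested work. All names and the selection policy are illustrative assumptions:

```python
# A minimal sketch, assuming each resource (e.g. a processing core) reports the
# busy time already queued on it. The required work is accepted only where it
# still fits inside the resource's use time slice.
def allocate(usage_us, required_us, time_slice_us):
    """Return the index of the least-loaded resource that can still fit the
    required work inside its time slice, or None if no resource can."""
    candidates = [i for i, u in enumerate(usage_us)
                  if u + required_us <= time_slice_us]
    if not candidates:
        return None
    return min(candidates, key=lambda i: usage_us[i])

# Core 0 has 120 us queued, core 1 has 60 us; a 94 us task fits only on core 1
# within a 200 us time slice.
print(allocate([120, 60], 94, 200))   # 1
print(allocate([180, 150], 94, 200))  # None: no core has room left
```

Allocating directly from queue state in this way avoids handing the decision to the operating system's generic task scheduler.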
The invention also provides a resource scheduling device, which comprises: a service scheduling unit and an environment scheduling unit; wherein,
a service scheduling unit, configured to, in the process of executing each sub-service of a service divided into more than one sub-service, notify the environment scheduling unit when a corresponding resource needs to be used; and
the environment scheduling unit, configured to allocate the required resources after receiving the notification from the service scheduling unit.
In the foregoing solution, the environment scheduling unit is specifically configured to: and when the required resources are distributed, distributing the required resources according to the use condition of the resources and the use time slices of the required resources.
The invention also provides a resource scheduling system, which comprises: an application subsystem, platform resources, hardware resources, underlying resources, and module resources; the application subsystem further comprises a resource scheduling means, the resource scheduling means comprising: a service scheduling unit and an environment scheduling unit; wherein,
a service scheduling unit, configured to, in the process of executing each sub-service of a service divided into more than one sub-service, notify the environment scheduling unit when a corresponding resource needs to be used; and
the environment scheduling unit, configured to allocate the required resources after receiving the notification from the service scheduling unit.
In the above solution, the platform resource, the hardware resource, and the bottom layer resource constitute a resource of a resource scheduling system, which is used for being controlled by the environment scheduling unit.
In the foregoing solution, the environment scheduling unit is specifically configured to: and when the required resources are distributed, distributing the required resources according to the use condition of the resources and the use time slices of the required resources.
In the above solution, the module resource further includes an algorithm module; the service scheduling unit is further configured to perform interface adaptation with the algorithm module during execution of each sub-service of the service, so as to complete processing of each sub-service.
The resource scheduling method, device and system provided by the invention set a service scheduling unit and an environment scheduling unit. The service to be executed is divided into more than one sub-service; in the process of executing each sub-service, when a corresponding resource needs to be used, the service scheduling unit notifies the environment scheduling unit to allocate the needed resource. Service scheduling and resource scheduling are thus separated, and the environment scheduling unit allocates resources uniformly, so dynamic scheduling of resources can be effectively realized when real-time signals are processed. Moreover, because service scheduling and resource scheduling are separated, resources can be shared to the maximum extent.
In addition, the service is divided into more than one sub-service capable of running independently according to the strategy that the coupling correlation between the divided sub-services is small, so that the environment scheduling unit can schedule resources more flexibly.
In the invention, in the process of executing each sub-service of the service, the service scheduling unit performs interface adaptation with the algorithm module of the module resource to complete the processing of each sub-service. The service scheduling unit is not concerned with the communication mechanism or the mutual timing relations among processing resources; it only calls the algorithm module, that is, it only completes the adaptation of the algorithm interface of a given air interface standard and the optimization of signal processing. This improves code reusability, simplifies software upgrades, and benefits smooth evolution of the system.
In the invention, in the process of executing each sub-service of the service, the environment scheduling unit maintains the task queue of each processing resource in real time according to the determined maintenance mechanism, thus maximally ensuring the recoverability of the system abnormity and having higher reliability.
In the invention, the environment scheduling unit releases unused processing resources in the process of executing each sub-service of the service, thus effectively saving energy.
In the invention, the environment scheduling unit allocates the required resources according to the use condition of the resources and the use time slice of the required resources, thereby reducing the extra overhead of task scheduling of the operating system of the GPP platform, and effectively improving the processing efficiency.
Drawings
FIG. 1 is a schematic diagram of a system architecture for implementing a centralized baseband processing pool function based on GPP;
FIG. 2 is a flowchart illustrating a resource scheduling method according to the present invention;
fig. 3 is a schematic diagram of a sub-service divided by a PUSCH according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating dynamic sharing of processing resources among processing cores according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram of a resource scheduling apparatus according to the present invention;
FIG. 6 is a schematic diagram of a resource scheduling system according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic diagram of a system architecture for implementing a centralized baseband processing pool function based on GPP. As shown in fig. 1, a complete system for implementing this function includes: an application subsystem, platform resources, hardware resources, bottom-layer resources, and module resources. The platform resources are GPP platforms; together with the hardware resources they form the software and hardware development platform on which the whole system is built, and this part varies with the GPP platform used by the system (e.g. different CPUs). The bottom-layer resources, including the operating system, encapsulate all hardware resources and most software resources of the development platform. The application subsystem and the module resources are the main components of the whole system: the application subsystem can change with different platform resources and system requirements, various air interface standards can be mixed, and the module resources can be smoothly transplanted among different platform resources. In fig. 1, within the same resource hierarchy, a resource unit located above depends on the resource units located below it; specifically, the application subsystem depends on the module resources and on the bottom-layer resources, the module resources depend on the bottom-layer resources, and the bottom-layer resources depend on the hardware resources.
Because the GPP needs higher processing efficiency when processing the real-time signal, the processing efficiency is affected by the traditional processing mode of automatically allocating the core resource based on the operating system, and based on this, the resource scheduling method of the present invention, as shown in fig. 2, includes the following steps:
step 200: setting a service scheduling unit and an environment scheduling unit;
here, in practical application, the specific implementation of this step means that the scheduling logic for implementing scheduling in the whole real-time signal processing software is divided into a service scheduling logic and an environment scheduling logic.
Step 201: dividing the service to be executed into more than one sub-service, and in the process of executing each sub-service of the service, when the corresponding resource is needed to be used, the service scheduling unit informs the environment scheduling unit to allocate the needed resource.
Here, the dividing of the service to be executed into more than one sub-service specifically includes:
dividing the service into more than one sub-service capable of operating independently;
determining parameter information required by each sub-service;
and determining the association relation among the sub-services.
The dividing of the service into more than one sub-service capable of operating independently includes:
and dividing the service into more than one sub-service capable of operating independently according to a policy that the coupling between the divided sub-services is small. Further, the division policy may also require that each sub-service have a single function and moderate processing complexity.
The parameter information includes: configuration information and input/output interface information.
Before executing each sub-service of the service, the method may further include:
determining a trigger mechanism of an environment scheduling unit according to the starting condition and the processing time delay of each sub-service;
correspondingly, the service scheduling unit informs the environment scheduling unit of allocating the required resource according to the determined trigger mechanism of the environment scheduling unit.
The method may further comprise:
in the process of executing each sub-service of the service, the service scheduling unit performs interface adaptation with the algorithm module of the module resource to complete the processing of each sub-service.
Here, the interface adaptation between the service scheduling unit and the algorithm module of the module resource means: and the service scheduling unit completes the adaptation of an algorithm interface of a certain air interface standard.
In allocating the required resources, the method may further include:
and the environment scheduling unit allocates the required resources according to the use condition of the resources and the use time slice of the required resources.
The method may further comprise:
and in the process of executing each sub-service of the service, the environment scheduling unit maintains the task queue of each processing resource in real time according to the determined maintenance mechanism.
Wherein, the processing resource refers to a resource used by each sub-service.
The real-time maintenance of the task queue of each processing resource by the environment scheduling unit comprises the following steps:
when execution of a sub-service times out, or a communication fault occurs in a processing resource, the environment scheduling unit handles the abnormality; here, how the environment scheduling unit specifically handles an execution timeout or a communication fault is a technical means commonly used by those skilled in the art, and is not described in detail herein.
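The maintenance pass over the task queues could be sketched as follows; the timeout rule, field names, and example values are illustrative assumptions, not specified by the patent:

```python
# Hedged sketch of the maintenance mechanism: the environment scheduling unit
# walks the task queue of each processing resource and flags tasks whose
# execution time has exceeded a multiple of the estimated delay (timeout) or
# whose resource reported a communication fault, so they can be recovered.
def maintain(task_queues, now, timeout_factor=2.0):
    """Return (core, task) pairs needing recovery (timeout or comm fault)."""
    to_recover = []
    for core, tasks in task_queues.items():
        for task in tasks:
            timed_out = now - task["start"] > timeout_factor * task["delay_us"]
            if timed_out or task.get("comm_fault"):
                to_recover.append((core, task["name"]))
    return to_recover

queues = {
    "core1": [{"name": "fft_ant0", "start": 0, "delay_us": 94}],
    "core2": [{"name": "sym_ue2", "start": 0, "delay_us": 200,
               "comm_fault": True}],
}
print(maintain(queues, now=150))  # [('core2', 'sym_ue2')]
```

Running such a pass periodically is one way to keep the queues consistent and make abnormalities recoverable.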
The method may further comprise:
and in the process of executing each sub-service of the service, the environment scheduling unit releases unused processing resources, for example by down-clocking unused processing cores to save energy; the specific implementation of releasing unused processing resources is familiar to those skilled in the art, and is not described further herein.
The present invention will be described in further detail with reference to examples.
Example one
In this embodiment, a Long Term Evolution (LTE) system is taken as an example to describe a preparation work for implementing the resource scheduling method of the present invention, and the preparation work mainly includes the following steps:
step a: dividing the service into more than one sub-service capable of operating independently;
in this embodiment, a processing flow of processing a physical layer signal by a baseband is divided into more than one sub-service capable of operating independently;
here, in the division, the criteria are that the coupling correlation between the divided sub-services is small, the function is single, and the processing complexity is appropriate.
Step b: determining parameter information required by each sub-service, and respectively packaging each divided sub-service into service scheduling instances capable of independently operating;
here, the parameter information includes: configuration information and input/output interface information.
Step c: determining the incidence relation among all service scheduling instances;
specifically, a processing flow diagram of data among the service scheduling instances and a mutual cooperation relationship among the service scheduling instances are drawn.
Step d: determining a triggering mechanism of the environment scheduling unit according to the condition for triggering and starting each service scheduling instance and the processing time delay of each service scheduling instance;
here, for example, according to the condition for triggering each service scheduling instance and the processing delay of each service scheduling instance, the determined trigger mechanism of the environment scheduling unit is: when the condition for triggering a certain service scheduling instance is met, the environment scheduling unit is triggered.
Step e: determining an environment scheduling implementation and maintenance mechanism of an environment scheduling unit;
here, the environment scheduling implementation mechanism of the determined environment scheduling unit is: and the environment scheduling unit judges a target core issued by the currently processed sub-service according to the service scheduling example currently executed by each processing core and the service scheduling example to be executed by each processing core, and allocates the currently processed sub-service to the core which can complete the related processing in time for execution.
The determined maintenance mechanism is: maintaining the task queues of the processing cores in real time, and handling abnormalities of the issued service scheduling instances, such as time-slice timeout or communication fault, thereby ensuring the robustness of the environment scheduling unit.
Step f: determining an energy-saving mechanism of an environment scheduling unit;
specifically, a part of unused processing cores are subjected to frequency-reducing energy-saving processing.
Example two
In order to better illustrate the preparation work for implementing the resource scheduling method of the present invention in the first embodiment, the present embodiment takes an uplink processing flow of LTE as an example for description; the uplink processing flow of LTE is a main part of the physical layer processing of an evolved node B (eNB).
First, the division of the sub-services and the determination of the scheduling trigger mechanism of each sub-service are described.
The uplink processing flow of LTE may include: front-end processing, Physical Random Access Channel (PRACH) processing, Sounding Reference Signal (SRS) processing, channel estimation, symbol (Symbol) processing, and bit (Bit) processing. Front-end processing and PRACH processing can be considered independent of the User Equipment (UE) configuration, while SRS processing, channel estimation, Symbol processing, and Bit processing are directly related to the UE configuration, and the number of scheduled UEs determines the processing load of these modules. Therefore, when dividing sub-services, the processed channel objects can be taken as the granularity, and each type of channel can be further divided into sub-services. The following describes channel-based sub-service division in detail, taking the Physical Uplink Shared Channel (PUSCH) as an example. Fig. 3 is a schematic diagram of the sub-services divided from the PUSCH; as shown in fig. 3, the divided sub-services include: a front-end sampling level processing sub-service, a Symbol level processing sub-service, and a Bit level processing sub-service. The front-end sampling level processing sub-service includes operations such as down-sampling and Fast Fourier Transform (FFT), which are irrelevant to the UE configuration of the current subframe, so the load can be balanced across antennas: down-sampling is processed as a single task, and each FFT module processes 2 antennas. The data to be processed by the front-end sampling level sub-service is antenna port data, the output is the FFT result, and the scheduling trigger mechanism is: a single Orthogonal Frequency Division Multiplexing (OFDM) symbol has been fully received.
For the PUSCH, the Symbol level processing sub-service mainly includes: channel estimation, demodulation, Inverse Discrete Fourier Transform (IDFT), frequency domain equalization, and so on, where the processing target is the set of Resource Blocks (RBs) configured for each UE; the processing complexity of the Symbol level sub-service is approximately linear in the RB block size and the number of blocks processed. Therefore, division can take the RB group as the granularity: the data to be processed by the Symbol level sub-service is the RB resource blocks allocated to a single UE, and the output is the demodulated result of that UE's data. The scheduling trigger mechanism is: the OFDM symbols of a single subframe have been processed, and the scheduling configuration information of each UE in the current subframe is available. The Bit level processing sub-service includes operations such as descrambling, rate de-matching, and channel decoding, and its processing complexity is directly related to the size and number of coding blocks. Since the processing delay of each UE can be dynamically predicted from the size of its transport block and the Signal-to-Noise Ratio (SNR) of the current channel, this sub-service is designed with a coding block (CB, Code Block) as the processing unit. The data to be processed by the Bit level sub-service is the demodulated information of a single coding block of a single UE, and the output is the decoded result.
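The RB-group and CB granularity described above can be sketched as follows; the UE configurations and field names are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch: Symbol-level processing is cut into one task per UE's
# RB allocation, and Bit-level processing into one task per code block (CB),
# matching the per-UE / per-CB granularity described in the text.
def divide_pusch_tasks(ue_configs):
    symbol_tasks = [("symbol", ue, cfg["rb"]) for ue, cfg in ue_configs.items()]
    bit_tasks = [("bit", ue, cb) for ue, cfg in ue_configs.items()
                 for cb in range(cfg["cb_count"])]
    return symbol_tasks, bit_tasks

ues = {"UE1": {"rb": 20, "cb_count": 1}, "UE2": {"rb": 40, "cb_count": 2}}
sym_tasks, bit_tasks = divide_pusch_tasks(ues)
print(sym_tasks)       # [('symbol', 'UE1', 20), ('symbol', 'UE2', 40)]
print(len(bit_tasks))  # 3
```

Choosing the CB as the Bit-level unit keeps every task's delay individually predictable, which is what lets the scheduler pack tasks onto cores.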
The processing delay of each sub-service is estimated as follows:
table 1 processing delay for front-end sampling stage processing of sub-services
PUSCH(us) 2 aerial 4 aerial
Front-end sample level processing 94 191
Table 2 processing delay of Symbol level processing sub-service
Symbol level processing (us) 20RB 40RB 80RB 100RB
Channel estimation 18 34.8 58 71
Frequency domain equalization processing 10.5 18 34 42
IDFT 6.7 12.1 21.4 26.7
Demodulation 4.1 9 16 19
Total of 39.3 73.9 129.4 158.8
TABLE 3 processing delay for Bit level processing of sub-services
Bit level processing (us) 5504bits 6144bits
Turbo code decoding 132(8Iter) 150(8Iter)
HARQ combining 11.4 12
Total 143.4 162
Secondly, the setting and the function of the environment scheduling unit are described, which specifically includes: time sequence relation arrangement, interface data transmission and the like.
In practical application, the environment scheduling logic corresponding to the environment scheduling unit may be bound to the main core, which is responsible for querying the service processing state and allocating processing tasks, while the other cores mainly process the sub-services of the service. During setup, the environment scheduling unit can schedule sub-services according to key information of each sub-service, such as the number of processing antennas, the size of the target frequency-domain data in RBs, the CB block size, and the SNR; from this information the processing delay of each sub-service can be estimated, so sub-services can be reasonably allocated to the cores that process them.
With environment scheduling separated from service scheduling, processing can be arranged flexibly. As shown in fig. 4, assume the environment scheduling logic is loaded on the main core, called the scheduling core, which acts as the environment scheduling unit responsible for querying sub-service processing states and allocating sub-services; the other two cores, called service core 1 and service core 2, are responsible for processing the sub-services of the service. After a subframe begins, in the process of executing the front-end sampling level processing sub-service, once the environment scheduling unit (the environment scheduling logic carried on the main core) receives the instruction of the service scheduling unit, service core 1 is allocated to process the front-end sampling level sub-service of antenna 0 (Ant0) and antenna 1 (Ant1), and service core 2 is allocated to process that of antenna 2 (Ant2) and antenna 4 (Ant4); the processing delay is 94 us, and after service core 1 and service core 2 complete their processing, an indication that the front-end sampling level sub-service is completed is returned to the environment scheduling unit. In the process of executing the Symbol level processing sub-service, after the environment scheduling unit receives the instruction of the service scheduling unit, service core 1 is allocated to process 20 RB of UE1 and 20 RB of UE3, and service core 2 is allocated to process 40 RB of UE2; the processing delay is 72 us, and after service core 1 and service core 2 complete their processing, an indication that the Symbol level processing sub-service is completed is returned to the environment scheduling unit.
Finally, the bit-level sub-service is executed: after the environment scheduling unit receives the instruction of the service scheduling unit, service core 1 is allocated to process the CBs of UE1 and service core 2 the CBs of UE3, while the CBs of UE2 are split between service core 1 and service core 2, in other words the two cores process the CBs of UE2 together; the processing delay is 800 us. When both cores finish, an indication that the bit-level sub-service is complete is returned to the environment scheduling unit, and the processing of all sub-services is thus realized. In fig. 4, different fill patterns mark the processing associated with UE1, UE2, and UE3 respectively.
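The staged pipeline of fig. 4 can be replayed as a minimal sketch. The per-stage delays (94 us, 72 us, 800 us) and core assignments are the ones quoted above; the function itself, and the assumption that cores within a stage run fully in parallel, are illustrative.

```python
# Illustrative replay of the fig. 4 example: three pipeline stages
# (front-end sampling, symbol level, bit level), each split across two
# service cores; the next stage starts only after both cores report
# completion back to the environment scheduling unit.

STAGES = [
    ("front-end sampling", {"core1": ["Ant0", "Ant1"],
                            "core2": ["Ant2", "Ant4"]}, 94),
    ("symbol level",       {"core1": ["UE1:20RB", "UE3:20RB"],
                            "core2": ["UE2:40RB"]}, 72),
    ("bit level",          {"core1": ["UE1 CBs", "UE2 CBs (part)"],
                            "core2": ["UE3 CBs", "UE2 CBs (part)"]}, 800),
]

def run_subframe(stages):
    """Return (completed stage names, total delay in us), assuming stages
    run sequentially while the cores inside a stage run in parallel."""
    total = 0
    done = []
    for name, alloc, delay_us in stages:
        assert all(alloc.values())     # every core was given work
        total += delay_us              # parallel cores: stage delay, not sum
        done.append(name)              # cores signal completion upward
    return done, total
```

Under these assumptions the subframe's total processing delay is simply the sum of the three stage delays, since each stage is a synchronization barrier.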
As can be seen from the above description, based on the method of the present invention, in the system implementing the centralized baseband processing pool function based on GPP shown in fig. 1, the scheduling module comprises a service scheduling unit and an environment scheduling unit, and its functions mainly include resource allocation and interface adaptation. From the perspective of resource allocation and digital processing, the processing of the service scheduling unit relates only to a specific air interface system and does not involve allocating tasks to the processing cores, whereas the environment scheduling unit is responsible for allocating processing tasks to the processing cores so as to meet the real-time requirements specified by the air interface protocol. After this division, the service scheduling unit can focus on the algorithm interface adaptation and processing flow of a given air interface system and is decoupled from the specific server, while the environment scheduling unit must consider the processing time and allocation principle of each sub-service in the scheduling flow, as well as the clock frequency of the platform resource, the processor cores, the selected operating system, and the like. Here the server refers to the server where the service scheduling unit is located, that is, the hardware device that carries the service scheduling logic corresponding to the service scheduling unit.
Specifically, the service scheduling unit interfaces with the module resource, and its main functions include adapting the algorithm module interfaces and applying to the environment scheduling unit for resources to be used by the algorithm modules. The service scheduling unit is not allowed to use hardware resources directly, nor any memory space outside the stack; when it needs hardware resources and/or memory space, these must first be allocated by the environment scheduling unit before they can be used.
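The constraint just stated can be sketched as two cooperating classes: the service scheduling unit holds no resources of its own and must obtain every grant from the environment scheduling unit. All class and method names are illustrative, not from the patent.

```python
# Minimal sketch of the resource-ownership rule described above: the service
# scheduling unit never touches hardware or heap memory directly; it must
# request an allocation from the environment scheduling unit and release it
# when the sub-service finishes. Names and the memory model are assumptions.

class EnvironmentScheduler:
    def __init__(self, total_mem):
        self.free_mem = total_mem
        self.grants = {}

    def allocate(self, owner, size):
        if size > self.free_mem:
            return None                      # request denied
        self.free_mem -= size
        self.grants[owner] = self.grants.get(owner, 0) + size
        return ("mem", owner, size)          # opaque grant handle

    def release(self, owner):
        self.free_mem += self.grants.pop(owner, 0)

class ServiceScheduler:
    def __init__(self, env):
        self.env = env                       # the only path to resources

    def run_sub_service(self, name, mem_needed):
        grant = self.env.allocate(name, mem_needed)
        if grant is None:
            return False                     # cannot proceed without a grant
        # ... call the algorithm module using the granted memory ...
        self.env.release(name)
        return True
```

The service scheduler only sees opaque grant handles, which is what lets the same service scheduling logic run unmodified on any GPP platform, as the next paragraph notes.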
Therefore, in practical application, the service scheduling logic corresponding to the service scheduling unit can be used on any GPP platform and operating system, being bound only to the upper-layer application it implements. Different air interface protocols, such as the Global System for Mobile communications (GSM), TD-SCDMA, and TDD-LTE, may have different service scheduling logics, which can be executed together in the same service environment; when a new processor or operating system appears, the service scheduling logic can be ported to the new GPP platform in the form of a lib library without modification.
Specifically, the functions that the service scheduling unit may perform include: event management, configuration management, service data statistics, and data anomaly protection. Event management comprises the creation and release of algorithm instances, the initiation and termination of events such as measurement events, and the invocation of algorithm instances. Configuration management comprises distributing the configuration information of each algorithm instance under different signaling or background configurations, and managing the configuration of executed services. Service data statistics comprise statistics of service data to be reported, such as traffic, and secondary statistics of certain service data requiring them. Data anomaly protection comprises the handling of various data anomalies, such as data mismatch.
The environment scheduling unit interfaces with all resources required to complete the service. Its main functions include interface adaptation of the platform resources, the underlying resources, and the hardware resources, and the allocation and management of resources, including time slices. The environment scheduling unit is not concerned with the specific upper-layer service, only with the usage of resources. Because of this, the environment scheduling unit needs to control all resources, and therefore a dedicated Application Programming Interface (API) must be provided to the upper-layer applications, i.e. the service scheduling unit and the algorithm modules, so that they can apply for resources, including external accelerators and time slices, when needed, and the environment scheduling unit allocates these resources accordingly.
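A sketch of what such an API might expose for the two resource kinds named above, external accelerators and time slices. The method names, the set-based accelerator pool, and the budget model are all assumptions made for illustration.

```python
# Hypothetical sketch of the dedicated API mentioned above: the environment
# scheduling unit owns all resources and exposes apply/free calls through
# which the service scheduling unit or an algorithm module obtains external
# accelerators and time slices. Every name here is an assumption.

class ResourceApi:
    def __init__(self, accelerators, slice_budget_us):
        self.free_accels = set(accelerators)   # idle external accelerators
        self.slice_budget_us = slice_budget_us # remaining time-slice budget

    def apply_accelerator(self):
        """Grant one idle accelerator, or None if all are busy."""
        return self.free_accels.pop() if self.free_accels else None

    def free_accelerator(self, accel):
        self.free_accels.add(accel)

    def apply_time_slice(self, us):
        """Grant a time slice only if it fits in the remaining budget."""
        if us > self.slice_budget_us:
            return 0                           # cannot grant past the budget
        self.slice_budget_us -= us
        return us
```

The point of funneling every request through one object is exactly the property the text describes: the environment scheduling unit sees the usage of all resources at once and can refuse grants that would break real-time budgets.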
In principle, the environment scheduling unit manually implements part of an operating system's service-management functions. This is mainly motivated by the special real-time requirements that the radio protocol physical layer places on algorithm processing, for example the need to minimize task switching and to bind a single thread to a single core.
Specifically, the functions that the environment scheduling unit may perform include: message management, timing management, task management, communication management, timing and communication anomaly protection, and running-state statistics. Message management comprises managing a message queue to drive the execution of sub-services. Timing management comprises applying the allocated time slices to the usage timing of the various resources and the service scheduling unit. Task management comprises configuring and managing the operating-system facilities related to the sub-services, such as interrupts, semaphores, and mailboxes. Communication management comprises the configuration and management of the various communications among cores inside a CPU, between CPUs, and across servers. Timing and communication anomaly protection comprises protection against various timing and communication anomalies, such as a sub-service exceeding its execution time slice or data being lost in transmission. Running-state statistics comprise counting various data of CPU operation, including the number of message triggers, successes, and failures, to facilitate debugging.
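Two of these duties, the message queue that drives sub-service execution and the time-slice overrun protection, can be combined in one short sketch. The sub-service names and slice values are illustrative.

```python
# Sketch of two environment-scheduling duties named above: a message queue
# that drives sub-service execution, and timing anomaly protection that
# flags any sub-service whose reported execution time exceeded its slice.
# Names and values are illustrative assumptions.

from collections import deque

def drive(queue, time_slices_us):
    """Drain the message queue, 'executing' each sub-service, and collect
    the names of sub-services that overran their allotted time slice."""
    executed = []
    overruns = []
    while queue:
        name, reported_us = queue.popleft()
        executed.append(name)                 # message drives execution
        if reported_us > time_slices_us.get(name, float("inf")):
            overruns.append(name)             # timing anomaly detected
    return executed, overruns
```

In a real scheduler the overrun list would feed the anomaly-protection path (e.g. dropping or re-planning the offending sub-service); here it is simply returned for inspection.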
In summary, the resource scheduling method provided by the present invention separates service scheduling from resource scheduling, with the environment scheduling unit allocating resources uniformly, so that dynamic scheduling of resources can be realized effectively when real-time signals are processed. Moreover, since service scheduling and resource scheduling are separated, the environment scheduling logic need not be limited to a single air interface protocol; for example, LTE and TD-SCDMA services may be executed together in the same service environment. In other words, during the service processing of LTE and TD-SCDMA, when the number of users borne by one of them is low, the environment scheduling logic can schedule resources to the service with the larger number of users.
In addition, when dividing the sub-services, the service is divided into more than one independently runnable sub-service according to a strategy that keeps the coupling between the divided sub-services small, so that the environment scheduling unit can schedule resources more flexibly.
In addition, in the process of executing the sub-services of the service, the environment scheduling unit maintains the task queue of each processing resource in real time according to the determined maintenance mechanism. The service scheduling unit does not need to care about the communication mechanism or the timing relations among the processing cores; it only calls the algorithm modules, i.e. it only completes the adaptation of the algorithm interfaces of a given air interface system and the optimization of signal processing. This improves code reusability, simplifies software upgrades, and facilitates the smooth evolution of the system.
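The per-resource task queues can be sketched as below. The maintenance policy shown (retry a failed task at the head of its queue, report idle cores so they can be released) is an assumption illustrating how real-time queue maintenance supports both recoverability and the resource release mentioned later.

```python
# Sketch of real-time task-queue maintenance, one queue per processing core.
# The re-queue-on-failure and idle-release policies are illustrative
# assumptions, not taken from the patent text.

class CoreQueues:
    def __init__(self, cores):
        self.queues = {c: [] for c in cores}

    def enqueue(self, core, task):
        self.queues[core].append(task)

    def complete(self, core):
        """Pop the finished task at the head of the core's queue."""
        return self.queues[core].pop(0) if self.queues[core] else None

    def requeue_on_failure(self, core, task):
        # Retrying at the head of the queue aids recoverability: the failed
        # sub-service runs again before newer work is started.
        self.queues[core].insert(0, task)

    def idle_cores(self):
        """Cores with empty queues, candidates for release/reassignment."""
        return [c for c, q in self.queues.items() if not q]
```

Because the environment scheduler alone mutates these queues, it always has an accurate picture of what each core is doing, which is the basis for the recoverability claim in the next paragraph.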
Because the environment scheduling unit maintains the task queue of each processing resource in real time, the recoverability of the system from anomalies is guaranteed to the maximum extent, giving higher reliability.
Since resources are allocated by the environment scheduling unit, the extra overhead of task scheduling by the operating system of the GPP platform is reduced, effectively improving processing efficiency.
In order to implement the above method, the present invention further provides a resource scheduling apparatus, as shown in fig. 5, the apparatus includes a service scheduling unit 51 and an environment scheduling unit 52; wherein,
a service scheduling unit 51, configured to notify the environment scheduling unit 52 when a corresponding resource needs to be used in the process of executing each sub-service of a service divided into more than one sub-service;
the environment scheduling unit 52 is configured to allocate the required resources after receiving the notification from the service scheduling unit 51.
The environment scheduling unit 52 is specifically configured to: when allocating the required resources, allocate them according to the usage of the resources and the usage time slices of the required resources.
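The allocation rule just stated, grant a resource only if it is currently free and its remaining time budget covers the requested time slice, can be sketched as a single function. The field names and first-fit policy are illustrative assumptions.

```python
# Sketch of the allocation rule above: pick the first resource that has
# enough free capacity AND enough remaining time budget for the requested
# time slice. All field names and the first-fit policy are assumptions.

def allocate(resources, need_units, need_slice_us):
    """resources: list of dicts with 'name', 'free' capacity units, and
    'budget_us' remaining time budget. Returns the granting resource's
    name, or None if no resource can satisfy the request."""
    for r in resources:
        if r["free"] >= need_units and r["budget_us"] >= need_slice_us:
            r["free"] -= need_units
            r["budget_us"] -= need_slice_us   # charge the time slice
            return r["name"]
    return None
```

Checking both dimensions (capacity and time slice) is what distinguishes this from plain capacity-based allocation: a free core whose remaining budget in the current subframe is too small is skipped.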
The environment scheduling unit 52 is further configured to maintain the task queue of each processing resource in real time according to the determined maintenance mechanism during the process of executing each sub-service of the service.
The environment scheduling unit 52 is further configured to release unused processing resources in the process of executing each sub-service of the service.
Based on the above resource scheduling apparatus, the present invention further provides a resource scheduling system; as shown in fig. 6, the system includes: an application subsystem 61, platform resources 62, hardware resources 63, underlying resources 64, and module resources 65. The application subsystem 61 comprises a resource scheduling device 661, which comprises: a service scheduling unit 6611 and an environment scheduling unit 6612; wherein,
a service scheduling unit 6611, configured to notify the environment scheduling unit 6612 when a corresponding resource needs to be used in the process of executing each sub-service of a service divided into more than one sub-service;
the environment scheduling unit 6612 is configured to allocate the required resource after receiving the notification from the service scheduling unit.
The functions of the platform resource 62, the hardware resource 63, the bottom layer resource 64, and the module resource 65 are the same as those of the corresponding modules in fig. 1, and are not described herein again. Here, the platform resource 62 is a GPP platform.
The environment scheduling unit 6612 is specifically configured to: when allocating the required resources, allocate them according to the usage of the resources and the usage time slices of the required resources.
The environment scheduling unit 6612 is further configured to maintain the task queue of each processing resource in real time according to the determined maintenance mechanism during the execution of each sub-service of the service.
The environment scheduling unit 6612 is further configured to release unused processing resources in the process of executing each sub-service of the service.
The module resource 65 further includes an algorithm module 651, and the service scheduling unit 6611 is further configured to perform interface adaptation with the algorithm module 651 during the execution of each sub-service of the service, so as to complete the processing of each sub-service.
The platform resources 62, the hardware resources 63, and the underlying resources 64 constitute the resources of the resource scheduling system, to be controlled by the environment scheduling unit 6612.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (12)

1. A resource scheduling method is characterized in that a service scheduling unit and an environment scheduling unit are arranged; the method further comprises the following steps:
dividing the service to be executed into more than one sub-service, and in the process of executing each sub-service of the service, when a corresponding resource needs to be used, the service scheduling unit notifies the environment scheduling unit to allocate the required resource.
2. The method of claim 1, wherein dividing the service to be executed into more than one sub-services comprises:
dividing the service into more than one sub-service capable of operating independently;
determining parameter information required by each sub-service;
and determining the association relation among the sub-services.
3. The method according to claim 2, wherein the dividing the service into more than one sub-services capable of operating independently is:
dividing the service into more than one sub-service capable of operating independently according to a strategy that keeps the coupling between the divided sub-services small.
4. A method according to claim 1, 2 or 3, characterized in that before executing each sub-service of said service, the method further comprises:
determining a trigger mechanism of an environment scheduling unit according to the starting condition and the processing time delay of each sub-service;
correspondingly, the service scheduling unit informs the environment scheduling unit of allocating the required resource according to the determined trigger mechanism of the environment scheduling unit.
5. A method according to claim 1, 2 or 3, characterized in that the method further comprises:
in the process of executing each sub-service of the service, the service scheduling unit performs interface adaptation with the algorithm module of the module resource to complete the processing of each sub-service.
6. A method according to claim 1, 2 or 3, wherein in allocating the required resources, the method further comprises:
the environment scheduling unit allocates the required resources according to the usage of the resources and the usage time slices of the required resources.
7. An apparatus for scheduling resources, the apparatus comprising: a service scheduling unit and an environment scheduling unit; wherein,
a service scheduling unit, configured to notify the environment scheduling unit when a corresponding resource needs to be used in the process of executing each sub-service of a service divided into more than one sub-service;
and the environment scheduling unit, configured to allocate the required resources after receiving the notification from the service scheduling unit.
8. The apparatus according to claim 7, wherein the environment scheduling unit is specifically configured to: when allocating the required resources, allocate them according to the usage of the resources and the usage time slices of the required resources.
9. A system for scheduling resources, the system comprising: an application subsystem, platform resources, hardware resources, underlying resources, and module resources; wherein the application subsystem further comprises a resource scheduling device, the resource scheduling device comprising: a service scheduling unit and an environment scheduling unit; wherein,
a service scheduling unit, configured to notify the environment scheduling unit when a corresponding resource needs to be used in the process of executing each sub-service of a service divided into more than one sub-service;
and the environment scheduling unit, configured to allocate the required resources after receiving the notification from the service scheduling unit.
10. The system of claim 9, wherein the platform resources, the hardware resources, and the underlying resources comprise resources of a resource scheduling system for control by the environment scheduling unit.
11. The system according to claim 9 or 10, wherein the environment scheduling unit is specifically configured to: when allocating the required resources, allocate them according to the usage of the resources and the usage time slices of the required resources.
12. The system of claim 9 or 10, wherein the module resources further comprise an algorithm module; the service scheduling unit is further configured to perform interface adaptation with the algorithm module during execution of each sub-service of the service, so as to complete processing of each sub-service.

