CN106658520B - Method and system for constructing task processing path - Google Patents

Method and system for constructing task processing path

Info

Publication number
CN106658520B
CN106658520B (application CN201611244893.XA)
Authority
CN
China
Prior art keywords
processing unit
processing
task
destination address
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611244893.XA
Other languages
Chinese (zh)
Other versions
CN106658520A (en)
Inventor
田霖
卓蕊潋
周一青
石晶林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201611244893.XA priority Critical patent/CN106658520B/en
Publication of CN106658520A publication Critical patent/CN106658520A/en
Application granted granted Critical
Publication of CN106658520B publication Critical patent/CN106658520B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W16/00 Network planning, e.g. coverage or traffic planning tools; Network deployment, e.g. resource partitioning or cells structures
    • H04W16/02 Resource partitioning among network components, e.g. reuse partitioning
    • H04W16/10 Dynamic resource partitioning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/50 Address allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention provides a method for constructing a task processing path from multiple processing resource pools. The method comprises the following steps: determining a task processing path based on task requirements, wherein the task processing path comprises processing units selected from the plurality of resource pools; searching for the corresponding destination address of each processing unit based on the task processing path; and sending a notification message to the processing unit, wherein the notification message contains the destination address to which the data stream of the processing unit is to be sent. The method of the invention can efficiently construct a task processing path on demand and thereby complete the configuration of the switching path used to transmit the data stream.

Description

Method and system for constructing task processing path
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and a system for constructing a task processing path.
Background
A conventional cellular network adopts a vertical network architecture model in which the resources of each cell are allocated in a vertically independent manner. As shown in fig. 1a, in GSM, TD-SCDMA, and LTE base stations, a protocol processing unit, a baseband processing unit, and a radio frequency processing unit are combined into a cell, and resources are allocated using the peak load of the cell as its capacity index. Under this architecture, processing resources cannot be shared between different base stations, while the real-time load of each cell is highly non-uniform in both time and space. Idle resources of one base station are usually unavailable to others, so the utilization rate of network resources is low, and the growing demand for mobile data can only be met by deploying base stations ever more densely.
To improve resource utilization and reduce the operating cost of mobile networks, research has turned to horizontally shared network architectures, as shown in fig. 1b. In this architecture, resources are pooled horizontally: the various types of base station resources form processing resource pools that are managed and controlled in a unified way so that they can be allocated flexibly. For example, with a general multimode base station architecture, the GSM, TD-SCDMA, and LTE resources can be treated as unified resource pools, e.g., a radio frequency processing resource pool (radio frequency processing units 1, 2, and 3), a baseband processing resource pool, and a protocol processing resource pool. A cell is then formed by dynamically combining a radio frequency processing resource, a baseband processing resource, and a protocol processing resource, thereby covering the target area. This approach improves the utilization of network resources, and the existing infrastructure can be reused simply by loading new configurations on, or upgrading, the corresponding processing units, meeting the needs of network upgrading and evolution.
However, in current horizontally shared network architectures, mature interconnection network designs are generally adopted between the processing units of the resource pools. As shown in fig. 2, the DSP processing units in the baseband processing resource pool and in different chassis are interconnected through Serial RapidIO (sRIO), the radio frequency processing resource pool and the baseband processing resource pool are interconnected through the CPRI (Common Public Radio Interface) protocol, and the baseband processing resource pool and the protocol processing resource pool are interconnected through optical fiber or Ethernet. In this example, if a task processing path (i.e., a transmission path of a data stream) runs from radio frequency processing unit 2 (RRU2 for short) to baseband processing unit 1 (BBU1) and then to protocol processing unit 1 (PPU1), it passes through a CPRI switch, an sRIO switch, and an Ethernet switch. To complete the transmission of the data stream, the data switching path between the processing units must be configured. In the prior art, for example, the address of BBU1 is manually configured for RRU2, typically through the configuration interface of RRU2, and the address of PPU1 is configured for BBU1 through the configuration interface of BBU1; RRU2 then fills in the address of BBU1 as the destination address after generating a data packet, and BBU1 fills in the address of PPU1 as the destination address after generating its packet.
Therefore, in the prior-art method, a connection relationship is configured for each interconnection network separately, and the address of each processing unit in the task processing path must be acquired separately. The main problems are as follows: first, the addresses may not be acquired synchronously, which can cause errors when the task processing path is initially established; second, because the interconnection and switching mechanisms of the switches differ (as shown in fig. 2, the switching mechanism between RRU2 and BBU1 is CPRI switching while that between BBU1 and PPU1 is RapidIO switching) and different address definitions are used, the configuration is complex and difficult to implement; third, whenever the task processing path changes, the configuration has to be performed manually all over again.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art and providing a method for flexibly and dynamically constructing a task processing path.
According to a first aspect of the present invention, there is provided a method for constructing task processing paths from a plurality of resource pools. The method comprises the following steps:
step 1: determining a task processing path based on task requirements, wherein the task processing path comprises processing units selected from the plurality of resource pools;
step 2: searching a corresponding destination address of the processing unit based on the task processing path;
and step 3: and sending a notification message to the processing unit, wherein the notification message contains a destination address to be sent by the data stream of the processing unit.
In one embodiment, the method further comprises receiving a feedback message from the processing unit.
In one embodiment, the feedback message comprises at least one of: an indication of whether the destination address was successfully configured, the state of the processing unit, the resource utilization rate of the processing unit, and an address change of the processing unit.
In one embodiment, the method further comprises changing the task processing path based on at least one of: resource utilization of the processing unit; a state of the processing unit; a network load; a change in task requirements.
In one embodiment, the types of processing units include a processing board, a DSP, a CPU, a memory, a hard disk, a virtual memory.
In one embodiment, the task requirements include one or more of task priority, bandwidth, latency, throughput, number of carriers.
In one embodiment, in step 2, the corresponding destination address of each processing unit is looked up through a pre-configured address table of the processing unit.
According to a second aspect of the present invention, there is provided a system for constructing task processing paths from a plurality of processing resource pools. The system comprises: means for determining a task processing path based on task requirements, wherein the task processing path includes a processing unit selected from the plurality of processing resource pools; means for searching for a corresponding destination address of the processing unit based on the task processing path; means for sending a notification message to the processing unit, wherein the notification message includes a destination address to which a data stream of the processing unit is to be sent.
In one embodiment, the system further comprises means for configuring or generating the task requirements.
In one embodiment, the system further comprises a management agent means for uniformly sending notification messages to said processing units and/or populating destination addresses for data streams of said processing units.
Compared with the prior art, the invention has the following advantages: by configuring the destination addresses of the processing units in a unified way, task processing paths can be constructed efficiently on demand; and by dynamically combining the processing units in the resource pools, specific functions can be implemented and new functions can be added flexibly.
Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings.
FIG. 1a shows a schematic diagram of a conventional vertical network architecture.
FIG. 1b shows a schematic diagram of a horizontally shared network architecture.
FIG. 2 shows a schematic diagram of resource pools in a base station.
FIG. 3 shows a flow diagram of a method of constructing a task processing path, according to one embodiment of the invention.
FIG. 4 shows a schematic diagram of a system for building task processing paths, according to one embodiment of the invention.
FIG. 5 shows a schematic diagram of a system for building task processing paths, according to another embodiment of the invention.
FIG. 6 illustrates a block diagram of a resource pool suitable for employing the method and system of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. In addition, it should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to one embodiment of the invention, a method of constructing a task processing path is provided. Fig. 3 shows the implementation steps of the method.
1) Step S310, selecting a task processing path.
Selecting a task processing path means selecting, from the resource pools, processing units whose processing capacity meets the task requirements.
In this context, a processing unit is a functional unit that comprises hardware and has a certain processing capability. For example, the processing units in a resource pool may be relatively coarse-grained hardware units, such as a baseband processing board, or relatively fine-grained units, such as a single DSP. Processing units may include physical resources such as CPUs, DSPs, memory, and disks, and may also include virtual resources such as virtual memory.
The task requirements may be configured by a user or determined from the bandwidth, latency, throughput, number of carriers, etc. required by the task. For example, the task requirement may be one carrier, a maximum transmission rate of 50 Mbps, or a combination of both. The invention does not limit the types of task requirements.
The task processing path refers to a combination of processing units in the resource pool, and is used for completing transmission of the data stream. The number of processing units in the task processing path may be two or more.
There are various methods for selecting the task processing path. For example, one can simply select processing units from the resource pools whose processing capability meets the task requirement. As shown in fig. 2, suppose the current task requirement is one LTE carrier (20 MHz, 2x2 MIMO), and the remaining resources in the radio frequency processing resource pool, the baseband processing resource pool, and the protocol processing resource pool are radio frequency processing unit 2 (RRU2), baseband processing unit 1 (BBU1), and protocol processing unit 1 (PPU1), respectively; calculation shows that RRU2, BBU1, and PPU1 can undertake the processing of this LTE carrier. The selected task processing path is therefore RRU2 → BBU1 → PPU1. More specifically, if BBU1 in the baseband processing resource pool is a DSP of model TI TCI6618, which can support a dual-channel 20 MHz, 300 Mbps downlink / 150 Mbps uplink 2x2 multiple-input multiple-output (MIMO) solution, then selecting BBU1 can handle the baseband part of the current task. The RRU and PPU can be selected by a similar analysis; a simplified selection procedure is sketched below.
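By way of illustration, the selection in step S310 can be sketched as the following minimal Python example; the unit names and capacity figures are assumptions made only for this sketch and are not taken from the embodiment above.

```python
# Minimal sketch of step S310: pick one unit per pool whose remaining
# capacity covers the task requirement. All names and numbers are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProcessingUnit:
    name: str            # e.g. "RRU2", "BBU1", "PPU1"
    capacity_mbps: int   # remaining processing capacity, here expressed in Mbps

def select_path(pools, required_mbps):
    """Return one unit per pool that can carry the task, or None."""
    path = []
    for pool in pools:                      # e.g. RF pool, baseband pool, protocol pool
        candidate = next((u for u in pool if u.capacity_mbps >= required_mbps), None)
        if candidate is None:
            return None                     # no unit in this pool can take the task
        path.append(candidate)
    return path                             # e.g. [RRU2, BBU1, PPU1]

rf_pool = [ProcessingUnit("RRU1", 0), ProcessingUnit("RRU2", 300)]
bb_pool = [ProcessingUnit("BBU1", 300)]
pp_pool = [ProcessingUnit("PPU1", 300)]

print(select_path([rf_pool, bb_pool, pp_pool], required_mbps=150))
```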
After the task processing path is determined, it may also be changed dynamically according to monitored network data or the actual state of the processing units in the resource pools. For example, if the load of the LTE carrier increases beyond the processing capability of PPU1, a suitable processing unit (e.g., PPU2) may be selected from the protocol processing resource pool and added to the task processing path. Similarly, if the load of the LTE carrier decreases, some processing units may be reclaimed for use by other virtual base stations. In this way, the utilization of the processing units can be improved effectively.
As can be seen from the above, the relationship between the processing units in the task processing path may be one-to-one, one-to-many, or many-to-many. For example, BBU1 may correspond to two processing units PPU1 and PPU 2.
2) Step S320, find the destination address of the data stream of the processing unit in the task processing path.
The purpose of this step is to determine, from the connection relationships between the processing units in the task processing path, the next processing unit to which the data stream processed by each processing unit must be forwarded.
To this end, an information table reflecting the connection relationships of the processing units may be stored in advance, giving the composition of the entire resource pool. For example, for each processing unit, its type, identifier, hardware board, chassis, and connection information (such as the switch it is connected to and its address on that switch) may be saved. A processing unit may have multiple connection relationships; for example, the baseband processing resource pool in fig. 2 is connected to the radio frequency processing units through the CPRI switching network and to the protocol processing units through the sRIO switching network.
Table 1 shows the contents of the information table, taking BBU as an example. The connection relation of each processing unit and the address thereof can be acquired through the information table.
Table 1 BBU information table
[Table 1 is reproduced as an image in the original publication; its cell contents are not available in text form.]
Therefore, in this step, given the composition of the task processing path, the destination address to which each relevant processing unit forwards its data stream can be obtained by looking up the information table of the processing units, as sketched below.
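Purely as an illustration of this lookup, the information table and the per-hop address search could be modelled as follows; the field names, addresses, and table layout are assumptions, since the description only states which kinds of information are stored.

```python
# Hypothetical layout of the pre-stored information table (step S320).
# Field names and addresses are assumptions for illustration.
INFO_TABLE = {
    "RRU2": {"type": "RRU", "connections": {"CPRI": {"switch": "CPRI-SW1", "address": "cpri:0x12"}}},
    "BBU1": {"type": "BBU", "connections": {"CPRI": {"switch": "CPRI-SW1", "address": "cpri:0x21"},
                                            "sRIO": {"switch": "sRIO-SW1", "address": "srio:0x05"}}},
    "PPU1": {"type": "PPU", "connections": {"sRIO": {"switch": "sRIO-SW1", "address": "srio:0x0A"}}},
}

def lookup_destinations(path):
    """For each unit in the path, find the address of the next unit
    on a switching network that both units share."""
    destinations = {}
    for current, nxt in zip(path, path[1:]):
        shared = set(INFO_TABLE[current]["connections"]) & set(INFO_TABLE[nxt]["connections"])
        network = shared.pop()                       # e.g. "CPRI" between RRU2 and BBU1
        destinations[current] = INFO_TABLE[nxt]["connections"][network]["address"]
    return destinations

print(lookup_destinations(["RRU2", "BBU1", "PPU1"]))
# {'RRU2': 'cpri:0x21', 'BBU1': 'srio:0x0A'}
```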
3) In step S330, the destination address is notified to the corresponding processing unit.
In this step, a notification message is sent to each processing unit to inform it of its destination address. For example, if the data stream of RRU2 needs to be forwarded to BBU1, the address of BBU1 is the destination address of the data stream on RRU2; to complete the transmission of the data stream, RRU2 must be notified of the address corresponding to BBU1.
For example, the notification message may include, but is not limited to, the following fields (a sketch of one possible encoding follows the list):
Information type: indicates that the message is a destination address notification.
Information number: the unique identifier of the message, which can be generated by the system in a unified way.
Processing unit identifier: the unique identifier of the processing unit to be notified, e.g., DSP 1 or CPU 1.
Destination address: the address to which the data stream needs to be sent. Depending on the type of the processing unit, the destination address may take various forms, such as an IP address, a MAC address, a DSP number, or a memory number. Depending on the correspondence between processing units, one notification message may contain multiple destination addresses. The address type also needs to be indicated in the notification message so that the processing unit can parse it correctly.
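One possible, assumed encoding of such a notification message is sketched below; the field names and the JSON format are illustrative and are not prescribed by the description.

```python
# Assumed structure of a destination-address notification message (step S330).
# The description lists the fields; the JSON encoding here is illustrative only.
import json
import itertools

_msg_counter = itertools.count(1)   # system-wide unified message numbering

def build_notification(unit_id, destinations, address_type):
    return json.dumps({
        "info_type": "NOTIFY_DEST_ADDR",        # information type
        "info_number": next(_msg_counter),      # unique message number
        "unit_id": unit_id,                     # processing unit to notify, e.g. "RRU2"
        "address_type": address_type,           # e.g. "CPRI", "MAC", "IP", "DSP_NO"
        "destinations": destinations,           # one or more next-hop addresses
    })

# RRU2 is told to send its data stream to BBU1's CPRI address.
print(build_notification("RRU2", ["cpri:0x21"], "CPRI"))
```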
4) In step S340, it is determined whether feedback is obtained for all processing units.
To know whether each processing unit has received its destination address and successfully completed the related configuration, a feedback message for the destination address notification may be received from all processing units before task processing starts. If the feedback indicates that the destination address was configured successfully, step S350 is executed; otherwise, one may continue waiting for a period of time, retransmit the notification, or consider updating the task processing path. In this way, data stream transmission starts only after every processing unit has been configured successfully, avoiding data transmission errors caused by an abnormal processing unit.
In one embodiment, the feedback message received from a processing unit includes, but is not limited to, the following fields:
Information type: indicates that the message is feedback to a destination address notification.
Information number: the number of the destination address notification to which this feedback corresponds.
Success or failure flag: indicates whether the destination address was configured successfully.
The feedback message may further report whether the processing unit is operating normally, its resource utilization, changes of its address, and so on.
In practice, the listed items may be carried in one message or spread over several messages; for example, a separate status report message may be used to indicate a processing unit exception.
Feedback may be reported on events or periodically. For example, when the destination address configuration succeeds or fails, when the processing unit becomes abnormal, when its load exceeds a threshold, or when its own address changes, a report is sent immediately, so that whether to update the task processing path can be decided based on the feedback message; resource utilization, by contrast, can be reported periodically. A sketch of how such feedback might be handled follows.
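The handling of such feedback on the control side might look like the following sketch; the threshold value, field names, and return convention are assumptions.

```python
# Sketch of feedback handling (step S340). Field names and the load
# threshold are illustrative assumptions.
LOAD_THRESHOLD = 0.8

def handle_feedback(feedback, pending_notifications):
    """Return True if the task processing path should be re-evaluated."""
    # Match the feedback to the notification it answers.
    pending_notifications.discard(feedback["info_number"])

    if not feedback["success"]:
        return True                              # destination address configuration failed
    if feedback.get("unit_abnormal", False):
        return True                              # processing unit reported an exception
    if feedback.get("load", 0.0) > LOAD_THRESHOLD:
        return True                              # event-triggered report: overload
    if feedback.get("address_changed", False):
        return True                              # the unit's own address changed
    return False                                 # periodic utilisation reports need no action

pending = {1, 2}
print(handle_feedback({"info_number": 1, "success": True, "load": 0.3}, pending))  # False
print(pending)  # {2}: still waiting for one unit before starting the task
```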
In another embodiment, to reduce the amount of message exchange, step S350 may instead be performed directly once a predetermined time threshold has elapsed after the notification message was sent.
5) Step S350, start the task.
Starting the task means starting the actual data stream transmission. For example, as shown in fig. 2, for the task processing path RRU2 → BBU1 → PPU1, the basic data transmission process is as follows: the data to be processed first enters RRU2; the processed data is encapsulated according to the CPRI protocol with the address of BBU1 filled in as the destination address and sent to the CPRI switch; the CPRI switch forwards the packet to BBU1 according to its destination address; after receiving the data, BBU1 performs baseband processing, encapsulates the result according to the RapidIO protocol with the address of PPU1 as the destination address, and sends it to the sRIO switch; the sRIO switch forwards the packet to PPU1 according to its destination address; upon receipt, PPU1 performs the protocol processing and completes the task. The hop-by-hop addressing involved is sketched below.
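The following sketch mirrors this hop-by-hop behaviour; only the destination-address handling is modelled, the CPRI/RapidIO framing is omitted, and all identifiers are illustrative assumptions.

```python
# Sketch of the data flow along the path RRU2 -> BBU1 -> PPU1 (step S350).
# Switch names and addresses are assumptions.
CONFIGURED_DEST = {        # destination addresses configured in step S330
    "RRU2": ("CPRI switch", "cpri:0x21"),   # next hop: BBU1
    "BBU1": ("sRIO switch", "srio:0x0A"),   # next hop: PPU1
}
ADDRESS_OWNER = {"cpri:0x21": "BBU1", "srio:0x0A": "PPU1"}

def transmit(data, path):
    unit = path[0]
    while unit in CONFIGURED_DEST:
        data = f"{unit}({data})"                       # the unit processes the data
        switch, dest_addr = CONFIGURED_DEST[unit]      # fill in the configured destination address
        unit = ADDRESS_OWNER[dest_addr]                # the switch forwards on that address
        print(f"{switch} forwards packet to {unit} at {dest_addr}")
    return f"{unit}({data})"                           # the last unit finishes the task

print(transmit("iq_samples", ["RRU2", "BBU1", "PPU1"]))
# PPU1(BBU1(RRU2(iq_samples)))
```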
As can be seen from the above, the method for constructing a task processing path of the present invention can notify the relevant processing units, in a unified way, of the destination addresses to which they should forward the data stream after completing their own processing. The method can establish and update task processing paths flexibly and effectively.
There is also provided, in accordance with an embodiment of the present invention, a system adapted to perform the method of constructing a task processing path of the present invention. As shown in fig. 4, the system includes a configuration layer, a control layer, and a physical resource layer (i.e., a resource pool). The configuration layer carries out information interaction with the control layer through the configuration interface, and the control layer carries out information interaction with the resource pool through the control interface. The configuration layer and the control layer are only logical function modules, and may be implemented in the management and control center shown in fig. 4, or may be independent of the management and control center.
In this embodiment, the role of the configuration layer is to pass the task requirements to the control layer. For example, the task requirements may be defined by a user at the configuration layer. The configuration layer may also provide an interface to receive task processing requirements from other external modules.
The role of the control layer includes, but is not limited to: allocating addresses to certain processing units; storing the composition of the processing units of the entire resource pool; forming a task processing path according to the task requirements passed down by the configuration layer, the actual situation of the physical resource layer, and/or the resource management strategy of the management and control center; interacting with the processing units to configure the underlying physical resources and generate a switching path matching the task processing path; and acquiring status information of the processing units, and so on.
The information interaction between the control layer and the processing units includes sending notification messages that inform the processing units of their destination addresses and receiving feedback messages from the processing units.
The control layer may also be responsible for allocating destination addresses to certain processing units in the resource pools. For Ethernet switches, for example, the MAC addresses of the connected devices belong to the devices themselves and need no additional allocation; for sRIO switches, by contrast, the addresses of the connected devices must be assigned. After the control layer assigns addresses to such processing units, it notifies the corresponding switches and processing units so that the switches can map the processing unit addresses to ports. As long as both parties of the information interaction can uniquely identify the target processing unit from the destination address, the allocated address may take various forms; for example, a DSP address may be a simple uniform number, or be determined uniquely by the rack number to which the DSP belongs plus a board number plus a local DSP number, as in the sketch below.
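As an assumed example of such an allocation scheme, rack, board, and local DSP numbers could be packed into a single address as follows; the bit widths are arbitrary assumptions and serve only to show that the encoding merely has to identify the target unit uniquely.

```python
# Illustrative address allocation for DSPs (the control layer assigns these
# and informs the switch and the unit). Bit widths are assumptions.
def dsp_address(rack: int, board: int, dsp: int) -> int:
    """Pack rack number, board number and local DSP number into one ID."""
    assert 0 <= rack < 16 and 0 <= board < 16 and 0 <= dsp < 16
    return (rack << 8) | (board << 4) | dsp

def dsp_location(address: int):
    """Recover (rack, board, dsp) from a packed address."""
    return (address >> 8) & 0xF, (address >> 4) & 0xF, address & 0xF

addr = dsp_address(rack=2, board=3, dsp=1)
print(hex(addr), dsp_location(addr))   # 0x231 (2, 3, 1)
```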
The control interface between the control layer and the processing units is a logical interface and may also exist between the control layer and the switching networks. The information interaction between the control layer and the processing units may be carried out directly by the control layer with each processing unit, or the control layer may first notify a management agent, which then notifies the processing units; fig. 5 illustrates an example with a management agent.
In another embodiment, a management agent may also exist within each resource pool or within each switching network. For example, if a management agent exists in the CPRI switching network, the control layer notifies this agent of the address of BBU1 and the agent stores it. When the data processed by RRU2 enters the CPRI switching network, the management agent inside the network fills the destination address of RRU2's data stream with the address of BBU1, and the switching entity in the network forwards it to BBU1 according to that destination address. By having the management agent fill in the destination addresses of data streams centrally, information interaction between the control layer and each individual processing unit in the resource pool can be avoided, and the processing units can be managed centrally; a sketch of such an agent follows.
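A management agent of this kind could be sketched as follows; the class and method names are assumptions, and only the address-stamping behaviour described above is modelled.

```python
# Sketch of a management agent inside the CPRI switching network.
# The control layer stores BBU1's address with the agent once; the agent
# then fills the destination address of every packet of RRU2's stream.
class ManagementAgent:
    def __init__(self):
        self._dest_by_stream = {}          # stream id -> destination address

    def configure(self, stream_id, dest_addr):
        """Called by the control layer (step S330)."""
        self._dest_by_stream[stream_id] = dest_addr

    def on_packet(self, stream_id, payload):
        """Called for each packet entering the switching network."""
        packet = {"dest": self._dest_by_stream[stream_id], "payload": payload}
        return packet                       # the switching entity forwards on packet["dest"]

agent = ManagementAgent()
agent.configure("RRU2-stream", "cpri:0x21")            # BBU1's address
print(agent.on_packet("RRU2-stream", b"iq_samples"))
```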
As can be seen from the above, in the system according to the present invention the functions of the configuration layer and the control layer are simple and independent, which reduces the coupling between modules and the complexity of the implementation. Because the control layer informs each processing unit of the destination address for forwarding its data stream, a task processing path can be constructed or changed flexibly and dynamically, and the problem of processing units acquiring their destination addresses at different times is avoided. In addition, by receiving feedback from the processing units, the control layer can learn their states (normal, abnormal, resource utilization, etc.) in time and thus manage the resource pools centrally.
For clarity, the method and system of the present invention have mainly been described using base station processing resources as an example. Those skilled in the art will understand that the method and system for constructing a task processing path described above can also be applied in other technical fields. For the resource pools shown in fig. 6, for instance, which comprise a first processing resource pool, a second processing resource pool, and a third processing resource pool connected by a switching matrix or switching network, the method and system according to the present invention can likewise be applied to construct task processing paths, avoiding the time-synchronization problems that arise when a data stream transmission path is constructed conventionally using the switching protocols.
Although some specific embodiments of the present invention have been described in detail by way of examples, it should be understood by those skilled in the art that the above examples are for illustrative purposes only and are not intended to limit the scope of the present invention. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. A method for constructing a task processing path from a plurality of processing resource pools, wherein the plurality of processing resource pools are connected to each other by a switching matrix or a switching network, the method comprising:
step 1: determining a task processing path based on task requirements, wherein the task processing path comprises a processing unit selected from the plurality of processing resource pools, and the processing unit is a functional unit which comprises hardware and has a certain processing capacity;
step 2: searching a corresponding destination address of the processing unit based on the task processing path;
and step 3: sending a notification message to the processing unit, wherein the notification message includes the destination address to which the data stream of the processing unit is to be sent, and both the corresponding destination address of the processing unit and the destination address to which the data stream of the processing unit is to be sent refer to the address of the next processing unit in the task processing path to which the data stream of the processing unit needs to be sent after the processing unit completes its own processing task.
2. The method of claim 1, further comprising: receiving a feedback message from the processing unit.
3. The method of claim 2, wherein the feedback message comprises at least one of: the indication of whether the destination address is successfully configured, the state of the processing unit, the resource utilization rate of the processing unit and the address change of the processing unit.
4. The method of claim 1, further comprising changing the task processing path based on at least one of: resource utilization of the processing unit; a state of the processing unit; a network load; a change in task requirements.
5. The method of claim 1, wherein the type of processing unit comprises a processing board, a DSP, a CPU, a memory, a hard disk, a virtual memory.
6. The method of claim 1, wherein the task requirements include one or more of task priority, bandwidth, latency, throughput, number of carriers.
7. The method according to claim 1, wherein in step 2, the corresponding destination address of each processing unit is looked up by a pre-configured address table of the processing unit.
8. A system for constructing task processing paths from a plurality of processing resource pools, wherein the processing resource pools are connected using a switching matrix or a switching network, the system comprising:
means for determining a task processing path based on task requirements, wherein the task processing path comprises a processing unit selected from the plurality of processing resource pools, and the processing unit refers to a functional unit which comprises hardware and has a certain processing capacity;
means for searching for a corresponding destination address of the processing unit based on the task processing path;
and means for sending a notification message to the processing unit, where the notification message includes a destination address to which a data stream of the processing unit is to be sent, where the corresponding destination address of the processing unit and the destination address to which the data stream of the processing unit is to be sent both refer to an address of a next processing unit in the task processing path to which the data stream of the processing unit needs to be sent after the processing unit completes its own processing task.
9. The system of claim 8, further comprising means for configuring or generating the task requirements.
10. The system according to claim 8 or 9, further comprising a management agent means for uniformly sending notification messages to the processing units and/or populating destination addresses for data streams of the processing units.
CN201611244893.XA 2016-12-29 2016-12-29 Method and system for constructing task processing path Active CN106658520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611244893.XA CN106658520B (en) 2016-12-29 2016-12-29 Method and system for constructing task processing path

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611244893.XA CN106658520B (en) 2016-12-29 2016-12-29 Method and system for constructing task processing path

Publications (2)

Publication Number Publication Date
CN106658520A CN106658520A (en) 2017-05-10
CN106658520B CN106658520B (en) 2020-11-03

Family

ID=58836695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611244893.XA Active CN106658520B (en) 2016-12-29 2016-12-29 Method and system for constructing task processing path

Country Status (1)

Country Link
CN (1) CN106658520B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108111630B (en) * 2018-01-22 2021-11-02 北京奇艺世纪科技有限公司 Zookeeper cluster system and connection method and system thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103023781A (en) * 2012-12-13 2013-04-03 清华大学 Shortest path tree and spanning tree combined energy-saving routing method
CN103391245A (en) * 2013-07-18 2013-11-13 中国人民解放军信息工程大学 Method and device for constructing multi-state routing in network domain as well as router
CN105897584A (en) * 2014-06-09 2016-08-24 华为技术有限公司 Route planning method and controller
CN106314828A (en) * 2016-08-26 2017-01-11 北京遥测技术研究所 Dynamic reconfigurable ground measuring and controlling system

Also Published As

Publication number Publication date
CN106658520A (en) 2017-05-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant