CN110932999A - Service processing method and equipment - Google Patents

Service processing method and equipment

Info

Publication number
CN110932999A
CN110932999A (application CN201811099522.6A)
Authority
CN
China
Prior art keywords
code block
service
traffic
code
idle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811099522.6A
Other languages
Chinese (zh)
Inventor
程伟强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN201811099522.6A
Priority to PCT/CN2019/106370
Publication of CN110932999A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/70: Admission control; Resource allocation
    • H04L 47/76: Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04J: MULTIPLEX COMMUNICATION
    • H04J 3/00: Time-division multiplex systems
    • H04J 3/16: Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling
    • H04L 47/52: Queue scheduling by attributing bandwidth to queues
    • H04L 47/525: Queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The embodiments of the present invention provide a service processing method and device. The method includes: identifying idle code blocks from first code block traffic corresponding to a first service; replacing the idle code blocks with first code blocks used by a second service; mixing second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service to obtain second code block traffic; and sending the second code block traffic obtained by the mixing processing to a receiving end. In the embodiments of the present invention, the sending end replaces the idle code blocks in the first code block traffic corresponding to the first service with the first code blocks used by the second service, mixes the second code blocks used by the first service with the first code blocks used by the second service to obtain the second code block traffic, and sends the second code block traffic to the receiving end. By releasing and reusing bandwidth resources that are allocated but unused, while ensuring that the existing traffic in the time-division system is not affected, the bandwidth utilization of the FlexE system is improved.

Description

Service processing method and equipment
Technical Field
The present invention relates to the field of wireless communications, and in particular, to a service processing method and device.
Background
FlexE technology is defined in the Flex Ethernet (FlexE) Implementation Agreement 1.0 of the Optical Internetworking Forum (OIF). FlexE can divide bandwidth into multiple time-slotted channels, bind a group of physical layer (PHY) links, and support sub-rate operation. FlexE runs on a calendar of 64B/66B code blocks; the calendar slot granularity is 5 Gbit/s, so a 100G Ethernet physical layer (PHY) can be divided into 20 slots.
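The calendar arithmetic described above can be made concrete with a short sketch. It is not part of the patent; the constant and function names are assumptions introduced only for illustration.

    # Minimal sketch (not from the patent): FlexE calendar-slot arithmetic.
    # A calendar slot has a granularity of 5 Gbit/s, so a 100G PHY yields 20 slots.
    SLOT_GRANULARITY_G = 5       # calendar slot granularity in Gbit/s
    PHY_RATE_G = 100             # one 100GE physical layer (PHY)

    def slots_per_phy(phy_rate_g: int = PHY_RATE_G) -> int:
        """Number of calendar slots one PHY provides (20 for a 100G PHY)."""
        return phy_rate_g // SLOT_GRANULARITY_G

    def slots_for_client(client_rate_g: int) -> int:
        """Slots a fixed-rate FlexE client occupies (ceiling division)."""
        return -(-client_rate_g // SLOT_GRANULARITY_G)

    print(slots_per_phy())         # 20
    print(slots_for_client(10))    # 2 slots for a 10GE client
    print(slots_for_client(25))    # 5 slots for a 25GE client
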
Currently, FlexE defines only the rates of 10 Gigabit Ethernet (10GE), 40GE, and n × 25GE. When bandwidth is allocated to a service, it has to match the corresponding peak rate; that is, fixed bandwidth resources are allocated to each FlexE user. When the services of these FlexE users carry only light data traffic, or no data at all, a large amount of bandwidth is wasted because of the fixed allocation, resulting in low bandwidth utilization.
As shown in fig. 1, assume a FlexE user has two types of services. Service 1 and service 2 are referred to as "existing services" and represent delay-sensitive services, for example Augmented Reality (AR) and Virtual Reality (VR). Service 3 is called a "new service" and represents a bandwidth-limited burst service transmitted by the conventional store-and-forward method, for example e-mail and Web pages. The services are multiplexed into calendar slots at the sending end by a flexible Ethernet multiplexer (FlexE MUX), sent to the receiving end over a flexible Ethernet group (FlexE Group), and demultiplexed at the receiving end to the peer users by a flexible Ethernet demultiplexer (FlexE DMUX).
In short, because the current FlexE allocates fixed bandwidth at the peak rate, services with only light data traffic, or with no data to send at all, waste a large amount of bandwidth resources, resulting in low bandwidth utilization.
Disclosure of Invention
The embodiments of the present invention provide a service processing method and device, which solve the problem of low bandwidth utilization in a FlexE system.
According to a first aspect of the embodiments of the present invention, a service processing method is provided and applied to a sending end. The method includes: identifying idle code blocks from first code block traffic corresponding to a first service; replacing the idle code blocks with first code blocks used by a second service; mixing second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service to obtain second code block traffic; and sending the second code block traffic obtained by the mixing processing to a receiving end.
Optionally, the idle code blocks include: code blocks already allocated to the first service but unused by the first service, and/or code blocks reserved for the second service.
Optionally, the second code block traffic includes: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
Optionally, the coding scheme of the third code block is the 64B/66B coding scheme, and the third code block includes mapping information of the idle code blocks, where the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
According to a second aspect of the embodiments of the present invention, another service processing method is provided and applied to a receiving end. The method includes: receiving second code block traffic from a sending end; sending a second service that uses first code blocks in the second code block traffic to a corresponding first user; and sending a first service that uses second code blocks in the second code block traffic to a corresponding second user; where the second code block traffic is obtained by mixing the second code blocks used by the first service in first code block traffic with the first code blocks used by the second service, and the first code blocks used by the second service reuse the positions of unused idle code blocks in the first code block traffic.
Optionally, the idle code blocks include: code blocks allocated to the first service but unused, and/or code blocks reserved for the second service.
Optionally, the second code block traffic includes: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
Optionally, the coding scheme of the third code block is the 64B/66B coding scheme, and the third code block includes mapping information of the idle code blocks, where the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
According to a third aspect of the embodiments of the present invention, a sending end is provided, including a first transceiver and a first processor. The first processor is configured to identify idle code blocks from first code block traffic corresponding to a first service; the first processor is further configured to replace the idle code blocks with first code blocks used by a second service; the first processor is further configured to mix second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service to obtain second code block traffic; and the first transceiver is configured to send the second code block traffic obtained by the mixing processing to a receiving end.
Optionally, the idle code blocks include: code blocks already allocated to the first service but unused by the first service, and/or code blocks reserved for the second service.
Optionally, the second code block traffic includes: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
Optionally, the coding scheme of the third code block is the 64B/66B coding scheme, and the third code block includes mapping information of the idle code blocks, where the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
According to a fourth aspect of the embodiments of the present invention, a receiving end is provided, including a second transceiver. The second transceiver is configured to receive second code block traffic from a sending end; the second transceiver is further configured to send a second service that uses first code blocks in the second code block traffic to a corresponding first user; and the second transceiver is further configured to send a first service that uses second code blocks in the second code block traffic to a corresponding second user; where the second code block traffic is obtained by mixing the second code blocks used by the first service in first code block traffic with the first code blocks used by the second service, and the first code blocks used by the second service reuse the positions of unused idle code blocks in the first code block traffic.
Optionally, the idle code blocks include: code blocks allocated to the first service but unused, and/or code blocks reserved for the second service.
Optionally, the second code block traffic includes: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
Optionally, the coding scheme of the third code block is the 64B/66B coding scheme, and the third code block includes mapping information of the idle code blocks, where the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
According to a fifth aspect of the embodiments of the present invention, a communication device is provided, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the service processing method according to the first aspect or the steps of the service processing method according to the second aspect.
According to a sixth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the service processing method according to the first aspect or the second aspect.
In the embodiments of the present invention, the sending end replaces the idle code blocks in the first code block traffic corresponding to the first service with the first code blocks used by the second service, mixes the second code blocks used by the first service with the first code blocks used by the second service to obtain the second code block traffic, and sends the second code block traffic to the receiving end. By releasing and reusing bandwidth resources that are allocated but unused, while ensuring that the existing traffic in the time-division system is not affected, the bandwidth utilization of the FlexE system is improved.
Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a block diagram of a hard-pipe based FlexE multiplexing system in the prior art;
fig. 2 is a flowchart of a service processing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of code block traffic with 20 code blocks per row according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of 64B/66B code block types according to an embodiment of the invention;
fig. 5 is a mapping diagram of idle code block information according to an embodiment of the present invention;
fig. 6 is a second flowchart of a service processing method according to an embodiment of the present invention;
FIG. 7 is a block diagram of a system for implementing hybrid multiplexing on FlexE according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a transmitting end according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a receiving end according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a communication device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The techniques described herein are not limited to Long Term Evolution (LTE)/LTE-Advanced (LTE-A) systems, and may also be used for various wireless communication systems, such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal Frequency Division Multiple Access (OFDMA), Single-carrier Frequency Division Multiple Access (SC-FDMA), and other systems, for example a fifth-generation (5G) mobile communication system and subsequent evolved communication systems.
The terms "system" and "network" are often used interchangeably. CDMA systems may implement Radio technologies such as CDMA2000, Universal Terrestrial Radio Access (UTRA), and so on. UTRA includes Wideband CDMA (Wideband Code Division Multiple Access, WCDMA) and other CDMA variants. TDMA systems may implement radio technologies such as Global System for Mobile communications (GSM). The OFDMA system may implement radio technologies such as Ultra Mobile Broadband (UMB), evolved-UTRA (E-UTRA), IEEE 802.11(Wi-Fi), IEEE 802.16(WiMAX), IEEE 802.20, Flash-OFDM, etc. UTRA and E-UTRA are parts of the Universal Mobile Telecommunications System (UMTS). LTE and higher LTE (e.g., LTE-A) are new UMTS releases that use E-UTRA. UTRA, E-UTRA, UMTS, LTE-A, and GSM are described in literature from an organization named "third Generation Partnership Project" (3 GPP). CDMA2000 and UMB are described in documents from an organization named "third generation partnership project 2" (3GPP 2). The techniques described herein may be used for both the above-mentioned systems and radio technologies, as well as for other systems and radio technologies.
The terms first, second and the like in the description and in the claims of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Referring to fig. 2, an embodiment of the present invention provides a service processing method, which is applied to a sending end, and includes the following specific steps:
step 201: identifying idle code blocks from first code block traffic corresponding to a first service;
In the embodiment of the present invention, the first service is a delay-sensitive type of service, for example service 1 and service 2 in fig. 1, i.e., the "existing services".
The idle code block includes: code blocks already allocated to the first traffic but unused by the first traffic, and/or code blocks reserved for the second traffic.
Step 202: replacing the idle code block with a first code block used by a second service;
In this embodiment of the present invention, the second service is a bandwidth-limited burst traffic service transmitted by the conventional store-and-forward method, for example service 3 in fig. 1, i.e., the "new service".
Step 203: mixing second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service to obtain second code block traffic;
In the embodiment of the present invention, the second code block traffic further includes a third code block, and the third code block is used to indicate the positions of all the idle code blocks in the row where the third code block is located.
Specifically, referring to fig. 3, code block traffic with 20 code blocks in each row is taken as an example. The code blocks labeled "D/C" are the second code blocks; the code blocks carrying the other two markers are the idle code blocks in the first code block traffic, where one marker represents a code block reserved for the second service and the other represents a code block allocated to the first service but unused by it; the code block labeled "#" is the third code block. During the hybrid processing, both kinds of idle code blocks are filled with the first code blocks used by the second service.
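The replacement performed during the hybrid processing can be sketched as follows. This is an illustrative assumption rather than the patent's implementation: a calendar row is modeled as a list of 20 strings, and the markers "*" (reserved for the second service) and "+" (allocated to the first service but unused) stand in for the two kinds of idle code blocks of fig. 3.

    # Minimal sketch (illustrative assumption): fill the idle positions of one
    # calendar row with code blocks of the second service (steps 202 and 203).
    from collections import deque

    IDLE_MARKS = {"*", "+"}   # "*": reserved for service 2; "+": allocated to service 1 but unused

    def hybrid_mux_row(row, service2_blocks):
        """Replace each idle code block with the next second-service code block,
        leaving the first service's "D/C" blocks and the "#" block untouched."""
        out = []
        for block in row:
            if block in IDLE_MARKS and service2_blocks:
                out.append(service2_blocks.popleft())   # reuse the idle position
            else:
                out.append(block)                       # keep service 1 data and "#"
        return out

    row = ["#", "D/C", "*", "D/C", "+", "D/C"] + ["D/C"] * 14   # 20 code blocks
    svc2 = deque(["S2-0", "S2-1", "S2-2"])
    print(hybrid_mux_row(row, svc2))
    # ['#', 'D/C', 'S2-0', 'D/C', 'S2-1', 'D/C', 'D/C', ...]
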
Further, referring to fig. 4, the 64B/66B code block types are shown; this coding scheme may be used to encode the third code block. The bits marked with oblique lines in the regions D1, D2, and D3 in the figure can carry the mapping information of the idle code blocks, which indicates the positions of all the idle code blocks in the row where the third code block is located.
Further, referring to fig. 5, the mapping information of the idle code blocks is shown. The mapping information of each row may be carried in one code block (the block denoted "#" in fig. 3) to indicate the positions of the idle code blocks. Each symbol corresponds to one code block in fig. 3: "1" denotes a second code block used by the first service, "0" denotes a first code block used by the second service, and "x" denotes a code block that has been allocated to the first service but is not used by it.
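How such mapping information could be derived is sketched below. This is an assumption for illustration only; it reuses the string markers of the previous sketch and does not reproduce the 64B/66B bit layout of fig. 4.

    # Minimal sketch (illustrative assumption): build the per-row mapping string
    # carried by the third code block, using the symbols of fig. 5:
    #   "1" - second code block used by the first service ("D/C")
    #   "0" - first code block used by the second service (a reused idle position)
    #   "x" - code block allocated to the first service but still unused
    def build_mapping_info(row_before, row_after):
        symbols = []
        for before, after in zip(row_before, row_after):
            if before == "D/C":
                symbols.append("1")
            elif before != after:          # an idle block now carrying service 2
                symbols.append("0")
            else:                          # an idle block left empty
                symbols.append("x")
        return "".join(symbols)

    row_before = ["D/C", "*", "D/C", "+", "+", "D/C"]
    row_after  = ["D/C", "S2-0", "D/C", "S2-1", "+", "D/C"]
    print(build_mapping_info(row_before, row_after))   # "1010x1"
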
Step 204: sending the second code block traffic obtained by the mixing processing to the receiving end.
In the embodiment of the present invention, the sending end replaces the idle code blocks in the first code block traffic corresponding to the first service with the first code blocks used by the second service, mixes the second code blocks used by the first service with the first code blocks used by the second service to obtain the second code block traffic, and sends the second code block traffic to the receiving end. By releasing and reusing bandwidth resources that are allocated but unused, while ensuring that the existing traffic in the time-division system is not affected, the bandwidth utilization of the FlexE system is improved.
Referring to fig. 6, an embodiment of the present invention provides another service processing method, which is applied to a receiving end, and includes the following specific steps:
step 601: receiving second code block traffic from the transmitting end;
In the embodiment of the present invention, the second code block traffic is obtained by mixing the second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service, and the first code blocks used by the second service reuse the positions of the unused idle code blocks in the first code block traffic.
The manner of generating the second code block traffic at the transmitting end may refer to the description in step 201 to step 203 in fig. 2, and is not described again here.
Step 602: sending the second service that uses the first code blocks in the second code block traffic to the corresponding first user;
Step 603: sending the first service that uses the second code blocks in the second code block traffic to the corresponding second user.
In the embodiment of the present invention, the sending end mixes the second code blocks used by the first service with the first code blocks used by the second service to obtain the second code block traffic and sends it to the receiving end; the receiving end sends the second service that uses the first code blocks and the first service that uses the second code blocks in the second code block traffic to the corresponding first user and second user, respectively. By releasing and reusing bandwidth resources that are allocated but unused, while ensuring that the existing traffic in the time-division system is not affected, the bandwidth utilization of the FlexE system is improved.
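The receiving-side split can be sketched in the same spirit. This is an illustrative assumption only: it consumes the per-row mapping string of the earlier sketch ("1" for the first service, "0" for the second service, "x" for an idle block that was left unused) instead of parsing a real 64B/66B third code block.

    # Minimal sketch (illustrative assumption): split one received row back into
    # first-service and second-service code blocks using the mapping string.
    def hybrid_demux_row(row, mapping_info):
        service1_blocks, service2_blocks = [], []
        for block, symbol in zip(row, mapping_info):
            if symbol == "1":
                service1_blocks.append(block)   # first service -> second user
            elif symbol == "0":
                service2_blocks.append(block)   # second service -> first user
            # "x": still idle, nothing to deliver
        return service1_blocks, service2_blocks

    row = ["D1", "S2-0", "D2", "S2-1", "+", "D3"]
    svc1, svc2 = hybrid_demux_row(row, "1010x1")
    print(svc1)   # ['D1', 'D2', 'D3']  -> delivered to the second user
    print(svc2)   # ['S2-0', 'S2-1']    -> delivered to the first user
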
Referring to fig. 7, an embodiment of the present invention provides a system flow for implementing hybrid multiplexing on FlexE. The FlexE MUX first multiplexes the delay-sensitive "existing services" (for example, service 1 and service 2 in the figure), and a hybrid multiplexer (Hybrid MUX) then multiplexes them together with the "new service" (for example, service 3 in the figure). The Hybrid MUX identifies the idle code blocks in the code block traffic output by the FlexE MUX and replaces them with code blocks of the "new service". The sending end then sends the result to the receiving end over a FlexE Group. The receiving end has a symmetrical structure: a hybrid demultiplexer (Hybrid DMUX) first demultiplexes the "new service" and delivers it to the corresponding FlexE peer user, and the remaining "existing services" are then delivered to the corresponding FlexE peer users through the FlexE DMUX.
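The symmetry of fig. 7 can be summarized as a pipeline of stages. The sketch below is an assumption for illustration only: each stage is reduced to a trivial function so that only the order of operations is visible, the demultiplexer separates blocks by a label prefix rather than by the third code block's mapping information, and no real FlexE framing is modeled.

    # Minimal sketch (illustrative assumption): the processing order of fig. 7.
    def flexe_mux(existing_services):
        """Multiplex the delay-sensitive "existing services" into one row,
        leaving some allocated-but-unused positions ("+")."""
        row = []
        for service in existing_services:
            row.extend(service)
        return row + ["+"] * 4

    def hybrid_mux(row, new_service):
        """Fill idle positions ("+") with "new service" code blocks."""
        blocks = list(new_service)
        return [blocks.pop(0) if b == "+" and blocks else b for b in row]

    def hybrid_demux(row):
        """Separate "new service" blocks (labeled "S3-*") from the rest."""
        new = [b for b in row if b.startswith("S3")]
        existing = [b for b in row if not b.startswith("S3")]
        return existing, new

    sent = hybrid_mux(flexe_mux([["S1-0", "S1-1"], ["S2-0"]]), ["S3-0", "S3-1"])
    existing, new = hybrid_demux(sent)   # after transport over the FlexE Group
    print(sent)      # ['S1-0', 'S1-1', 'S2-0', 'S3-0', 'S3-1', '+', '+']
    print(existing)  # ['S1-0', 'S1-1', 'S2-0', '+', '+']  -> FlexE DMUX -> peer users
    print(new)       # ['S3-0', 'S3-1']                    -> peer user of service 3
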
Referring to fig. 8, an embodiment of the present invention provides a transmitting end 800, including a first transceiver 801 and a first processor 802;
the first processor 802 is configured to identify an idle code block from first code block traffic corresponding to a first service;
the first processor 802 is further configured to replace the idle code block with a first code block used by a second service;
the first processor 802 is further configured to perform hybrid processing on the second code blocks used by the first service in the first code block traffic and the first code blocks used by the second service to obtain second code block traffic;
the first transceiver 801 is configured to send the second code block traffic obtained through the hybrid processing to a receiving end.
Optionally, the idle code blocks include: code blocks already allocated to the first service but unused by the first service, and/or code blocks reserved for the second service.
Optionally, the second code block traffic includes: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
Optionally, the coding scheme of the third code block is the 64B/66B coding scheme, and the third code block includes mapping information of the idle code blocks, where the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
In the embodiment of the present invention, the sending end replaces the idle code blocks in the first code block traffic corresponding to the first service with the first code blocks used by the second service, mixes the second code blocks used by the first service with the first code blocks used by the second service to obtain the second code block traffic, and sends the second code block traffic to the receiving end. By releasing and reusing bandwidth resources that are allocated but unused, while ensuring that the existing traffic in the time-division system is not affected, the bandwidth utilization of the FlexE system is improved.
Referring to fig. 9, an embodiment of the present invention provides a receiving end 900, including: a second transceiver 901 and a second processor 902;
the second transceiver 901 is configured to receive a second code block traffic from a transmitting end;
the second transceiver 901 is further configured to send a second service using the first code block in the second code block traffic to the corresponding first user;
the second transceiver 901 is further configured to send the first service using the second code block in the second code block traffic to the corresponding second user;
the second code block traffic is obtained by mixing the second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service, and the first code blocks used by the second service reuse the positions of the unused idle code blocks in the first code block traffic.
Optionally, the idle code blocks include: code blocks allocated to the first service but unused, and/or code blocks reserved for the second service.
Optionally, the second code block traffic includes: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
Optionally, the coding scheme of the third code block is the 64B/66B coding scheme, and the third code block includes mapping information of the idle code blocks, where the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
In the embodiment of the present invention, the sending end mixes the second code blocks used by the first service with the first code blocks used by the second service to obtain the second code block traffic and sends it to the receiving end; the receiving end sends the second service that uses the first code blocks and the first service that uses the second code blocks in the second code block traffic to the corresponding first user and second user, respectively. By releasing and reusing bandwidth resources that are allocated but unused, while ensuring that the existing traffic in the time-division system is not affected, the bandwidth utilization of the FlexE system is improved.
Referring to fig. 10, another communication device 1000 is provided in an embodiment of the present invention, including: a processor 1001, a transceiver 1002, a memory 1003, and a bus interface.
Among other things, the processor 1001 may be responsible for managing the bus architecture and general processing. The memory 1003 may store data used by the processor 1001 in performing operations.
In this embodiment of the present invention, the communication device 1000 may further include: a computer program stored on the memory 1003 and executable on the processor 1001, which when executed by the processor 1001, performs the steps of the method provided by an embodiment of the present invention.
In fig. 10, the bus architecture may include any number of interconnected buses and bridges, with one or more processors represented by processor 1001 and various circuits of memory represented by memory 1003 being linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further in connection with embodiments of the present invention. The bus interface provides an interface. The transceiver 1002 may be a number of elements including a transmitter and a receiver that provide a means for communicating with various other apparatus over a transmission medium.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored. When executed by a processor, the computer program implements each process of the foregoing service processing method and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B can be determined. It should also be understood that determining B from a does not mean determining B from a alone, but may be determined from a and/or other information.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network-side device) to perform some of the steps of the methods according to various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (18)

1. A service processing method, applied to a sending end, wherein the method comprises the following steps:
identifying idle code blocks from first code block traffic corresponding to a first service;
replacing the idle code blocks with first code blocks used by a second service;
mixing second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service to obtain second code block traffic; and
sending the second code block traffic obtained by the mixing processing to a receiving end.
2. The method according to claim 1, wherein the idle code blocks comprise: code blocks already allocated to the first service but unused by the first service, and/or code blocks reserved for the second service.
3. The method according to claim 1, wherein the second code block traffic comprises: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
4. The method according to claim 3, wherein the third code block is encoded in the 64B/66B coding scheme, and the third code block comprises mapping information of the idle code blocks, wherein the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
5. A service processing method, applied to a receiving end, wherein the method comprises the following steps:
receiving second code block traffic from a sending end;
sending a second service that uses first code blocks in the second code block traffic to a corresponding first user; and
sending a first service that uses second code blocks in the second code block traffic to a corresponding second user;
wherein the second code block traffic is obtained by mixing the second code blocks used by the first service in first code block traffic with the first code blocks used by the second service, and the first code blocks used by the second service reuse the positions of unused idle code blocks in the first code block traffic.
6. The method according to claim 5, wherein the idle code blocks comprise: code blocks allocated to the first service but unused, and/or code blocks reserved for the second service.
7. The method according to claim 5, wherein the second code block traffic comprises: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
8. The method according to claim 7, wherein the third code block is encoded in the 64B/66B coding scheme, and the third code block comprises mapping information of the idle code blocks, wherein the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
9. A transmitting end, comprising: a first transceiver and a first processor, wherein
the first processor is configured to identify idle code blocks from first code block traffic corresponding to a first service;
the first processor is further configured to replace the idle code blocks with first code blocks used by a second service;
the first processor is further configured to mix second code blocks used by the first service in the first code block traffic with the first code blocks used by the second service to obtain second code block traffic; and
the first transceiver is configured to send the second code block traffic obtained by the mixing processing to a receiving end.
10. The transmitting end according to claim 9, wherein the idle code blocks comprise: code blocks already allocated to the first service but unused by the first service, and/or code blocks reserved for the second service.
11. The transmitting end according to claim 9, wherein the second code block traffic comprises: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
12. The transmitting end according to claim 11, wherein the third code block is encoded in the 64B/66B coding scheme, and the third code block comprises mapping information of the idle code blocks, wherein the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
13. A receiving end, comprising: a second transceiver and a second processor, wherein
the second transceiver is configured to receive second code block traffic from a transmitting end;
the second transceiver is further configured to send a second service that uses first code blocks in the second code block traffic to a corresponding first user; and
the second transceiver is further configured to send a first service that uses second code blocks in the second code block traffic to a corresponding second user;
wherein the second code block traffic is obtained by mixing the second code blocks used by the first service in first code block traffic with the first code blocks used by the second service, and the first code blocks used by the second service reuse the positions of unused idle code blocks in the first code block traffic.
14. The receiving end according to claim 13, wherein the idle code blocks comprise: code blocks allocated to the first service but unused, and/or code blocks reserved for the second service.
15. The receiving end according to claim 13, wherein the second code block traffic comprises: a third code block for indicating the positions of all the idle code blocks in the row where the third code block is located.
16. The receiving end according to claim 15, wherein the coding scheme of the third code block is the 64B/66B coding scheme, and the third code block comprises mapping information of the idle code blocks, wherein the mapping information indicates the positions of all the idle code blocks in the row where the third code block is located.
17. A communication device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the service processing method according to any one of claims 1 to 4, or the steps of the service processing method according to any one of claims 5 to 8.
18. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the service processing method according to any one of claims 1 to 4, or the steps of the service processing method according to any one of claims 5 to 8.
CN201811099522.6A 2018-09-20 2018-09-20 Service processing method and equipment Pending CN110932999A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811099522.6A CN110932999A (en) 2018-09-20 2018-09-20 Service processing method and equipment
PCT/CN2019/106370 WO2020057534A1 (en) 2018-09-20 2019-09-18 Service processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811099522.6A CN110932999A (en) 2018-09-20 2018-09-20 Service processing method and equipment

Publications (1)

Publication Number Publication Date
CN110932999A true CN110932999A (en) 2020-03-27

Family

ID=69856166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811099522.6A Pending CN110932999A (en) 2018-09-20 2018-09-20 Service processing method and equipment

Country Status (2)

Country Link
CN (1) CN110932999A (en)
WO (1) WO2020057534A1 (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9532347B2 (en) * 2007-10-18 2016-12-27 Samsung Electronics Co., Ltd. Method and apparatus for allocating resources in a wireless communication system
JP2018046373A * 2016-09-13 2018-03-22 Fujitsu Limited Transmission equipment and transmission method
CN108242969B * 2016-12-23 2021-04-20 Huawei Technologies Co., Ltd. Transmission rate adjusting method and network equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106788855A * 2015-11-23 2017-05-31 Huawei Technologies Co., Ltd. The optical transfer network bearing method and device of a kind of flexible Ethernet service
CN107566075A * 2016-07-01 2018-01-09 Huawei Technologies Co., Ltd. A kind of method, apparatus and network system for sending and receiving business
CN108092739A * 2016-11-23 2018-05-29 Huawei Technologies Co., Ltd. The transmission method and device of business
CN108347317A * 2017-01-22 2018-07-31 Huawei Technologies Co., Ltd. A kind of transmission method of business, the network equipment and network system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022062930A1 * 2020-09-24 2022-03-31 Huawei Technologies Co., Ltd. Code block stream processing method and device
CN113438185A * 2021-06-24 2021-09-24 New H3C Technologies Co., Ltd. Bandwidth allocation method, device and equipment

Also Published As

Publication number Publication date
WO2020057534A1 (en) 2020-03-26

Similar Documents

Publication Publication Date Title
US10931392B2 (en) Service processing method and apparatus
US11064498B2 (en) Resource allocation method and device in communications system
CN109729588B (en) Service data transmission method and device
EP3285444B1 (en) Method and device for optical transport network to bear flex ethernet service
US11395291B2 (en) Allocating transmission resources in communication networks that provide low latency services
CN111727589B (en) Method and device for configuring Flex Ethernet node
CN109672560B (en) Flexible Ethernet management channel expansion method and device
JP6951361B2 (en) Reference signal transmission methods, devices, and systems
US11546894B2 (en) Uplink control information transmitting method, uplink control information receiving method, terminal, base station and apparatus
CN109041114B (en) Method for sending cache status report and user equipment
WO2018112890A1 (en) Data transmission method, network device and terminal device
JP7298969B2 (en) Communication method and network device
CN109391359A (en) Method, the network equipment and terminal device for data transmission
CN110932999A (en) Service processing method and equipment
US11412508B2 (en) Data transmission method and device
CN108809503B (en) Reference signal pattern transmission method and device
WO2021017888A1 (en) Service data transmission method and communication apparatus
CN115022968A (en) Resource allocation indicating method, resource allocation obtaining method, base station and user terminal
WO2017173881A1 (en) Reference signal transmission method, device and system
CN114697210B (en) Network performance guarantee method and device
US11381334B2 (en) Service signal transmission method and apparatus
CN114828251A (en) Resource allocation method, terminal and network side equipment
CN110505659B (en) Method for negotiating data transmission rate, receiving end and transmitting end
CN112770342A (en) Service data transmission method and device, computer equipment and storage medium
CN110932923B (en) Method and equipment for calculating bandwidth utilization rate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327