CN111342983B - Dynamic member processing method, system, receiving terminal, management entity and storage medium - Google Patents

Dynamic member processing method, system, receiving terminal, management entity and storage medium

Info

Publication number
CN111342983B
Authority
CN
China
Prior art keywords
added
original
flexe
group
members
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811548878.3A
Other languages
Chinese (zh)
Other versions
CN111342983A (en)
Inventor
冯全全
安康
郭樑
孙宁建
王闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd filed Critical Sanechips Technology Co Ltd
Priority to CN201811548878.3A priority Critical patent/CN111342983B/en
Priority to PCT/CN2019/109968 priority patent/WO2020125154A1/en
Publication of CN111342983A publication Critical patent/CN111342983A/en
Application granted granted Critical
Publication of CN111342983B publication Critical patent/CN111342983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/16Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
    • H04J3/1605Fixed allocated frame structures
    • H04J3/1652Optical Transport Network [OTN]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0896Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0893Assignment of logical groups to network elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0073Services, e.g. multimedia, GOS, QOS
    • H04J2203/0082Interaction of SDH with non-ATM protocols
    • H04J2203/0085Support of Ethernet

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Communication Control (AREA)

Abstract

The embodiments of the invention disclose a dynamic member processing method, system, receiving end, management entity and storage medium in flexible Ethernet (FlexE). The method includes the following steps: according to the group to which a member to be added belongs, acquiring the positions of the head overhead bytes (OH) of the cached data respectively corresponding to the member to be added and to the original members in the group, and determining the relative position relationship between the OH position of the member to be added and the OH positions of the original members; and, when the relative position relationship is determined not to meet a set condition, adjusting the read rate of the member to be added or of the original members.

Description

Member dynamic processing method, system, receiving end, management entity and storage medium
Technical Field
The invention relates to the field of data communication, and in particular to a dynamic member processing method, system, receiving end, management entity and storage medium in flexible Ethernet (FlexE).
Background
With the rapid development of the Internet, traditional standard Ethernet transmission can no longer meet the requirements of high-rate service transmission. Flexible Ethernet (FlexE) is a new resilient Ethernet client interface that sits between the media access control sublayer (MAC) and the physical coding sublayer (PCS) and is used to distribute client traffic. The group (Group) is an important concept in FlexE: it can bind 1 to n physical Ethernet port physical layer (PHY) members. The members bound in a Group may carry the same service or different services, and members can be added or deleted according to the needs of network transmission, to adapt to the need for increasing or decreasing service ports.
More than one service port is often carried across the members of a Group, and these services are placed at fixed positions on each PHY member according to the assigned FlexE time slots. In FlexE, a head overhead byte (OH) is inserted to mark this fixed position, and every member in the Group inserts its OH at the same time. The service corresponding to each fixed time slot after the OH is unique and fixed; in application, as long as the position of the OH is found, the service can be recovered from its time slot, that is, the received data can be distinguished by service. In practice, one MAC-layer service is often distributed over several members of a Group and reaches the receiving end over a long-distance transmission. To restore the relationship between services and time slots, the receiving end needs to find the OH positions of the members in the group, from which the mapping between data and services can be marked. Because of factors such as differences in transmission medium material, distance and environment when the physical PHYs transmit, a physical delay (skew) exists between the members of the Group when they arrive at the receiving end. To recover a common OH position, the receiving end must deskew the PHY members in a group (data are put in a buffer and read out together once all the OHs have arrived), removing the delay between different PHY members and ensuring that all OHs occupy the same position at the same time.
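As an illustration of the deskew idea described above, the following minimal Python sketch (not part of the original disclosure; all names are illustrative) buffers each member's blocks and only releases aligned reads once every member's OH has arrived:

```python
from collections import deque

class MemberBuffer:
    """Per-PHY receive buffer: blocks queue up until every member's OH has arrived."""
    def __init__(self):
        self.fifo = deque()
        self.oh_seen = False

    def write(self, block, is_oh):
        # Buffer everything; remember when this member's OH marker has arrived.
        self.fifo.append(block)
        if is_oh:
            self.oh_seen = True

def deskewed_read(members):
    """Read one block per member only after all OH markers are present, so the
    OH of every member is emitted at the same read position (deskew)."""
    if all(m.oh_seen and m.fifo for m in members):
        return [m.fifo.popleft() for m in members]
    return None  # still waiting: faster members simply keep buffering
```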
During network transmission, when transmission services need to be added, or bandwidth needs to be saved by reducing transmission services, PHY members must be added or deleted. When a PHY member is processed dynamically, for example added or deleted, the PHY members in the Group need to be deskewed again, which affects the services being transmitted normally and makes data transmission abnormal.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide a dynamic member processing method, system, receiving end, management entity and storage medium in flexible Ethernet (FlexE), which can complete dynamic processing of members, such as adding and deleting operations, without affecting data services.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
a dynamic member processing method in a flexible Ethernet Flexe comprises the following steps: according to a group to which a member to be added belongs, acquiring positions of head overhead bytes OH of cache data respectively corresponding to the member to be added and an original member in the group, and determining a relative position relationship between the OH position of the member to be added and the OH position of the original member; and when the relative position relation is determined not to meet the set condition, adjusting the reading rate of the member to be added or the original member.
A receiving end includes a processor and a memory for storing a computer program capable of running on the processor; the processor, when running the computer program, executes the dynamic member processing method in flexible Ethernet FlexE applied to the receiving end provided by the embodiments of the present application.
A dynamic member processing method in flexible Ethernet (FlexE) includes the following steps: configuring the FlexE time slot table, and configuring the service of the member to be deleted as a null service according to the position of the head overhead byte (OH) of the member to be deleted; and removing the member to be deleted from the read condition of the cached data of the group to which it belongs.
A flexible Ethernet management entity (FlexE SHIM) includes a processor and a memory for storing a computer program capable of running on the processor; the processor, when running the computer program, executes the dynamic member processing method in flexible Ethernet FlexE applied to the management entity provided by the embodiments of the present application.
A dynamic member processing system in flexible Ethernet (FlexE) includes a flexible Ethernet management entity (FlexE SHIM), a sending end and a receiving end. The FlexE SHIM is used to configure the group to which a member to be added belongs; the sending end is used to insert a head overhead byte (OH) at a set interval according to the position of the service allocated to the member to be added; and the receiving end is configured to execute the dynamic member processing method in flexible Ethernet FlexE applied to the receiving end according to the embodiments of the present application.
A storage medium stores executable instructions which, when executed by a processor, implement the dynamic member processing method in flexible Ethernet FlexE according to any embodiment of the present application.
The dynamic member processing method, system, receiving end, management entity and storage medium in flexible Ethernet FlexE provided by the above embodiments work as follows. When a member is newly added, according to the group to which the member to be added belongs, the positions of the head overhead bytes (OH) of the cached data respectively corresponding to the member to be added and to the original members in the group are acquired, the relative position relationship between the OH position of the member to be added and the OH positions of the original members is determined, and when the relative position relationship does not meet a set condition, the read rate of the member to be added or of the original members is adjusted. When a member is deleted, the FlexE time slot table is configured, the service of the member to be deleted is configured as a null service according to the position of its head overhead byte (OH), and the member to be deleted is removed from the read condition of the cached data of the group to which it belongs.
Drawings
FIG. 1 is a schematic view of an application scenario of a dynamic member processing method in FlexE according to an embodiment of the present invention;
FIG. 2 is a schematic view of an application scenario of a dynamic member processing method in FlexE according to another embodiment of the present invention;
FIG. 3 is a flowchart illustrating a dynamic member processing method in FlexE according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating, in an embodiment, determination that the member to be added leads the original members based on the count values of the read data corresponding to the original members and to the member to be added;
FIG. 5 is a diagram illustrating, in an embodiment, determination that the member to be added lags behind the original members based on the count values of the read data corresponding to the original members and to the member to be added;
FIG. 6 is a flowchart illustrating a dynamic member processing method in FlexE according to another embodiment of the present invention;
FIG. 7 is a flowchart illustrating a dynamic member processing method in FlexE according to an alternative embodiment of the present invention;
FIG. 8 is a flowchart illustrating a dynamic member processing method in FlexE according to another alternative embodiment of the present invention;
FIG. 9 is a flowchart illustrating a dynamic member processing method in FlexE according to yet another alternative embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a receiving end according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a FlexE SHIM according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a dynamic member processing system in FlexE according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In the following description, reference is made to the expression "some embodiments" which describes a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Before further detailed description of the present invention, terms and expressions referred to in the embodiments of the present invention are described, and the terms and expressions referred to in the embodiments of the present invention are applicable to the following explanations.
1) Flexible Ethernet (FlexE): an interface technology by which a bearer network implements service-isolated bearing and network slicing. Through lightweight enhancement of Ethernet, the flexible Ethernet technology adds a flexible Ethernet management entity (FlexE Shim) as an intermediate layer between the Ethernet media access control sublayer (L2, MAC) and the Ethernet port physical layer (L1, PHY). Based on a time-division-multiplexing distribution mechanism, the FlexE Shim schedules and distributes the data of multiple client ports (Clients) to multiple different member link subchannels by time slot. Taking a 100GE pipe as an example, the FlexE Shim can divide it into 20 subchannels at a 5G rate, and each Client interface can designate one or more member link subchannels for its use, thereby achieving service isolation (a slot-allocation sketch follows this list of terms).
2) Head overhead byte (OH): the payload is defined relative to the overhead. Because the network is a hierarchical structure, a certain number of OH bytes are added at each layer as data is passed down from layer to layer, and the original data is called the payload.
For service data to which OH has been added, each piece of service data consists of a header and data: the header is the overhead, and the data that follows is the payload of the service data.
3) Group refers to binding a plurality of interfaces encapsulating the same link layer protocol together through link binding to form a logical data link, i.e. a Group.
Member links within the same group may serve the following roles. Traffic load sharing: ingress/egress traffic can be shared among multiple member interfaces. Increased bandwidth: the bandwidth of the link-binding interface is the sum of the bandwidths of all available member interfaces. Improved connection reliability: when a member interface fails, traffic is automatically switched to other available member interfaces, improving the connection reliability of the whole bound link.
4) The term "member" refers to a link member included in the same group.
5) Deskew: because of factors such as differences in transmission medium material, distance and environment while the group members transmit data, a physical delay (skew) exists when the members of the same group arrive at the receiving end; deskewing is used to remove the delay between different members.
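As a rough illustration of the slot-based distribution described in term 1) above, the following Python sketch models a FlexE-style calendar; the figures (20 slots of 5 Gb/s) follow the 100GE example in the text, while the function and names are hypothetical:

```python
# Hypothetical calendar for one 100GE PHY: 20 slots of 5 Gb/s each (per the example above).
SLOTS_PER_PHY = 20
SLOT_RATE_G = 5

def allocate(slot_table, client_id, bandwidth_g):
    """Assign enough free 5G slots on this PHY to carry `bandwidth_g` for one client."""
    needed = -(-bandwidth_g // SLOT_RATE_G)      # ceiling division
    free = [i for i, c in enumerate(slot_table) if c is None]
    if len(free) < needed:
        raise RuntimeError("no spare bandwidth on this PHY")
    for i in free[:needed]:
        slot_table[i] = client_id
    return slot_table

table = [None] * SLOTS_PER_PHY
allocate(table, "client_A", 25)   # occupies 5 of the 20 slots, i.e. 25G of the 100G pipe
```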
During network transmission, when transmission services need to be added, or bandwidth needs to be saved by reducing transmission services, members in a group need to be added or deleted. Referring to Fig. 1, taking flexible Ethernet FlexE as an example, the FlexE Shim, based on a time-division-multiplexing distribution mechanism, schedules and distributes the data of multiple client ports (Clients) to multiple different members by time slot. Several Phy_members are bound in a FlexE group, for example 4 Phy_members, and these Phy_members jointly carry the service data of several FlexE_Client interfaces, for example 10 of them, with all Phy_members running at full traffic. Because of the transmission requirements of the communication network, one or more FlexE_Clients need to be added to the group, for example a newly added client (New added Client), or the traffic of one or more ports among the FlexE_Clients increases; since all Phy_members run at full traffic and there is no spare bandwidth to carry the newly added client service, a Phy_member can be added to the group at this time, for example in the manner of a newly added member (New added Phy).
Referring to Fig. 2, still taking FlexE as an example, several Phy_members, for example 4, are bound in the FlexE group, and these Phy_members jointly carry the data of several Clients, for example 10. During network transmission, the data traffic of the Clients decreases, or some service ports no longer have data streams to transmit, and a FlexE_Client needs to be deleted, for example by deleting a service port (Delete initial Client); the total traffic decreases, and at this time one or more members of the group can be removed according to the amount by which the traffic has decreased, for example by deleting an original member (Delete initial Phy), and the deleted member can be called by other groups after being newly added to them.
When members are added or deleted as described above, the receiving end needs to deskew the members in the group again, so the services being transmitted normally may be affected.
Referring to fig. 3, a method for dynamically processing members in a flexible ethernet FlexE according to an embodiment of the present application is applicable to a receiving end, and the method includes the following steps:
Step 101: according to the group to which a member to be added belongs, acquiring the positions of the head overhead bytes OH of the cached data respectively corresponding to the member to be added and to the original members in the group, and determining the relative position relationship between the OH position of the member to be added and the OH positions of the original members;
Here, the member to be added refers to a new member that needs to be added to the group according to the transmission requirements of the communication network, such as the member New added Phy shown in Fig. 1. It should be noted that the new member added to the group may be a completely new link member, or a link member of the group that was previously in a failure state and reverts to being a link member of the group after recovery. When a new member is dynamically added in FlexE, the FlexE slot table may first be configured through the FlexE Shim, and the Group to which the member to be added belongs may be configured. According to the FlexE slot table configured by the FlexE Shim and the allocated FlexE time slots, the sending end places the services of the Client interfaces carried by the Group at fixed positions on the members of the Group, and marks each position by inserting a head overhead byte OH. The Group to which the member to be added belongs is configured through the FlexE Shim, and the sending end inserts OH at a set interval according to the position of the service allocated to the member to be added. "Set interval" means that an OH is inserted in the data periodically, for example every 1023 × 20 × 8 data blocks according to the corresponding protocol.
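The periodic OH insertion at the set interval can be pictured with the following hedged sketch; the interval value comes from the 1023 × 20 × 8 example above, and the generator itself is illustrative, not the protocol implementation:

```python
OH_INTERVAL = 1023 * 20 * 8   # data blocks between OH markers, per the interval cited above

def sender_stream(payload_blocks):
    """Yield (is_oh, block) pairs: one OH marker, then OH_INTERVAL payload blocks, repeated."""
    count = 0
    for block in payload_blocks:
        if count == 0:
            yield (True, "OH")       # overhead marker that fixes the slot positions
        yield (False, block)
        count = (count + 1) % OH_INTERVAL
```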
According to the Group to which the member to be added belongs, the receiving end writes the corresponding OH and the subsequent data into the cache when the OH of an original member of the Group arrives; correspondingly, when the OH of the member to be added arrives, the corresponding OH and the subsequent data are written into the cache. The receiving end obtains the positions of the OHs of the cached data respectively corresponding to the member to be added and to the original members of the group, and determines the relative position relationship between the OH position of the member to be added and the OH positions of the original members: it reads the cached data respectively corresponding to the member to be added and to the original members from the cache at an appointed read rate, and determines the relative position relationship according to the OH positions of the corresponding cached data.
Step 103: when the relative position relationship is determined not to meet the set condition, adjusting the read rate of the member to be added or of the original members.
At the sending end, the time at which OH is inserted is the same for every member of the same Group; if there were no path delay (skew) when the members reach the receiving end, the OH of the member to be added and the OHs of the original members would be at the same position at the same time. Here, the set condition means that the OH positions are the same, or that the difference between the relative OH positions is smaller than a preset value. The receiving end determines the relative position relationship between the OH position of the member to be added and the OH positions of the original members of the same Group; when that relationship does not meet the set condition, the data skew of the member to be added is inconsistent with that of the original members, so the rate at which the member to be added or the original members read data from the cache is adjusted, which correspondingly adjusts the skew of the member to be added or of the original members. Adjusting the rate at which the member to be added or the original members read data from the cache realizes a mode of transmitting while waiting, and normal transmission of the original services is not affected.
The dynamic member processing method in FlexE further includes the following step: when the relative position relationship is determined to meet the set condition, the member to be added and the original members achieve data synchronization.
The receiving end determines the relative position relationship between the OH position of the member to be added and the OH positions of the original members of the same Group. When that relationship meets the set condition, the data skew of the member to be added is consistent, or essentially consistent, with that of the original members; it can then be guaranteed that when the next OH arrives, the OH of the member to be added and the OHs of the original members appear simultaneously, the member to be added and the original members achieve data synchronization, and the dynamic addition of the member to the Group succeeds.
In some embodiments, in step 103, when it is determined that the relative position relationship does not meet the set condition, adjusting the read rate of the member to be added or of the original members includes:
when the relative position relationship is determined not to meet the set condition, adjusting the read rate of the member to be added or of the original members, and re-determining the relative position relationship between the OH position of the member to be added and the OH positions of the original members;
and, when the relative position relationship still does not meet the set condition, returning to the step of adjusting the read rate of the member to be added or of the original members, until the relative position relationship meets the set condition.
Here, according to the relative position relationship between the OH position of the member to be added and the OH positions of the original members of the same Group, the receiving end determines that the relationship does not meet the set condition, and may perform one adjustment or several adjustments in the process of adjusting the read rate of the member to be added or of the original members. Taking adjustment of the read rate of the member to be added as an example: the read rate of the member to be added may be adjusted to one half of the current rate, and after the adjustment it is re-determined whether the relative position relationship between the OH position of the member to be added and the OH positions of the original members meets the set condition; when it still does not, the read rate of the member to be added may be adjusted again, once more to one half of the current rate, and the relationship re-checked after the adjustment; this is repeated until the relative position relationship meets the set condition.
It should be noted that, in the process of adjusting the read rate of the member to be added or of the original members, the adjustment may follow a set adjustment policy. The adjustment policy may be to multiply the current read rate by a certain proportional coefficient; or to decrease the current read rate by a certain amount; or to multiply the current read rate by a proportional coefficient that gradually increases or decreases; and so on.
In the above embodiment of the present application, by gradually adjusting the read rate of the member to be added or of the original members, the relative position relationship between the OH position of the member to be added and the OH positions of the original members gradually tends toward the set condition, and data synchronization between the member to be added and the original members is achieved on the premise that continuous transmission of the original services is not affected.
In some embodiments, the adjusting of the read rate of the member to be added or of the original members includes:
when the OH position of the member to be added is determined to lead the OH positions of the original members, reducing the read rate of the member to be added; or, when the OH position of the member to be added is determined to lag behind the OH positions of the original members, reducing the read rate of the original members.
Here, when the receiving end determines that the OH position of the member to be added leads the OH positions of the original members, the OH of the member to be added has appeared early, that is, the data skew of the member to be added is smaller than that of the original members, and the member to be added should buffer more data to wait for the data of the original members; therefore the read rate of the original members is left unchanged and the read rate of the newly added member is reduced, reducing the difference between the OH position of the member to be added and the OH positions of the original members. Correspondingly, when the receiving end determines that the OH position of the member to be added lags behind the OH positions of the original members, the OH of the member to be added appears late, that is, the data skew of the member to be added is larger than that of the original members, and the data skew of the original members needs to be increased; therefore the read rate of the newly added member is left unchanged and the read rate of the original members is reduced, reducing the difference between the OH position of the member to be added and the OH positions of the original members.
Here, reducing the read rate of the member to be added includes: reducing the read rate of the member to be added to one half of the current rate; and reducing the read rate of the original members includes: reducing the read rate of the original members to one half of the current rate.
It should be noted that the reduction of the read rate of the member to be added and of the original members may also be gradual, according to a set adjustment policy. For example, the adjustment policy may use a proportion other than the above-mentioned one half of the current rate each time; or the read rate may be lowered by a certain amount each time; or the read rate may be reduced to the product of the current rate and a proportional coefficient that gradually increases or decreases; and so on. By reducing gradually according to the set adjustment policy, the relative position relationship between the OH position of the member to be added and the OH positions of the original members gradually tends toward the set condition, and data synchronization between the member to be added and the original members is achieved on the premise that continuous transmission of the original services is not affected.
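A toy model of the gradual rate reduction described above is sketched below; it is purely illustrative, abstracting the read rates into a per-step block budget rather than reproducing the actual hardware behaviour:

```python
def align_read_rates(lead_blocks, step=64):
    """Toy convergence loop: while the OH positions differ, the side whose OH
    arrives earlier is read at half rate, so the gap shrinks by step//2 blocks
    per iteration until the two OH positions coincide."""
    gap = lead_blocks            # > 0: the new member's OH leads; < 0: it lags
    iterations = 0
    while gap != 0:
        if gap > 0:
            gap = max(0, gap - step // 2)   # new member read at half rate
        else:
            gap = min(0, gap + step // 2)   # original members read at half rate
        iterations += 1
    return iterations

align_read_rates(1000)   # converges after ceil(1000 / 32) = 32 iterations in this toy model
```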
In some embodiments, the acquiring of the positions of the head overhead bytes OH of the cached data respectively corresponding to the member to be added and to the original members in the group, and the determining of the relative position relationship between the OH position of the member to be added and the OH positions of the original members, include:
reading the cached data corresponding to the member to be added at a set first read rate, performing a first count on the data read, and clearing the first count when the OH position of the data is read;
reading the cached data corresponding to the original members in the group at a set second read rate, performing a second count on the data read, and clearing the second count when the OH position of the data is read;
and determining the relative position relationship between the OH position of the member to be added and the OH positions of the original members according to the first count and the second count.
The receiving end is provided with a first counter and a second counter, corresponding respectively to the member to be added and to the original members. OH appears periodically in the corresponding data: within one period, after the OH position of the cached data corresponding to the member to be added and to the original members, the first counter and the second counter each count the data that has been read. By comparing the difference between the two counts, the relative position relationship between the OH position of the member to be added and the OH positions of the original members can be determined; thus, according to the OH positions of the cached data of the member to be added and of the original members acquired in the same period at different read rates, the change in the difference of that relative position relationship within the period can be determined.
When the receiving end determines that the OH of the member to be added has arrived, it writes the OH into the cache; when that cache is not empty, it reads the cached data corresponding to the member to be added at the set first read rate, performs the first count on the data that is read, and clears the count when an OH is read. The cached data corresponding to the original members of the Group is read from the cache; the condition under which the data may be read out can be that the OHs of all original members have been written into the cache, and the first group of data read out must be the OH identifier. Correspondingly, the receiving end reads the cached data corresponding to the original members of the Group at the set second read rate, where one read pulls one piece of data of each original member from the cache; if the data of one original member is empty, the read is abandoned and waits until the data is not empty. The second count is incremented by 1 each time a group of data is read, and the count is cleared when an OH is read. By comparing the first count with the second count, the receiving end can determine the relative position relationship between the OH position of the member to be added and the OH positions of the original members.
In the above embodiment of the present application, the data after each period's OH position is counted for the cached data corresponding to the member to be added and to the original members, and, using the regularity with which OH appears periodically in the data, the corresponding count is cleared whenever the OH position of the data is read. In this way, during the process of adjusting the read rate of the member to be added or of the original members, the count values of the data after the OH position in each period of the cached data corresponding to the member to be added and to the original members can be obtained, so as to determine the relative position relationship between the OH position of the member to be added and the OH positions of the original members.
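The per-member counters that are cleared at each OH can be modelled, for illustration only, as follows (the class name and structure are assumptions, not the patented implementation):

```python
class OhCounter:
    """Counts blocks read since the last OH; the count is cleared whenever an OH
    is read, so comparing two counters gives the relative OH offset."""
    def __init__(self):
        self.count = 0

    def on_read(self, is_oh):
        if is_oh:
            self.count = 0
        else:
            self.count += 1

new_member_cnt, original_cnt = OhCounter(), OhCounter()
# Feed each counter as blocks are read from the cache, then compare
# new_member_cnt.count with original_cnt.count to judge the relative OH positions.
```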
In some embodiments, the determining of the relative position relationship between the OH position of the member to be added and the OH positions of the original members according to the first count and the second count includes:
when the first count is zero and the second count is greater than a threshold, determining that the OH position of the member to be added leads that of the original members;
and, when the first count is zero and the second count is less than or equal to the threshold, determining that the OH position of the member to be added lags behind that of the original members.
Here, the threshold is determined by the set interval of the OH flag in the data. For example, if an OH flag is inserted every 1023 × 20 × 8 blocks, the threshold may be 1023 × 20 × 8 / 2. Referring to Fig. 4, if the data read out for the member to be added is OH and its count is zero, while the count value of the data read out for the original members is greater than the threshold, for example in the range (1023 × 20 × 8 / 2, 1023 × 20 × 8), this indicates that the OH position of the member to be added leads that of the original members. Referring to Fig. 5, if the data read out for the member to be added is OH and its count is zero, while the count value of the data read out for the original members is smaller than the threshold, for example in the range (0, 1023 × 20 × 8 / 2), this indicates that the OH position of the member to be added lags behind that of the original members. The read rates of the member to be added and of the original members are then adjusted gradually according to this relative position relationship, so as to reduce the difference in data skew between the member to be added and the original members, until they achieve data synchronization.
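For illustration, the lead/lag decision against the half-interval threshold might look like the following sketch; the interval and threshold values repeat the example above, and the function itself is an assumption:

```python
OH_INTERVAL = 1023 * 20 * 8
THRESHOLD = OH_INTERVAL // 2          # half the OH spacing, as suggested above

def relative_position(new_count, orig_count):
    """Sampled at the moment the new member's counter has just been cleared (its OH was read)."""
    if new_count != 0:
        return "sample only at the new member's OH"
    if orig_count == 0:
        return "aligned"                      # OH positions coincide
    if orig_count > THRESHOLD:
        return "new member OH leads"          # slow down the new member's reads
    return "new member OH lags"               # slow down the original members' reads
```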
Referring to fig. 6, a method for dynamically processing members in a flexible ethernet FlexE according to an embodiment of the present application may be applied to a flexible ethernet management entity FlexE SHIM, where the method includes the following steps:
Step 201: configuring the FlexE time slot table, and configuring the service of the member to be deleted as a null service according to the position of the head overhead byte OH of the member to be deleted;
here, the service carried by each member in the Group is determined by the configuration of the slot table, and the slot table is the service data corresponding to each slot allocated from the OH position. The FlexE SHIM configures a FlexE time slot table, configures the service of the member to be deleted as a null service according to the position of the overhead byte OH of the member to be deleted, which may be that the FlexE SHIM configures the service of the member to be deleted, which needs to be deleted, as a null service, does not carry an effective service, and only keeps an OH mark.
Step 203: removing the member to be deleted from the read condition of the cached data of the group to which it belongs.
Here, the FlexE SHIM removes the member to be deleted; this may mean that the FlexE SHIM determines, according to the configuration of the time slot table, which member the member to be deleted specifically corresponds to, and configures the read condition under which the receiving end reads data from the cache so as to exclude the corresponding member, so that the state of the cached data of the member to be deleted is no longer checked. After the FlexE SHIM has deleted the member from the Group, the operation on the member is stopped and it is set as unused, available for addition and invocation by other groups.
In the above embodiment of the present application, when a member is deleted, the FlexE SHIM configures the FlexE time slot table, configures the service of the member to be deleted as a null service according to the position of its head overhead byte OH, and removes the member to be deleted from the read condition of the cached data of the group to which it belongs. In this way the member to be deleted can be configured so that it no longer carries a valid service, and since it no longer carries valid service data it can be discarded directly, which guarantees that the dynamic deletion of the member is completed without affecting the service flow and better meets the requirements of network transmission.
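For illustration, the two deletion steps (null-service configuration and removal from the group read condition) can be sketched as follows; the data structures are hypothetical, chosen only to make the two steps explicit:

```python
def delete_member(slot_table, read_group, member_id):
    """Sketch of the two deletion steps described above (data structures are assumptions):
    (1) remap every slot carried by the member to the null service so it keeps only
        its OH marker and no valid client traffic;
    (2) drop the member from the group's read condition so its cache state is no
        longer checked and its remaining data can simply be discarded."""
    for key in slot_table:
        member, slot = key
        if member == member_id:
            slot_table[key] = None            # null service for this slot
    read_group.discard(member_id)             # stop gating group reads on this member
    return slot_table, read_group

slots = {("phy4", s): "client_B" for s in range(20)}
group = {"phy1", "phy2", "phy3", "phy4"}
delete_member(slots, group, "phy4")
```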
In some embodiments, the dynamic member processing method in FlexE further includes:
configuring the group to which the member to be added belongs, the configuration being used to instruct the sending end to insert OH at the set interval according to the position of the service allocated to the member to be added.
Here, when a member is dynamically added in FlexE, before the receiving end determines the Group to which the member to be added belongs, the Group to which the member to be added belongs is first configured; for example, the FlexE SHIM informs a Group through command configuration that a new member is being added. The group to which the member to be added belongs is configured through the FlexE SHIM, and the sending end inserts the OH identifier at the set interval according to the position of the service allocated to the member to be added. The receiving end determines the group to which the member to be added belongs according to this configuration, and after locking onto the OH identifier in the data corresponding to the member to be added, writes the OH and the subsequent data into the cache, so as to perform the subsequent dynamic-addition steps by acquiring the cached data respectively corresponding to the member to be added and to the original members of the group.
To further understand the implementation flow of the dynamic member processing method in FlexE provided by the embodiments of the present application, refer to Fig. 7, taking dynamic addition as the example of dynamic processing. In FlexE, 4 PHY members are bound in a Group (there may also be more or fewer than 4; 4 is taken as an example), and the 4 PHY members carry 10 client ports (there may also be more or fewer than 10; 10 is taken as an example). All 4 PHY members run at full traffic. Because of the transmission requirements of the communication network, 1 client port needs to be added to this Group (several client ports may also be added), or the traffic of one or more of the 10 client ports increases; since all 4 PHY members run at full traffic and there is no spare bandwidth to carry the new client service, the dynamic processing of PHY members in FlexE can at this point be implemented by dynamically adding a PHY member, including the following steps:
Step S11: the FlexE SHIM configures the group to which the member to be added belongs; for example, the FlexE SHIM informs the Group through command configuration that the member to be added is about to join.
Step S12: the sending end inserts OH marks into the member to be added according to the standard of inserting one OH mark every 1023 × 20 × 8 data blocks.
Step S13: the receiving end detects the OH of the member to be added, stores the OH and the subsequent data into a buffer area once the OH is detected, reads the data from the buffer area and counts the amount of data read; the receiving end likewise detects the OH of the original members, stores the OH and the subsequent data into the buffer area once the OH is detected, reads the data from the buffer area and counts the amount of data read.
Step S14: the receiving end compares the data count of the member to be added read from the buffer area with the data count of the original members to determine the relative position relationship between the OH of the member to be added and the OH of the original members.
If the OH of the member to be added lags behind the OH of the original members, step S15 is executed; if the OH of the member to be added leads the OH of the original members, step S16 is executed; if the OH of the member to be added and the OH of the original members are at the same position, step S17 is executed.
Step S15: reduce the data read rate corresponding to the original members until the OH of the member to be added and the OH of the original members are at the same position, and execute step S17.
Step S16: reduce the data read rate corresponding to the member to be added until the OH of the member to be added and the OH of the original members are at the same position, and execute step S17.
Step S17: the receiving end reads the cached data corresponding to the member to be added and to the original members from the buffer area at the same data read rate.
Step S18: the FlexE SHIM allocates client port services to the member to be added, which carries the added traffic.
Referring to Fig. 8, again taking dynamic addition as the example of dynamic processing, 4 PHY members (there may be more or fewer; 4 is taken as an example) are bound in a Group in FlexE, and the 4 PHY members carry 10 client ports (there may be more or fewer; 10 is taken as an example). During network transmission, a certain PHY member is affected by an interference source in the transmission process, so that its stability is poor, its error rate is high, and it enters a failure state. Under the influence of the interference, however, the data-link skew of the affected PHY member may change, the relative position relationship between its OH and the OHs of the other three members will change, and the OH position within the Group needs to be re-determined. In this re-determination process, the member to be recovered can be added to the Group as a member to be added by dynamically adding a member; the dynamic member processing method in FlexE then includes the following steps:
Step S21: the receiving end determines, according to the link state indication, that the member to be recovered has returned to stability, and determines the group to which the member to be recovered belongs.
Step S22: the receiving end detects the OH of the member to be recovered, stores the OH and the subsequent data into a buffer area once the OH is detected, reads the data from the buffer area and counts the amount of data read; the receiving end likewise detects the OH of the original members, stores the OH and the subsequent data into the buffer area once the OH is detected, reads the data from the buffer area and counts the amount of data read.
Step S23: the receiving end compares the data count of the member to be recovered read from the buffer area with the data count of the original members to determine the relative position relationship between the OH of the member to be recovered and the OH of the original members.
If the OH of the member to be recovered lags behind the OH of the original members, step S25 is executed; if the OH of the member to be recovered leads the OH of the original members, step S26 is executed; if the OH of the member to be recovered and the OH of the original members are at the same position, step S27 is executed.
Step S25: reduce the data read rate corresponding to the original members until the OH of the member to be recovered and the OH of the original members are at the same position, and execute step S27.
Step S26: reduce the data read rate corresponding to the member to be recovered until the OH of the member to be recovered and the OH of the original members are at the same position, and execute step S27.
Step S27: the receiving end reads the cached data corresponding to the member to be recovered and to the original members from the buffer area at the same data read rate.
Step S28: the FlexE SHIM allocates client port services to the member to be recovered, which carries the services it carried before the failure.
In the above embodiment, the member to be recovered acts as the member to be added during the operation of dynamically adding a member, and the recovery of a failed member is completed without affecting the other, normal members of the Group.
Referring to Fig. 9, taking dynamic deletion as the example of dynamic processing, 4 PHY members (there may be more or fewer; 4 is taken as an example) are bound in a Group in FlexE, and the 4 PHY members carry 10 client ports (there may be more or fewer; 10 is taken as an example). During network transmission, the traffic of the 10 client ports decreases, or some client ports no longer transmit data streams, and the total traffic decreases. At this time an appropriate number of members of the Group can be removed according to the amount by which the traffic has decreased, and the removed members can be used by other groups. The removal process is equivalent to deleting a member of the Group, and the OH position within the Group needs to be re-determined; in this process, by dynamically deleting members, the dynamic member processing method in FlexE includes the following steps:
Step S31: the FlexE SHIM configures the service on the member to be deleted as a null service. The services carried by the members of the Group are configured by the time slot table, and the time slot table allocates, starting from the position of the OH, the service data corresponding to each time slot. The service of the member that needs to be deleted is configured as a null service, carrying no valid service and keeping only the OH mark.
Step S32: the FlexE SHIM removes the member to be deleted from the read condition for reading data from the cache. Once the member to be deleted is removed from that read condition, the cache state of the member is no longer checked; the deleted member no longer carries any service, so its data can be discarded directly without affecting the service flow or the other members, and finally the lossless deletion of the PHY member is completed.
According to the dynamic member processing method in FlexE provided by the embodiments of the present application, when client port services are added or deleted during network transmission, members can be dynamically added to or deleted from the Group without affecting the normal operation of the other client ports, so that the bandwidth can be adjusted dynamically and losslessly, improving the flexibility of network transmission.
Referring to Fig. 10, another aspect of the embodiments of the present invention provides a receiving end, including a processor and a memory for storing a computer program capable of running on the processor; the processor, when running the computer program, executes the dynamic member processing method in FlexE applied to the receiving end provided by the embodiments of the present application. For convenience of description, the memory of the receiving end is referred to as the first memory 503 and the processor of the receiving end as the first processor 501. When the first processor 501 runs the computer program, it performs: a dynamic member processing method in flexible Ethernet FlexE, including the following steps: according to the group to which a member to be added belongs, acquiring the positions of the head overhead bytes OH of the cached data respectively corresponding to the member to be added and to the original members in the group, and determining the relative position relationship between the OH position of the member to be added and the OH positions of the original members; and, when the relative position relationship is determined not to meet the set condition, adjusting the read rate of the member to be added or of the original members.
The first processor 501 is configured to, when running the computer program, perform: when the relative position relationship is determined to meet the set condition, the member to be added and the original members achieve data synchronization.
The first processor 501 is configured to, when running the computer program, perform: when it is determined that the relative position relationship does not meet the set condition, adjusting the read rate of the member to be added or of the original members, including: when the relative position relationship is determined not to meet the set condition, adjusting the read rate of the member to be added or of the original members, and re-determining the relative position relationship between the OH position of the member to be added and the OH positions of the original members; and, when the relative position relationship still does not meet the set condition, returning to the step of adjusting the read rate of the member to be added or of the original members, until the relative position relationship meets the set condition.
The first processor 501 is configured to, when running the computer program, perform: the adjusting of the read rate of the member to be added or of the original members includes: when the OH position of the member to be added is determined to lead the OH positions of the original members, reducing the read rate of the member to be added; or, when the OH position of the member to be added is determined to lag behind the OH positions of the original members, reducing the read rate of the original members.
The first processor 501 is configured to, when running the computer program, perform: the reducing of the read rate of the member to be added includes: reducing the read rate of the member to be added to one half of the current rate; the reducing of the read rate of the original members includes: reducing the read rate of the original members to one half of the current rate.
The first processor 501 is configured to, when running the computer program, perform: the acquiring of the positions of the head overhead bytes OH of the cached data respectively corresponding to the member to be added and to the original members in the group, and the determining of the relative position relationship between the OH position of the member to be added and the OH positions of the original members, include: reading the cached data corresponding to the member to be added at a set first read rate, performing a first count on the data read, and clearing the first count when the OH position of the data is read; reading the cached data corresponding to the original members in the group at a set second read rate, performing a second count on the data read, and clearing the second count when the OH position of the data is read; and determining the relative position relationship between the OH position of the member to be added and the OH positions of the original members according to the first count and the second count.
The first processor 501 is configured to, when running the computer program, perform: the determining of the relative position relationship between the OH position of the member to be added and the OH positions of the original members according to the first count and the second count includes: when the first count is zero and the second count is greater than a threshold, determining that the OH position of the member to be added leads that of the original members; and, when the first count is zero and the second count is less than or equal to the threshold, determining that the OH position of the member to be added lags behind that of the original members.
Referring to Fig. 11, another aspect of the embodiments of the present application further provides a FlexE SHIM, which includes a processor and a memory for storing a computer program capable of running on the processor; the processor, when running the computer program, executes the dynamic member processing method applied to the FlexE SHIM provided by the embodiments of the present application. For convenience of distinguishing the description, the memory of the FlexE SHIM is referred to as the second memory 603 and the processor of the FlexE SHIM as the second processor 601. The second processor 601 is configured to, when running the computer program, perform: a dynamic member processing method in flexible Ethernet FlexE, including the following steps: configuring the FlexE time slot table, and configuring the service of the member to be deleted as a null service according to the position of the head overhead byte OH of the member to be deleted; and removing the member to be deleted from the read condition of the cached data of the group to which it belongs.
The second processor 601 is configured to, when running the computer program, perform: configuring the group to which the member to be added belongs, the configuration being used to instruct the sending end to insert OH at the set interval according to the position of the service allocated to the member to be added.
When the receiving end provided in the above embodiments performs dynamic addition of members in the FlexE, and when the FlexE SHIM performs dynamic deletion of members, the steps of the corresponding methods may be allocated to different program modules as required; that is, the internal structure of the virtual device corresponding to the computer program stored in the corresponding memory may be divided into a plurality of program modules to complete all or part of the processing described above. The receiving end, the FlexE SHIM and the corresponding embodiments of the dynamic member processing method in the FlexE belong to the same concept; the specific implementation process is described in detail in the method embodiments and is not repeated here.
Referring to fig. 12, in another aspect, the embodiments of the present application further provide a system for dynamic member processing in the FlexE, including a FlexE SHIM 11, a sending end 12 and a receiving end 13, where the FlexE SHIM 11 is configured to configure a group to which a member to be added belongs; the sending end 12 is configured to insert a head overhead byte OH at a set interval according to the position of the service allocated to the member to be added; and the receiving end 13 is configured to execute the dynamic member processing method in the FlexE applied to the receiving end according to the embodiments of the present application, and may be the receiving end shown in fig. 10.
The FlexE SHIM 11 is further configured to configure the FlexE time slot table, configure the service of the member to be deleted as a null service according to the position of the head overhead byte OH of the member to be deleted, and exclude the member to be deleted when reading the cache data of the group to which the member to be deleted belongs.
The FlexE SHIM 11 is further configured to allocate a service port to the member to be added when the member to be added and the original member in the group to which it belongs achieve data synchronization.
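The division of labour among the three components can be summarized with the following sketch; the classes Shim, Sender and Receiver and their methods are stand-ins assumed here, not interfaces defined by the application, and the loop simply illustrates that read-rate adjustment repeats until the OH positions align, after which the service port is allocated.

```python
# End-to-end sketch of the add flow of fig. 12 using stand-in components; all
# names and method signatures are assumptions that only show the ordering of roles.

class Shim:
    def configure_group(self, member_id, slots):
        return {"member": member_id, "slots": slots}

    def allocate_service_port(self, member_id):
        print(f"service port allocated to member {member_id}")


class Sender:
    def insert_oh(self, cfg):
        print(f"inserting OH for member {cfg['member']} at its service position")


class Receiver:
    def __init__(self):
        self._tries = 0

    def aligned(self, member_id):
        self._tries += 1
        return self._tries > 3  # pretend alignment succeeds after a few adjustments

    def adjust_read_rate(self, member_id):
        print(f"halving a read rate to realign OH of member {member_id}")


def add_member_flow(shim, sender, receiver, member_id, slots):
    cfg = shim.configure_group(member_id, slots)   # FlexE SHIM 11
    sender.insert_oh(cfg)                          # sending end 12
    while not receiver.aligned(member_id):         # receiving end 13 aligns OH positions
        receiver.adjust_read_rate(member_id)
    shim.allocate_service_port(member_id)          # after data synchronization


add_member_flow(Shim(), Sender(), Receiver(), member_id=5, slots=[12, 13])
```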
In another aspect, the embodiments of the present application further provide a storage medium, for example a memory storing a computer program, where the computer program is executable by a processor to perform the steps of the dynamic member processing method in the FlexE provided in any embodiment of the present application. The computer storage medium may be an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or may be any device including one or any combination of the above memories.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall fall within the scope of the present invention. The scope of the invention shall be determined by the appended claims.

Claims (15)

1. A dynamic member processing method in a flexible Ethernet FlexE, characterized by comprising the following steps:
according to a group to which a member to be added belongs, acquiring positions of head overhead bytes OH of cache data respectively corresponding to the member to be added and an original member in the group, and determining a relative position relationship between the OH position of the member to be added and the OH position of the original member;
and when it is determined that the relative position relationship does not meet a set condition, adjusting the reading rate of the member to be added or the original member.
2. The method of claim 1, further comprising:
and when it is determined that the relative position relationship meets the set condition, the member to be added and the original member achieve data synchronization.
3. The method of claim 1, wherein, when it is determined that the relative position relationship does not meet the set condition, adjusting the reading rate of the member to be added or the original member comprises:
when it is determined that the relative position relationship does not meet the set condition, adjusting the reading rate of the member to be added or the original member, and re-determining the relative position relationship between the OH position of the member to be added and the OH position of the original member;
and when the relative position relationship still does not meet the set condition, returning to the step of adjusting the reading rate of the member to be added or the original member, until the relative position relationship meets the set condition.
4. The method of claim 1, wherein the adjusting the reading rate of the member to be added or the original member comprises:
when it is determined that the OH position of the member to be added leads the OH position of the original member, reducing the reading rate of the member to be added; or when it is determined that the OH position of the member to be added lags behind the OH position of the original member, reducing the reading rate of the original member.
5. The method of claim 4, wherein the reducing the reading rate of the member to be added comprises: reducing the reading rate of the member to be added to one half of its current rate;
and the reducing the reading rate of the original member comprises: reducing the reading rate of the original member to one half of its current rate.
6. The method of claim 1, wherein the obtaining the positions of the head overhead bytes OH of the cache data respectively corresponding to the member to be added and the original member in the group, and determining the relative position relationship between the OH position of the member to be added and the OH position of the original member, comprises:
reading the cache data corresponding to the member to be added at a set first reading rate, keeping a first count of the read data, and resetting the first count when the OH position of the data is read;
reading the cache data corresponding to the original member in the group at a set second reading rate, keeping a second count of the read data, and resetting the second count when the OH position of the data is read;
and determining the relative position relationship between the OH position of the member to be added and the OH position of the original member according to the first count and the second count.
7. The method of claim 6, wherein the determining the relative position relationship between the OH position of the member to be added and the OH position of the original member according to the first count and the second count comprises:
when the first count is zero and the second count is greater than a threshold, determining that the OH position of the member to be added leads that of the original member;
and when the first count is zero and the second count is less than or equal to the threshold, determining that the OH position of the member to be added lags behind that of the original member.
8. A receiving end comprising a processor and a memory for storing a computer program capable of running on the processor; wherein,
the processor is configured to, when running the computer program, execute the dynamic member processing method in the flexible Ethernet FlexE according to any one of claims 1 to 7.
9. A dynamic member processing method in a flexible Ethernet FlexE, characterized by comprising the following steps:
configuring a FlexE time slot table, and configuring the service of a member to be deleted as a null service according to the position of a head overhead byte OH of the member to be deleted;
and excluding the member to be deleted when reading the cache data of the group to which the member to be deleted belongs.
10. The method of claim 9, further comprising:
and configuring a group to which a member to be added belongs, where the configuration is used to instruct a sending end to insert OH at a set interval according to the position of the service allocated to the member to be added.
11. A flexible Ethernet management entity, FlexE SHIM, comprising a processor and a memory for storing a computer program capable of running on the processor; wherein,
the processor is configured to, when running the computer program, execute the dynamic member processing method in the flexible Ethernet FlexE according to claim 9 or 10.
12. A dynamic member processing system in a flexible Ethernet FlexE, comprising a flexible Ethernet management entity FlexE SHIM, a sending end and a receiving end, characterized in that:
the FlexE SHIM is configured to configure a group to which a member to be added belongs;
the sending end is configured to insert a head overhead byte OH at a set interval according to the position of the service allocated to the member to be added;
and the receiving end is configured to execute the dynamic member processing method in the flexible Ethernet FlexE according to any one of claims 1 to 7.
13. The system according to claim 12, wherein the FlexE SHIM is further configured to configure the FlexE time slot table, configure the service of a member to be deleted as a null service according to the position of the head overhead byte OH of the member to be deleted, and exclude the member to be deleted when reading the cache data of the group to which the member to be deleted belongs.
14. The system according to claim 12, wherein the FlexE SHIM is further configured to allocate a service port to the member to be added when the member to be added and the original member in the group to which it belongs achieve data synchronization.
15. A storage medium, characterized in that the storage medium stores executable instructions which, when executed by a processor, implement the dynamic member processing method in the flexible Ethernet FlexE according to any one of claims 1 to 7, or implement the dynamic member processing method in the flexible Ethernet FlexE according to claim 9 or 10.
CN201811548878.3A 2018-12-18 2018-12-18 Dynamic member processing method, system, receiving terminal, management entity and storage medium Active CN111342983B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811548878.3A CN111342983B (en) 2018-12-18 2018-12-18 Dynamic member processing method, system, receiving terminal, management entity and storage medium
PCT/CN2019/109968 WO2020125154A1 (en) 2018-12-18 2019-10-08 Member dynamic processing method, system, receiving end, management entity, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811548878.3A CN111342983B (en) 2018-12-18 2018-12-18 Dynamic member processing method, system, receiving terminal, management entity and storage medium

Publications (2)

Publication Number Publication Date
CN111342983A CN111342983A (en) 2020-06-26
CN111342983B (en) 2022-07-12

Family

ID=71102922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811548878.3A Active CN111342983B (en) 2018-12-18 2018-12-18 Dynamic member processing method, system, receiving terminal, management entity and storage medium

Country Status (2)

Country Link
CN (1) CN111342983B (en)
WO (1) WO2020125154A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726733B (en) * 2022-03-16 2024-04-26 轻舟微电子(杭州)有限公司 Method for realizing lossless adding and deleting FlexE members based on ram

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6163741B2 (en) * 2012-11-30 2017-07-19 ヤマハ株式会社 Method and apparatus for determining buffer size of network reception buffer and network session system
US9800361B2 (en) * 2015-06-30 2017-10-24 Ciena Corporation Flexible ethernet switching systems and methods
CN107204941A (en) * 2016-03-18 2017-09-26 中兴通讯股份有限公司 The method and apparatus that a kind of flexible Ethernet path is set up
CN112165427A (en) * 2016-12-02 2021-01-01 华为技术有限公司 Protection switching method, network equipment and system
CN106850465B (en) * 2016-12-27 2019-10-25 深圳市海思半导体有限公司 A kind of Flex E method for interchanging data and switching equipment
CN108631908B (en) * 2017-03-21 2022-03-01 中兴通讯股份有限公司 Method for carrying signal frame by using Flexe, method and device for restoring Flexe signal frame
CN108988977B (en) * 2017-05-31 2021-06-08 中兴通讯股份有限公司 Method, device and system for transmitting service flow in flexible Ethernet protocol

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106612203A (en) * 2015-10-27 2017-05-03 中兴通讯股份有限公司 Method and apparatus for processing data flow of flexible Ethernet client
WO2018059446A1 (en) * 2016-09-29 2018-04-05 中兴通讯股份有限公司 Service data transmission method, device and system, and storage medium
CN108809674A (en) * 2017-04-28 2018-11-13 华为技术有限公司 The method and apparatus of configuration link group

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张源斌 et al., "Progress of 5G Transport Standards", 《中兴通讯技术》, 2018, No. 01. *

Also Published As

Publication number Publication date
WO2020125154A1 (en) 2020-06-25
CN111342983A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
US11469844B2 (en) Method and apparatus for transmitting service flow based on flexible ethernet FlexE
CN110650002B Method for adjusting PHY in FlexE group, related equipment and storage medium
KR102339054B1 (en) Flexible ethernet-based service flow transmission method and apparatus, and communication system
US11552721B2 (en) Clock synchronization method and apparatus
CN113300810B (en) Transmission rate adjusting method and network equipment
US11271668B2 (en) Data transmission methods, apparatuses, devices, and system
WO2019042124A1 (en) Communication method and device, and storage medium
WO2022017509A1 (en) Bandwidth adjustment method, service transmission method, network device and readable storage medium
US11838181B2 (en) Flexible ethernet group management method, device, and computer-readable storage medium
US20230388984A1 (en) Communication Method and Device, and Chip System
CN111342983B (en) Dynamic member processing method, system, receiving terminal, management entity and storage medium
CN102377678B (en) Data transmission and processing method and device
JP7027449B2 (en) Methods, equipment and systems for transmitting traffic in the Flex Ethernet protocol
WO2011026508A1 (en) Method for distributing data traffic over radio links
CN116057865B (en) Method and equipment for adjusting physical interface in flexible Ethernet group
EP2798790B1 (en) Compression method for tdm frames in a packet network
WO2023005738A1 (en) Flexible ethernet time slot negotiation method, receive end, transmit end, and storage medium
US9350563B2 (en) Procedure, apparatus, and computer program for reducing a probability of fragmentation when supporting virtual concatenation (VCAT) services

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant