WO2014013303A1 - Network service testing - Google Patents

Network service testing

Info

Publication number
WO2014013303A1
WO2014013303A1 (application PCT/IB2013/001237)
Authority
WO
WIPO (PCT)
Prior art date
Application number
PCT/IB2013/001237
Other languages
French (fr)
Inventor
Xiaohua Ma
Original Assignee
Alcatel Lucent
Priority date
Filing date
Publication date
Application filed by Alcatel Lucent
Publication of WO2014013303A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/106: Active monitoring, e.g. heartbeat, ping or trace-route, using time related information in packets, e.g. by adding timestamps
    • H04L 43/50: Testing arrangements
    • H04L 43/55: Testing of service level quality, e.g. simulating service usage
    • H04L 41/5016: Determining service level performance parameters or violations of service level contracts; determining service availability based on statistics of service availability, e.g. in percentage or over a given time
    • H04L 43/0835: One way packet loss
    • H04L 43/0858: One way delays
    • H04L 43/0894: Packet rate

Definitions

  • the first network element and the second network element synchronize time and frequency with each other by using their own synchronization mechanism, which can make use of IEEE 1588 or SyncE, etc., to achieve accurate one-way performance test results.
  • first and second are used for only differentiating the test traffic originator and the test traffic receiver, while in the practical implementation, a single network element can be the test traffic originator for one test and also the test traffic receiver for another test.
  • the first network element generates test traffic, and provides it for the QoS processing, before sending it to the second network element.
  • the first network element comprises a test generator for this operation.
  • the test generator generates test flows to simulate the user traffic encapsulated in various transport layers (e.g. Ethernet or IP layer networks) and injects the test flow into the ingress of UNI-Ns.
  • test generator comprises:
  • scheduler 12 for scheduling the one or more units to produce test frames of the flow
  • Tx timestamp updater 14 for adding Tx timestamp into the produced test frames .
  • each unit 10 comprises:
  • a shaper 100 for maintaining tokens based on the test committed information rate of the flow
  • a test frame generator 102 for producing the test frames of the flow.
  • each set of test frame generator[i] 102 and shaper[i] 100 corresponds to one test flow i.
  • the challenge in designing Y.1564 test generators is how to keep each test flow generated evenly and strictly at the required information rate when multiple test flows are simultaneously generated and input to the same UNI.
  • the proposed design is a simple way to implement multiple test generators meeting the above requirements.
  • the shaper 100 is a token bucket generating tokens and controlling the generated test flow in conformance with the configured information rate.
  • the scheduler 12 executes a scheduling algorithm (e.g. WRR, WFQ) to select one flow, and inquires of the shaper 100 corresponding to this flow whether it has enough tokens.
  • if the shaper 100 informs the scheduler 12 that it has enough tokens to allow the next generated test frame to pass, then the scheduler 12 controls the test frame generator to generate a test frame and put it onto the line.
  • the test frame generator 102 simply generates test frames of the configured frame size under the control of the scheduler 12, and does not need a complex algorithm to compute a timer for generating each frame.
  • the Tx TimeStamp updater 14 adds the Tx TimeStamp into the test frames at line rate, as close as possible to the instant the test frames are actually sent to the UNI.
  • the QoS processing is carried out to the test frames.
  • the QoS processing comprises policing (CIR/CBS/PIR/PBS) -> congestion control -> queuing -> scheduling -> shaping.
  • test frames are then transmitted on the network to the second network element.
  • the second network element receives the test traffic sent by the first network element, and carries out QoS processing on the received traffic.
  • the QoS processing comprises the following operations:
  • the test traffic is detected to obtain the result of the test.
  • the second network element comprises a test checker for detecting test traffics, and the test checker comprises:
  • classifier 22 for classifying the received test frames into one or more flows of test traffics
  • each unit 24 comprises:
  • each set of Rx counter [i] 240 and computing unit[i] 242 corresponds to one test flow.
  • the Rx TimeStamp updater 20 is placed at the ingress to add Rx TimeStamp at line rate before this packet is parsed.
  • the classifier 22 identifies each test flow from all egress UNI traffic based on user-configured test flow definitions (e.g. MAC DA, MAC SA, VLAN ID, IP DSCP, etc) .
  • the Rx Counter 240 counts the number of received test frames per test flow.
  • the computing unit 242 calculates the test result per test flow based on Tx Time Stamp, Rx Time Stamp, Rx Counter, and the user configured test parameters per test flow (e.g. test flow IR, test duration), where the test result computation method is proposed and detailed as follows.
  • TimeOut is the time to wait for the last valid test frame before stopping the test for a specific Test ID (test flow).
  • the information rate is calculated according to the length of the received test frames and the duration.
  • Average information rate of the received test traffic over the Test Duration, starting from the instant the first test frame is received.
  • the frame loss rate is calculated according to the number of the received test frames and the number of transmitted test frames. Wherein, the number of transmitted test frames can be informed by the NMS in the test configuration.
  • the total FLR is the ratio of total lost Ethernet frame outcomes to total transmitted Ethernet frames.
  • FTD: Frame Transfer Delay, computed per frame from the Tx timestamp and the Rx timestamp.
  • Max_FDV = max | FTD(i+1) - FTD(i) |
  • Min_FDV = min | FTD(i+1) - FTD(i) |
  • AVAIL is the percentage of the service time, and it is calculated according to the test duration starting from the time when the first test frame is received and the unavailable duration in which the frame loss rate is above a threshold for a successive period.
  • an SES_ETH (severely errored second) outcome occurs for a block of frames observed during a one-second interval when the corresponding FLR (i.e. the ratio of lost frames to total frames in the block) exceeds a threshold s1.
  • a provisioned value s1 of 0.5 is used, and different values may also be chosen depending on the class of service (CoS).
  • when successive SES_ETH outcomes occur, the link is considered as being unavailable, and the unavailable duration starts from the first SES_ETH of the successive SES_ETHs.
  • the unavailable duration ends before the first non-SES_ETH that is followed by a certain number (e.g. 10) of successive non-SES_ETHs.
  • AVAIL is the percentage of total scheduled Ethernet Service time that is categorized as available.
  • Test Duration (starting from T_first_received_frame)
  • Since the minimum Ethernet frame size is 64 bytes, and the test frame should be padded to simulate Ethernet frames of any size, the test frame (without padding) preferably does not exceed 64 bytes. It is therefore better to simplify the test frame format and shorten the test frame length. Some test information, such as the test duration, test pattern and test Tx information rate, is specific to each test flow and can be supplied by the NMS in the test configuration, so only the test ID identifying the test flow is included in the test frame to represent all this information. Besides, from the above computation formulas, we can see that the test frame size, Tx TimeStamp and Sequence Number (for detecting out-of-order delivery) are specific to each test frame and should be carried in each frame.
  • test frame format is shown in figure 4.
  • DA: destination address
  • SA: source address
  • FCS: frame check sequence. The total test frame size is therefore less than 64 bytes, with or without an IP header.
  • the following table compares the proposed solution with three existing solutions (one-way test on a test instrument, which is the best existing solution; round-trip/loopback with EVC loopback on a one-end NE; and 802.1ag LB/Y.1731 TST). Based on this comparison, the method proposed in the invention is a simple and low-cost solution which can meet the Y.1564 requirements with accuracy approximating that of a test instrument.
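The availability computation described in the bullets above can be sketched as follows. This is an assumed implementation, not the patent's own code; it uses the provisioned threshold s1 = 0.5 and the example run length of 10 consecutive seconds mentioned in the text, and the function name is illustrative:

```python
def availability(per_second_flr, s1=0.5, run_len=10):
    """Percentage of service time categorized as available.

    A second is an SES (severely errored second) when its FLR exceeds s1.
    Unavailability starts at the first SES of `run_len` consecutive SES
    seconds, and ends before the first non-SES of `run_len` consecutive
    non-SES seconds.
    """
    ses = [flr > s1 for flr in per_second_flr]
    unavailable = 0
    in_unavail = False
    for i in range(len(ses)):
        run = ses[i:i + run_len]
        if len(run) == run_len:
            if not in_unavail and all(run):
                in_unavail = True      # unavailable period begins here
            elif in_unavail and not any(run):
                in_unavail = False     # available again from this second
        if in_unavail:
            unavailable += 1
    return 100.0 * (len(ses) - unavailable) / len(ses)

# 10 clean seconds, 10 seconds of total loss, 10 clean seconds:
flrs = [0.0] * 10 + [1.0] * 10 + [0.0] * 10
av = availability(flrs)
assert round(av, 2) == 66.67
```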
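For illustration, the per-frame fields named above (test ID, sequence number, Tx timestamp) could be packed as in this sketch; the field widths are assumptions made here, not the patent's wire format:

```python
import struct

# test_id (4 bytes), sequence number (4 bytes), Tx timestamp in ns (8 bytes);
# the widths are illustrative assumptions, not the patent's wire format.
TEST_PDU = struct.Struct("!IIQ")

def build_test_pdu(test_id: int, seq_no: int, tx_ts_ns: int) -> bytes:
    return TEST_PDU.pack(test_id, seq_no, tx_ts_ns)

pdu = build_test_pdu(7, 42, 1_000_000_000)
# With a 14-byte Ethernet header (DA/SA/type) and a 4-byte FCS around it,
# the frame stays well under the 64-byte target even before padding.
assert len(pdu) == 16
assert TEST_PDU.unpack(pdu) == (7, 42, 1_000_000_000)
```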

Abstract

Current network test methods based on OAM are unable to verify the Service Configuration (CIR/CBS/PIR/PBS and QoS priority), and current external test instruments are costly. The invention proposes methods in network elements for executing tests, and corresponding network elements. The first network element generates test traffic and provides it for QoS processing before sending it to a second network element for executing tests; the generating step comprises: scheduling one or more flows of test traffic; producing test frames for the scheduled flow; and adding a Tx timestamp into the produced test frames. The second network element detects the test traffic to which the QoS processing has been carried out.

Description

Methods in network elements for executing tests and
corresponding network elements
Technical field
The disclosure relates to the field of network tests.
Background art
With more and more operators paying attention to qualifying the network before turning up a service, service activation testing is becoming an important and hot technology. During past years, IETF RFC 2544 has been the most widely used Ethernet service test methodology because it was the only existing standard in this area. But RFC 2544 does not include all required measurements, such as packet jitter, QoS measurement and multiple concurrent service levels. To resolve these issues, ITU-T has introduced Y.1564, which can simulate all types of services that will run on the network and simultaneously qualify all key SLA parameters for each of these services, and can also validate the QoS mechanisms provisioned in the network to prioritize the different service types, resulting in more accurate validation and much faster deployment and troubleshooting. Now more and more service providers are interested in simplifying the service activation procedure by using internal CPE test functions, instead of relying on high-cost external test instruments. Although several CPE vendors have implemented RFC 2544 in their NEs, until now no CPE vendor has implemented the new standard Y.1564. Therefore, Y.1564 built into the CPE or other NEs will become a key differentiator in the Carrier Ethernet market.
Summary of the invention
The following are some known methods:
1. Y.1731 TST.
2. 802.1ag LB.
3. JDSU proprietary Loopback protocol.
4. EXFO Y.1564.
Y.1731 TST can do a one-way Service Performance test, but is unable to verify the Service Configuration (CIR/CBS/PIR/PBS and QoS priority) or to simulate IP packets or other upper-layer packets.
802.1ag LB has the same problem as Y.1731 TST, and can perform only a round-trip test but not a one-way test. In case the transport path has asymmetric performance, unidirectional performance cannot be derived accurately from the round-trip test result.
The JDSU proprietary Loopback protocol with EVC loopback on a remote NE is able to test only round-trip parameters and to simulate only Layer 2 frames.
EXFO Y.1564 is the best existing solution, but it is a very expensive test instrument, and requires an additional synchronization mechanism, out-of-band signaling, and a complex algorithm.
Before transmitting the user traffic, the NE carries out QoS processing on the incoming user traffic, comprising policing (CIR/CBS/PIR/PBS) -> congestion control -> queuing -> scheduling -> shaping. Some of the present test methods, such as the 802.1ag LB/Y.1731 tests, are based on Ethernet OAM (Operations, Administration and Maintenance) functions. The OAM functions are used for inserting or extracting OAM packets within the QoS processing in the NE, namely after the policing (CIR/CBS/PIR/PBS) processing and before the congestion control processing.
It can be seen that, since the OAM functions are located after policing and before congestion control, the OAM packets lack the policing processing and are not exactly the same as the real user traffic. This is the reason why the OAM-based tests are unable to verify the Service Configuration (CIR/CBS/PIR/PBS and QoS priority). Y.1564, in contrast, has the purpose of validating the Service Configuration, including traffic classification, CIR/PIR/CBS/PBS and QoS parameters such as the priority level. Therefore a new test method is needed to meet the requirements of Y.1564.
According to one aspect of the invention, concerning the transmitting party of the test traffic, a method is provided in a first network element for executing tests, comprising the steps of:
- generating test traffic, and providing it for the QoS processing, before sending it to a second network element for executing tests;
Wherein, the generating step comprises:
- scheduling one or more flows of test traffics;
- producing test frames for the scheduled flow;
- adding Tx timestamp into the produced test frames.
According to this aspect, the test traffic is injected before the QoS processing and is thus identical to the real user traffic; therefore the Service Configuration (CIR/CBS/PIR/PBS and QoS priority) can be tested, and all Y.1564 requirements can be covered. Besides, the solution is carried out in the network element (NE) without using external test equipment, and the operators do not need to send test engineers to sites or use high-cost test instruments.
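The injection-point difference can be sketched with a toy model of the QoS pipeline; the stage order comes from the description, while the function and constant names are illustrative assumptions:

```python
# Stage order as described: policing -> congestion control -> queuing ->
# scheduling -> shaping.
QOS_PIPELINE = ["policing", "congestion_control", "queuing", "scheduling", "shaping"]

def stages_applied(injection_point: str) -> list:
    """QoS stages a frame passes through, depending on where it is injected."""
    if injection_point == "before_qos":       # proposed method: same path as user traffic
        return list(QOS_PIPELINE)
    if injection_point == "after_policing":   # OAM-based tests (802.1ag LB / Y.1731)
        return QOS_PIPELINE[1:]               # policing is skipped
    raise ValueError(injection_point)

# OAM frames miss policing, so CIR/CBS/PIR/PBS configuration cannot be
# verified with them, while frames injected before QoS processing can be:
assert "policing" not in stages_applied("after_policing")
assert stages_applied("before_qos") == QOS_PIPELINE
```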
In a preferred embodiment, the method further comprises the step before the generating step:
- receiving test configuration from a network management system;
Said generating step generates the test frames according to the test configuration;
the QoS processing comprises the following operations:
policing, congestion control, queuing, scheduling and shaping.
In this embodiment, because the NE is managed by the network management system (NMS), which knows all the test configuration done by the operator, the NMS notifies the configuration conveniently, and the two testing NEs do not need an additional control protocol to exchange the configuration. The operator only needs to click on the NMS GUI to accomplish the test.
In a preferred embodiment, the test configuration comprises any of the following:
- number of services to be tested and corresponding test ID;
- test steps and duration for each step;
- size of the test frame;
- committed information rate;
- pattern of the test frame;
- Ethernet frame header;
the generating step further adds any of the following information into the produced test frames:
- Ethernet frame header;
- test ID identifying the flow of the test traffic;
- sequence number of the test traffic in the flow;
- wherein the length of the test frame is no more than 64 bytes.
The embodiment provides the frame format for the test frame.
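For illustration, the configuration items listed above could be carried in a single record per test flow; the field names below are assumptions made for this sketch, not terms defined by the patent:

```python
from dataclasses import dataclass

# Hypothetical container for the NMS-supplied test configuration per flow;
# the field names are illustrative, not taken from the patent.
@dataclass
class TestFlowConfig:
    test_id: int            # identifies the test flow
    steps: int              # number of test steps
    step_duration_s: float  # duration of each step, in seconds
    frame_size: int         # test frame size in bytes (no more than 64)
    cir_bps: int            # committed information rate, bits per second
    pattern: bytes          # payload pattern of the test frame
    eth_header: bytes       # Ethernet frame header to prepend

cfg = TestFlowConfig(test_id=1, steps=4, step_duration_s=60.0,
                     frame_size=64, cir_bps=10_000_000,
                     pattern=b"\x00", eth_header=b"\x00" * 14)
assert cfg.frame_size <= 64
```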
In a preferred embodiment, the method further comprises the step before the generating step:
- synchronizing the time and frequency with respect to the second network element by using their own synchronization mechanism.
In this embodiment, the test function can make use of the NE's own synchronization mechanism (IEEE 1588, SyncE, etc.) to achieve accurate one-way performance test results.
In a preferred embodiment, the generating step generates the test frames according to the committed information rate of the flow, wherein,
the one or more flows are provided with tokens based on the test committed information rate of the flow,
the scheduling step selects one flow from the one or more flows according to scheduling algorithms, and determines whether the selected flow has enough tokens:
- if it has, the test frames of the selected flow are produced;
- otherwise, the scheduling step selects the next flow according to the scheduling algorithms.
In the art, external test instruments have very precise clocks and plenty of accurate timers to generate the test frames. The timers are expensive. The embodiment of the invention uses a general QoS scheduling mechanism to remove the expensive timers. In this way, the design can achieve high accuracy and low cost.
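The timer-free generation scheme can be sketched as follows. This is a simplified, assumed implementation: each flow's shaper is a token bucket refilled at the flow's committed rate, and a plain round-robin loop (standing in for the WRR/WFQ scheduler named in the text) produces a frame only when the shaper has enough tokens:

```python
class Shaper:
    """Token bucket refilled at the flow's committed information rate."""
    def __init__(self, cir_bytes_per_s: float, bucket_bytes: int):
        self.rate = cir_bytes_per_s
        self.bucket = bucket_bytes
        self.tokens = 0.0
        self.last = 0.0

    def take(self, now: float, frame_len: int) -> bool:
        """Refill for the elapsed time, then consume frame_len tokens if available."""
        self.tokens = min(self.bucket, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_len:
            self.tokens -= frame_len
            return True
        return False

def generate(flows, duration_s: float, frame_len: int = 64, tick_s: float = 1e-4):
    """Round-robin over flows; returns frames produced per flow, with no per-frame timers."""
    sent = {fid: 0 for fid in flows}
    now = 0.0
    while now < duration_s:
        for fid, shaper in flows.items():    # simple round-robin scheduling
            if shaper.take(now, frame_len):  # enough tokens -> produce a test frame
                sent[fid] += 1
        now += tick_s
    return sent

# Flow 1 at 64 kB/s and flow 2 at 32 kB/s of 64-byte frames over one second:
sent = generate({1: Shaper(64_000, 1500), 2: Shaper(32_000, 1500)}, duration_s=1.0)
assert 900 <= sent[1] <= 1100 and 400 <= sent[2] <= 600
```

Each flow's rate is enforced purely by token arithmetic against the advancing clock, which is the point of the design: no dedicated timer per frame.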
According to another aspect of the invention, concerning the receiving party of the test traffic, a method is provided in a second network element for executing tests, comprising the steps of:
- receiving test traffics sent by a first network element for executing tests;
- detecting the test traffic to which the QoS processing has been carried out;
Wherein, the detecting step comprises:
- adding Rx timestamp to the received test frames;
- classifying the received test frames into one or more flows of test traffics;
- calculating the result of the test based on the received test frames, respectively for each flow.
According to this aspect, the test traffic which has been transported across the network is trapped out after the QoS processing in the receiving party; thus the test traffic is identical to the real user traffic, and therefore the Service Configuration (CIR/CBS/PIR/PBS and QoS priority) can be tested, and all Y.1564 requirements can be covered. Besides, the solution is carried out in the network element (NE) without using external test equipment, and the operators do not need to send test engineers to sites or use high-cost test instruments.
In a preferred embodiment, the calculating step comprises the steps :
- counting the number of the received test frames;
- computing the information rate, according to the length of the received test frames and the test duration starting from the time when the first test frame is received;
- computing the frame loss rate according to the number of the received test frames and the number of transmitted test frames ;
- computing the percentage of the service time based on the test duration starting from the time when the first test frame is received and the unavailable duration in which the frame loss rate is above a threshold for a successive period.
The embodiment provides solutions for computing the result of the test.
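The counting-based computations above can be sketched as follows; this is a minimal illustration, and the function names and the bits-per-second convention are assumptions:

```python
def information_rate(rx_frame_lengths, test_duration_s: float) -> float:
    """Average IR in bits per second over the test duration,
    which starts when the first test frame is received."""
    return sum(rx_frame_lengths) * 8 / test_duration_s

def frame_loss_rate(tx_count: int, rx_count: int) -> float:
    """Lost frames over transmitted frames; tx_count is supplied by the NMS."""
    return (tx_count - rx_count) / tx_count

# 1000 received 64-byte frames in one second; 10 of 1000 frames lost:
assert information_rate([64] * 1000, 1.0) == 512_000.0
assert frame_loss_rate(1000, 990) == 0.01
```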
In a preferred embodiment, the calculating step further comprises the steps:
- computing the frame transfer delay based on the Tx timestamp and the Rx timestamp of each test frame;
- computing the frame delay variation based on the frame transfer delays.
The embodiment provides solutions for computing the result of the test.
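A minimal sketch of the delay computations, assuming the two NEs' clocks are synchronized (the function names are illustrative):

```python
def frame_transfer_delays(tx_ts, rx_ts):
    """One-way FTD per frame from the Tx timestamp and the Rx timestamp."""
    return [rx - tx for tx, rx in zip(tx_ts, rx_ts)]

def frame_delay_variation(ftds):
    """(max, min) of |FTD[i+1] - FTD[i]| over consecutive frames."""
    diffs = [abs(b - a) for a, b in zip(ftds, ftds[1:])]
    return max(diffs), min(diffs)

# Three frames sent at t = 0, 1, 2 s, arriving 10, 12 and 11 ms later:
ftds = frame_transfer_delays([0.0, 1.0, 2.0], [0.010, 1.012, 2.011])
max_fdv, min_fdv = frame_delay_variation(ftds)
assert abs(max_fdv - 0.002) < 1e-6 and abs(min_fdv - 0.001) < 1e-6
```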
In a third aspect of the invention, a first network element for executing tests is provided, comprising a test generator which generates test traffic and provides it for the QoS processing before sending it to a second network element for executing tests, the test generator comprising:
- one or more units, respectively for one traffic flow, for producing test frames of the flow;
- a scheduler for scheduling the one or more units to produce test frames of the flow;
- a Tx timestamp updater for adding Tx timestamp into the produced test frames .
Preferably, the unit comprises:
- a shaper maintaining tokens based on the test committed information rate of the flow;
- a test frame generator for producing the test frames of the flow;
Wherein, the scheduler selects one flow from the one or more flows according to scheduling algorithms, and is informed by the corresponding shaper about whether the shaper has enough tokens:
- if it has, the scheduler instructs the test frame generator to produce test frames of the flow;
- otherwise, the scheduler selects the next flow according to the scheduling algorithms.
The preferred embodiment provides the modular structure of a general QoS scheduling mechanism for generating the test frames according to the information rate, so as to remove the expensive timers. In this way, the design can achieve high accuracy and low cost.
In a fourth aspect of the invention, a second network element for executing tests is provided; the second network element receives test traffic sent by a first network element for executing tests, and comprises a test checker for detecting test traffic to which the QoS processing has been carried out;
Wherein, the test checker comprises:
- an Rx timestamp updater for adding an Rx timestamp to the received test frames;
- a classifier for classifying the received test frames into one or more flows of test traffics;
- one or more units, respectively for each flow, for calculating the result of the test based on the received test frames of the flow.
These and other features of the present invention will be described in details in the embodiment part.
Brief description of the drawings
Features, aspects and advantages of the present invention will become obvious by reading the following description of non-limiting embodiments with the aid of the appended drawings.
Fig. 1 shows the modules of the test generator according to a preferred embodiment of the invention;
Fig. 2 shows the modules of the test checker according to a preferred embodiment of the invention;
Fig. 3 shows the schematic view of unavailable duration;
Fig. 4 shows the test frame formats according to a preferred embodiment of the invention.
Wherein, the same or similar reference sign refers to the same or similar component.
Detailed description of embodiments
The invention proposes a method in a first network element for executing tests, comprising the steps of:
- generating test traffic, and providing it for the QoS processing, before sending it to a second network element for executing tests;
Wherein, the generating step comprises:
- scheduling one or more flows of test traffics;
- producing test frames for the scheduled flow;
- adding Tx timestamp into the produced test frames.
Correspondingly, the invention also proposes a method in a second network element for executing tests, comprising the steps of :
- receiving test traffics sent by a first network element for executing tests;
- detecting the test traffic to which the QoS processing has been carried out;
Wherein, the detecting step comprises:
- adding an Rx timestamp to the received test frames;
- classifying the received test frames into one or more flows of test traffics;
- calculating the result of the test based on the received test frames, respectively for each flow.
Preferably, before the test, the first network element and the second network element receive the test configuration from a network management system (NMS). As all NEs are under the management of the NMS, no additional control protocol is needed to notify the local configurations to the remote test functions, since the operator can simply use the NMS to configure both end NEs.
As to the first network element generating the test traffic, the test configuration includes:
- number of services to be tested and corresponding test ID;
- test steps and duration for each step;
- size of the test frame;
- committed information rate;
- pattern of the test frame;
- Ethernet frame header.
As to the second network element receiving the test traffic, the test configuration includes:
- number of services to be tested and corresponding test ID;
- test steps and duration for each step;
- size of the test frame;
- pattern of the test frame.
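The configuration items listed above can be modelled as simple per-flow records; a minimal Python sketch follows (all class and field names are illustrative assumptions, since the patent only names the information items):

```python
from dataclasses import dataclass

@dataclass
class GeneratorTestConfig:
    """Per-flow test configuration pushed by the NMS to the test generator.

    Field names are illustrative; the patent only lists the information items.
    """
    test_id: int            # identifies the service / test flow
    step_durations_s: list  # test steps and duration for each step (seconds)
    frame_size: int         # size of the test frame (bytes)
    cir_bps: int            # committed information rate (bits/s)
    pattern: bytes          # pattern filling the test frame payload
    eth_header: bytes       # Ethernet frame header (DA, SA, VLAN, ...)

@dataclass
class CheckerTestConfig:
    """Subset of the configuration needed by the receiving test checker."""
    test_id: int
    step_durations_s: list
    frame_size: int
    pattern: bytes

cfg = GeneratorTestConfig(
    test_id=1, step_durations_s=[60, 60], frame_size=64,
    cir_bps=10_000_000, pattern=b"\x00", eth_header=b"\xff" * 14)
print(cfg.test_id, cfg.cir_bps)  # → 1 10000000
```

Because the NMS configures both ends, the checker's record is simply a projection of the generator's record with the Tx-only fields (CIR, Ethernet header) dropped.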
Preferably, before the test, the first network element and the second network element synchronize time and frequency with respect to each other by using their own synchronization mechanisms. They can make use of IEEE 1588 or SyncE, etc., to achieve accurate one-way performance test results.
The following elucidation exemplifies a one-way test from the first network element to the second network element. It should be understood that the terms first and second are used only to differentiate the test traffic originator from the test traffic receiver; in a practical implementation, a single network element can be the test traffic originator for one test and the test traffic receiver for another.
Firstly, the first network element generates test traffic and provides it for the QoS processing, before sending it to the second network element. The first network element comprises a test generator for this operation. The test generator generates test flows to simulate the user traffic encapsulated in various transport layers (e.g. Ethernet or IP layer networks) and injects the test flows into the ingress of the UNI-Ns.
Specifically, the test generator comprises:
- one or more units 10, respectively for one traffic flow, for producing test frames of the flow;
- a scheduler 12 for scheduling the one or more units to produce test frames of the flow;
- a Tx timestamp updater 14 for adding a Tx timestamp into the produced test frames.
Preferably, each unit 10 comprises:
- a shaper 100 for maintaining tokens based on the test committed information rate of the flow; and
- a test frame generator 102 for producing the test frames of the flow.
The above modules are shown in figure 1. In figure 1, each set of test frame generator[i] 102 and shaper[i] 100 corresponds to one test flow i.
The challenge in designing Y.1564 test generators is how to keep each test flow generated evenly and strictly at the required information rate when multiple test flows are simultaneously generated and input to the same UNI. The proposed design is a simple way to implement multiple test generators meeting these requirements. The shaper 100 is a token bucket that generates tokens and keeps the generated test flow in conformance with the configured information rate. The scheduler 12 executes a scheduling algorithm (e.g. WRR, WFQ) to select one flow, and inquires of the shaper 100 corresponding to this flow whether it has enough tokens. If the shaper 100 informs the scheduler 12 that it has enough tokens to allow the next generated test frame to pass, the scheduler 12 instructs the test frame generator to generate a test frame and put it onto the line. The test frame generator 102 simply generates test frames of the configured frame size under the control of the scheduler 12, and does not need a complex algorithm to compute a timer for generating each frame.
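The shaper/scheduler interaction described above can be sketched as a single-threaded token-bucket model; this is a simplified illustration under plain round-robin scheduling (class and method names are ours, not the patent's implementation):

```python
class Shaper:
    """Token bucket for one test flow: tokens accrue at the flow's CIR,
    and one test frame may pass whenever a full frame's worth of tokens
    is available."""
    def __init__(self, cir_bps, frame_size_bytes, bucket_bytes):
        self.rate = cir_bps / 8.0          # token fill rate, bytes/s
        self.frame_size = frame_size_bytes
        self.bucket = float(bucket_bytes)  # burst ceiling
        self.tokens = 0.0
        self.last = 0.0

    def has_tokens(self, now):
        # Refill lazily, then report whether a full frame can pass.
        self.tokens = min(self.bucket,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        return self.tokens >= self.frame_size

    def consume(self):
        self.tokens -= self.frame_size


def schedule(shapers, duration_s, tick_s=0.0001):
    """Round-robin over the flows; record a (time, flow index) pair when
    the selected flow's shaper has enough tokens. No per-frame timers."""
    sent, now = [], 0.0
    while now < duration_s:
        for i, sh in enumerate(shapers):
            if sh.has_tokens(now):
                sh.consume()
                sent.append((now, i))      # "generate one test frame"
        now += tick_s
    return sent

# Two flows at 1 Mbit/s with 125-byte frames -> about 1000 frames/s each.
shapers = [Shaper(1_000_000, 125, 250) for _ in range(2)]
frames = schedule(shapers, duration_s=0.1)
per_flow = [sum(1 for _, i in frames if i == f) for f in range(2)]
print(per_flow)  # roughly [100, 100]
```

The pacing comes entirely from the token fill rate: each flow emits frames evenly at its own CIR even though a single loop drives all flows, which is the cost saving the text attributes to removing per-flow timers.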
The Tx timestamp updater 14 adds the Tx timestamp into the test frames at line rate, as close as possible to the instant the test frames are actually sent to the UNI.
After that, the QoS processing is carried out on the test frames. Specifically, the QoS processing comprises policing (CIR/CBS/PIR/PBS) -> congestion control -> queuing -> scheduling -> shaping.
The test frames are then transmitted on the network to the second network element.
The second network element receives the test traffic sent by the first network element, and carries out QoS processing on the received traffic. The QoS processing comprises the following operations:
classification -> congestion control -> queuing -> scheduling -> shaping.
After the QoS processing, the test traffic is detected to obtain the result of the test.
The second network element comprises a test checker for detecting test traffics, and the test checker comprises:
- an Rx timestamp updater 20 for adding a timestamp to the received test frames;
- a classifier 22 for classifying the received test frames into one or more flows of test traffic;
- one or more units 24, one for each flow, for calculating the result of the test based on the received test frames of the flow.
Preferably, each unit 24 comprises:
- a counter 240 for counting the number of the received test frames;
- a computing unit 242 for computing the result of the test based on the number of the received test frames and time information.
The above modules are shown in figure 2. In figure 2, each set of Rx counter[i] 240 and computing unit[i] 242 corresponds to one test flow.
The Rx timestamp updater 20 is placed at the ingress to add the Rx timestamp at line rate before the packet is parsed. The classifier 22 identifies each test flow from all egress UNI traffic based on user-configured test flow definitions (e.g. MAC DA, MAC SA, VLAN ID, IP DSCP, etc.). The Rx counter 240 counts the number of received test frames per test flow. The computing unit 242 calculates the test result per test flow based on the Tx timestamp, the Rx timestamp, the Rx counter, and the user-configured test parameters per test flow (e.g. test flow IR, test duration); the test result computation method is proposed and detailed as follows.
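The classifier's per-flow identification can be sketched as a lookup from configured header fields to a test ID with a per-flow Rx counter; the dictionary-based match below is our simplification, and all header values are illustrative:

```python
from collections import defaultdict

# User-configured test flow definitions: (MAC DA, MAC SA, VLAN ID) -> test ID.
# All values here are made up for illustration.
flow_defs = {
    ("00:11:22:33:44:55", "66:77:88:99:aa:bb", 100): 1,
    ("00:11:22:33:44:55", "66:77:88:99:aa:bb", 200): 2,
}

rx_counter = defaultdict(int)  # test ID -> number of received test frames

def on_frame(mac_da, mac_sa, vlan_id):
    """Classify one received frame and update its flow's Rx counter.

    Returns the test ID, or None for traffic that matches no test flow.
    """
    test_id = flow_defs.get((mac_da, mac_sa, vlan_id))
    if test_id is not None:   # ignore non-test traffic
        rx_counter[test_id] += 1
    return test_id

on_frame("00:11:22:33:44:55", "66:77:88:99:aa:bb", 100)
on_frame("00:11:22:33:44:55", "66:77:88:99:aa:bb", 100)
on_frame("00:11:22:33:44:55", "66:77:88:99:aa:bb", 200)
print(dict(rx_counter))  # → {1: 2, 2: 1}
```

A hardware classifier would match these fields with TCAM or exact-match tables; the dictionary stands in for that lookup only.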
To calculate one-way Y.1564 performance parameters such as IR, FLR, FTD and FDV, a simple computation method is proposed here. TimeOut is the time to wait for the last valid test frame before stopping the test for a specific test ID (test flow).
• Information Rate (IR)
The information rate is calculated according to the length of the received test frames and the duration.
It is the average information rate of the received test traffic over the test duration, starting from the instant the first test frame is received:

IR = ( Σ length of received frame i ) / TestDuration,
(the ith frame is received during (T_first_received_frame, T_first_received_frame + TestDuration))
• Frame Loss Rate (FLR)
The frame loss rate is calculated according to the number of the received test frames and the number of transmitted test frames, wherein the number of transmitted test frames can be provided by the NMS in the test configuration.
The total FLR is the ratio of total lost Ethernet frame outcomes to total transmitted Ethernet frames:

FLR = ( N_transmitted_frames - N_received_frames ) / N_transmitted_frames,
(received frames are counted during (T_first_received_frame, T_first_received_frame + TestDuration + TimeOut))
• Frame Transfer Delay (FTD)
Average frame transfer delay:

A_FTD = ( Σ FTD_i ) / N_received_frames,
(the ith frame is received during (T_first_received_frame, T_first_received_frame + TestDuration + TimeOut)),

where FTD_i = T_ith_frame_received - T_ith_frame_transmitted.

Max frame transfer delay:

X_FTD = max{ FTD_i }

Min frame transfer delay:

M_FTD = min{ FTD_i }
• Frame Delay Variation (FDV)
Average frame delay variation:

A_FDV = ( Σ (FTD_(i+1) - FTD_i) ) / ( N_received_frames - 1 ),
(the ith frame is received during (T_first_received_frame, T_first_received_frame + TestDuration + TimeOut))

Max frame delay variation:

X_FDV = max{ FTD_(i+1) - FTD_i }

Min frame delay variation:

M_FDV = min{ FTD_(i+1) - FTD_i }
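Assuming synchronized Tx and Rx timestamps, the IR, FLR, FTD and FDV formulas above reduce to simple arithmetic over per-frame records. A sketch follows (the function and key names are ours, and frames outside the measurement window are assumed to be filtered out already):

```python
def y1564_metrics(tx_ts, rx_ts, frame_len, n_transmitted, test_duration):
    """Compute IR, FLR, average/max/min FTD and average FDV from lists of
    per-frame Tx/Rx timestamps (seconds) of the frames actually received.

    tx_ts[i] and rx_ts[i] belong to the same frame (matched by sequence
    number); frame_len[i] is its length in bytes.
    """
    n_rx = len(rx_ts)
    ir = sum(frame_len) * 8 / test_duration        # bits per second
    flr = (n_transmitted - n_rx) / n_transmitted
    ftd = [r - t for t, r in zip(tx_ts, rx_ts)]    # per-frame transfer delay
    a_ftd = sum(ftd) / n_rx
    deltas = [ftd[i + 1] - ftd[i] for i in range(n_rx - 1)]
    a_fdv = sum(deltas) / (n_rx - 1) if n_rx > 1 else 0.0
    return {"IR": ir, "FLR": flr, "A_FTD": a_ftd,
            "X_FTD": max(ftd), "M_FTD": min(ftd), "A_FDV": a_fdv}

m = y1564_metrics(
    tx_ts=[0.000, 0.001, 0.002, 0.003],
    rx_ts=[0.010, 0.012, 0.011, 0.014],
    frame_len=[64, 64, 64, 64],
    n_transmitted=5,          # one frame was lost in transit
    test_duration=1.0)
print(m["FLR"], m["IR"])  # → 0.2 2048.0
```

Note that per-frame matching of tx_ts to rx_ts is exactly what the sequence number carried in each test frame enables, and n_transmitted is known to the checker from the NMS configuration.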
• AVAIL
AVAIL is the percentage of the service time, and it is calculated according to the test duration starting from the time when the first test frame is received and the unavailable duration in which the frame loss rate is above a threshold for a successive period. As shown in figure 3, a SES_ETH (severely errored second) outcome occurs for a block of frames observed during a one-second interval when the corresponding FLR (i.e. the ratio of lost frames to total frames in the block) exceeds a threshold s1. A provisioned value s1 of 0.5 is used, and different values may also be chosen depending on the class of service (CoS). Once a certain number (e.g. 10) of successive SES_ETHs is monitored, the link is considered unavailable, and the unavailable duration starts from the first SES_ETH of the successive SES_ETHs. The unavailable duration ends before the first non-SES_ETH that is followed by a certain number (e.g. 10) of successive non-SES_ETHs.
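The entry and exit rules for unavailable time described above can be sketched as a scan over per-second SES flags; the window of 10 follows the example value above, and the run-based state machine is our simplification:

```python
def unavailable_seconds(ses, window=10):
    """Count the seconds falling inside unavailable periods.

    ses[i] is True if second i is a severely errored second (FLR > s1).
    Unavailability begins at the first SES of `window` consecutive SES,
    and ends before the first non-SES of `window` consecutive non-SES.
    """
    # Compress into runs of equal flags: list of [flag, length].
    runs = []
    for s in ses:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])

    total, unavailable = 0, False
    for flag, length in runs:
        if not unavailable:
            if flag and length >= window:
                unavailable = True   # enter at the first SES of the run
                total += length
        else:
            if (not flag) and length >= window:
                unavailable = False  # exit before the first good second
            else:
                total += length      # SES runs and short good gaps count

    return total

# 12 bad seconds enter unavailability; the short 3s/4s wobble stays inside
# it; the 15 good seconds end it; the trailing 2 bad seconds do not re-enter.
ses = [False]*5 + [True]*12 + [False]*3 + [True]*4 + [False]*15 + [True]*2
print(unavailable_seconds(ses))  # → 19
```

The returned count is the "unavailable seconds" term subtracted from the test duration in the AVAIL computation.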
AVAIL is the percentage of total scheduled Ethernet service time that is categorized as available:

AVAIL = ( TestDuration - Σ unavailable seconds ) / TestDuration,
(TestDuration starting from T_first_received_frame)
Since the size of an Ethernet frame is no less than 64 bytes, and the test frame should further be padded to simulate Ethernet frames of any size, the test frame (without padding) preferably does not exceed 64 bytes. It is therefore better to simplify the test frame format and shorten the test frame length. Since some test information, such as the test duration, the test pattern and the test Tx information rate, is specific to each test flow and can be supplied by the NMS in the test configuration, only the test ID identifying the test flow is included in the test frame to represent all of this information. Besides, from the above computation formulas we can see that the test frame size, the Tx timestamp and the sequence number (for detecting out-of-order delivery) are specific to each test frame and should be carried in each frame. Optionally, an End flag can be added in each test frame to notify the test checker that the test flow has ended. An example of the test frame format is shown in figure 4, wherein DA stands for destination address, SA stands for source address, and FCS stands for frame check sequence. Therefore, the total test frame size is less than 64 bytes, with or without an IP header.
The following table illustrates the comparison between the proposed solution and three existing solutions (one-way test on a test instrument - the best existing solution; round-trip/loopback with EVC loopback on a one-end NE; 802.1ag LB/Y.1731 TST). Based on this comparison, we can see that the method proposed in the invention is a simple and low-cost solution which can meet the Y.1564 requirements and has an accuracy approximating that of a test instrument.
Table 1
(Table 1 is reproduced as an image in the original publication.)
Those of ordinary skill in the art will understand and be able to make modifications to the disclosed embodiments through study of the description, drawings and appended claims. All such modifications which do not depart from the spirit of the invention are intended to be included within the scope of the appended claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim or in the description. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. In the practice of the present invention, several technical features in a claim can be embodied by one component. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim.

Claims

What is claimed is:
1. A method in a first network element for executing tests, comprising the steps of:
- generating test traffics, and providing them for the QoS processing, before sending them to a second network element for executing tests;
Wherein, the generating step comprises:
- scheduling one or more flows of test traffics;
- producing test frames for the scheduled flow;
- adding Tx timestamp into the produced test frames.
2. A method according to claim 1, further comprising the step before the generating step:
- receiving test configuration from a network management system;
Said generating step generates the test frames according to the test configuration;
the QoS processing comprises the following operations:
policing, congestion control, queuing, scheduling and shaping.
3. A method according to claim 2, wherein the test configuration comprises any of the following:
- number of services to be tested and corresponding test ID;
- test steps and duration for each step;
- size of the test frame;
- committed information rate;
- pattern of the test frame;
- Ethernet frame header;
the generating step further adds any of the following information into the produced test frames:
- Ethernet frame header;
- test ID identifying the flow of the test traffic;
- sequence number of the test traffic in the flow;
the length of the test frame is no more than 64 bytes.
4. A method according to claim 1, further comprising the step before the generating step:
- synchronizing the time and frequency with respect to the second network element by using their own synchronization mechanism.
5. A method according to claim 1, wherein the generating step generates the test frames according to the committed information rate of the flow, wherein,
the one or more flows are provided with tokens based on the test committed information rate of the flow,
the scheduling step selects one flow from the one or more flows according to scheduling algorithms, and determines whether the selected flow has enough tokens:
- if it has, the test frames of the selected flow are produced;
- otherwise, the scheduling step selects the next flow according to the scheduling algorithms.
6. A method in a second network element for executing tests, comprising the steps of:
- receiving test traffics sent by a first network element for executing tests;
- detecting the test traffic to which the QoS processing has been carried out;
Wherein, the detecting step comprises:
- adding Rx timestamp to the received test frames;
- classifying the received test frames into one or more flows of test traffics;
- calculating the result of the test based on the received test frames, respectively for each flow.
7. A method according to claim 6, further comprising the step before the detecting step:
- receiving test configuration from a network management system;
the QoS processing comprises the following operations:
classification, congestion control, queuing, scheduling, and shaping.
8. A method according to claim 7, wherein the test configuration comprises any of the following:
- number of services to be tested and corresponding test ID;
- test steps and duration for each step;
- size of the test frame;
- pattern of the test frame;
- service acceptance comprising:
- frame loss rate;
- frame transfer delay;
- frame delay variation;
- percentage of service time.
9. A method according to claim 6, further comprising the step before the detecting step:
- synchronizing the time and frequency with respect to the first network element by using their own synchronization mechanism.
10. A method according to claim 6, wherein, the calculating step comprises the steps:
- counting the number of the received test frames;
- computing the information rate, according to the length of the received test frames and the duration;
- computing the frame loss rate according to the number of the received test frames and the number of transmitted test frames ;
- computing the percentage of the service time based on the test duration starting from the time when the first test frame is received and the unavailable duration in which the frame loss rate is above a threshold for a successive period.
11. A method according to claim 10, wherein, the calculating step further comprises the steps:
- computing the frame transfer delay based on the Tx timestamp and the Rx timestamp of each test frame;
- computing the frame delay variation based on the frame transfer delays.
12. A first network element for executing test, comprising a test generator which generates test traffic and provides it for the QoS processing before sending it to a second network element for executing tests, the test generator comprising:
- one or more units (10), respectively for one traffic flow, for producing test frames of the flow;
- a scheduler (12) for scheduling the one or more units to produce test frames of the flow;
- a Tx timestamp updater (14) for adding Tx timestamp into the produced test frames.
13. A first network element according to claim 12, wherein the unit (10) comprises:
- a shaper (100) for maintaining tokens based on the test committed information rate of the flow;
- a test frame generator (102) for producing the test frames of the flow;
Wherein, the scheduler (12) selects one flow from the one or more flows according to scheduling algorithms, and is informed by the corresponding shaper (100) about whether the shaper (100) has enough tokens:
- if it has, the scheduler (12) instructs the test frame generator (102) to produce test frames of the flow;
- otherwise, the scheduler (12) selects the next flow according to the scheduling algorithms.
14. A second network element for executing test, the second network element receives test traffics sent by a first network element for executing tests, comprising a test checker for detecting test traffic to which the QoS processing has been carried out;
Wherein, the test checker comprises:
- Rx timestamp updater (20) for adding timestamp to the received test frames;
- a classifier (22) for classifying the received test frames into one or more flows of test traffics;
- one or more units (24), respectively for each flow, for calculating the result of the test based on the received test frames of the flow.
15. A second network element according to claim 14, wherein the unit (24) comprises:
- a counter (240) for counting the number of the received test frames;
- a computing unit (242) for computing the result of the test based on the number of the received test frames and time information.
PCT/IB2013/001237 2012-07-20 2013-05-27 Network service testing WO2014013303A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210253533.1A CN103580936A (en) 2012-07-20 2012-07-20 Method for executing tests in network elements and corresponding network elements
CN201210253533.1 2012-07-20

Publications (1)

Publication Number Publication Date
WO2014013303A1 true WO2014013303A1 (en) 2014-01-23

Family

ID=48748298

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/001237 WO2014013303A1 (en) 2012-07-20 2013-05-27 Network service testing

Country Status (2)

Country Link
CN (1) CN103580936A (en)
WO (1) WO2014013303A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020185752A1 (en) * 2019-03-12 2020-09-17 Arch Systems Inc. System and method for network communication monitoring

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105357036A (en) * 2015-10-21 2016-02-24 盛科网络(苏州)有限公司 Simulative object-oriented QoS (Quality of Service) verification model and method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120170465A1 (en) * 2011-01-04 2012-07-05 Alcatel Lucent Usa Inc. Validating ethernet virtual connection service

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0316891D0 (en) * 2003-07-18 2003-08-20 British Telecomm Test device for data services
US20070110034A1 (en) * 2005-11-14 2007-05-17 Broadcom Corporation, A California Corporation Pathways analysis and control in packet and circuit switched communication networks
CN100502325C (en) * 2005-12-13 2009-06-17 华为技术有限公司 Comprehensive detector for communication access device
CN100384162C (en) * 2006-01-16 2008-04-23 中国移动通信集团公司 Automatization testing device and method for service system
CN102394795B (en) * 2011-11-04 2013-10-30 盛科网络(苏州)有限公司 Throughput performance test processing engine embedded into Ethernet exchange chip and implementation method therefor

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120170465A1 (en) * 2011-01-04 2012-07-05 Alcatel Lucent Usa Inc. Validating ethernet virtual connection service

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Ethernet service activation test methodology; Y.1564 (03/11)", ITU-T STANDARD, INTERNATIONAL TELECOMMUNICATION UNION, GENEVA ; CH, no. Y.1564 (03/11), 1 March 2011 (2011-03-01), pages 1 - 36, XP017467618 *
BRUNO GIGUÈRE ET AL: "Latest Draft for New Recommendation Y.156sam;xxx (GEN/12)", ITU-T DRAFT ; STUDY PERIOD 2009-2012, INTERNATIONAL TELECOMMUNICATION UNION, GENEVA ; CH, vol. 17/12, 20 September 2010 (2010-09-20), pages 1 - 30, XP017439493 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020185752A1 (en) * 2019-03-12 2020-09-17 Arch Systems Inc. System and method for network communication monitoring
US10892971B2 (en) * 2019-03-12 2021-01-12 Arch Systems Inc. System and method for network communication monitoring

Also Published As

Publication number Publication date
CN103580936A (en) 2014-02-12

Similar Documents

Publication Publication Date Title
US7733794B2 (en) Performance monitoring of frame transmission in data network OAM protocols
EP2884697B1 (en) Measuring method, device and system for network packet loss
CN100382517C (en) Network QoS test method and system
WO2014047941A1 (en) Network delay measuring method, device, and system
EP3398296B1 (en) Performance measurement in a packet-switched communication network
WO2006133635A1 (en) A method for measurting the performance parameters of the multi-protocol label switching network
US9584396B2 (en) Label-based measurement method, apparatus, and system
Prokkola et al. Measuring WCDMA and HSDPA delay characteristics with QoSMeT
KR20070047928A (en) Method for measuring stage-to-stage delay in nonsynchronization packet transfer network, nonsynchronization packet sender and receiver
US11121938B2 (en) Performance measurement in a packet-switched communication network
Joung et al. Zero jitter for deterministic networks without time-synchronization
Garner et al. IEEE 802.1 AVB and its application in carrier-grade Ethernet [standards topics]
WO2014013303A1 (en) Network service testing
Baldi et al. Time-driven priority router implementation: Analysis and experiments
Al-Hares et al. Scheduling in an Ethernet fronthaul network
CN107835109B (en) Method and system for testing packet transport network defined by software
RU2687040C1 (en) Method and apparatus for monitoring a backbone network
Dominguez-Jaimes et al. Identification of traffic flows in Ethernet-based industrial fieldbuses
Arokkiam A quality of service framework for upstream traffic in LTE across an XG-PON backhaul
Nasrallah Time Sensitive Networking in Multimedia and Industrial Control Applications
Tang et al. A performance monitoring architecture for IP videoconferencing
Al-Hares Ethernet Fronthaul and Time-Sensitive Networking for 5G and Beyond Mobile Networks
Jia et al. Deploying Circuit Emulation Services (CES) Over EPON Using Preemptive Priority Medium Access Controller
Wang et al. Design and Implement Differentiated Service Routers in OPNET
Peculea et al. Benchmarking System for QoS Parameters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13734836

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13734836

Country of ref document: EP

Kind code of ref document: A1