Y.1564

ITU-T Y.1564 is an Ethernet service activation test methodology and the ITU-T standard for turning up, installing and troubleshooting Ethernet-based services. It is the only standard test methodology that allows complete validation of Ethernet service-level agreements (SLAs) in a single test.

Purposes

ITU-T Y.1564 is designed to serve three purposes: to act as a network service level agreement (SLA) validation tool, ensuring that a service meets its guaranteed performance settings within a controlled test time; to ensure that all services carried by the network meet their SLA objectives at their maximum committed rate; and to perform medium- and long-term service testing, confirming that network elements can properly carry all services while under stress during a soak period.[citation needed]

ITU-T Y.1564 defines an out-of-service test methodology to assess the proper configuration and performance of an Ethernet service prior to customer notification and delivery. The test methodology applies to point-to-point and point-to-multipoint connectivity in the Ethernet layer and to the network portions that provide, or contribute to, the provisioning of such services. This recommendation does not define Ethernet network architectures or services, but rather defines a methodology to test Ethernet-based services at the service activation stage.[citation needed]

Existing test methodologies: RFC 2544

The Internet Engineering Task Force's RFC 2544 is a benchmarking methodology for network interconnect devices. This Request for Comments (RFC) was published in 1999 as a methodology for benchmarking network devices such as hubs, switches and routers, and for producing accurate, comparable performance values.[citation needed]

RFC 2544 provides engineers and network technicians with a common language and results format. RFC 2544 describes six subtests (a sketch of the throughput search appears after the list):[citation needed]

  • Throughput: Measures the maximum rate at which none of the offered frames are dropped by the device/system under test (DUT/SUT). This measurement translates into the available bandwidth of the Ethernet virtual connection.
  • Back-to-back or burstability: Measures the longest burst of frames at maximum throughput or minimum legal separation between frames that the device or network under test will handle without any loss of frames. This measurement is a good indication of the buffering capacity of a DUT.
  • Frame loss: Defines the percentage of frames that should have been forwarded by a network device under steady state (constant) loads that were not forwarded due to lack of resources. This measurement can be used for reporting the performance of a network device in an overloaded state, as it can be a useful indication of how a device would perform under pathological network conditions such as broadcast storms.
  • Latency: Measures the round-trip time taken by a test frame to travel through a network device or across the network and back to the test port. Latency is the time interval that begins when the last bit of the input frame reaches the input port and ends when the first bit of the output frame is seen on the output port. It is the time taken by a bit to go through the network and back. Latency variability can be a problem. With protocols like voice over Internet protocol (VoIP), a variable or long latency can cause degradation in voice quality.
  • System reset: Measures the speed at which a DUT recovers from a hardware or software reset. This subtest is performed by measuring the interruption of a continuous stream of frames during the reset process.
  • System recovery: Measures the speed at which a DUT recovers from an overload or oversubscription condition. This subtest is performed by temporarily oversubscribing the device under test, then reducing the throughput to a normal or low load while measuring frame delay in both conditions. The difference between the delay under overload and the delay under low load represents the recovery time.[citation needed]
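
From a tooling perspective, the throughput subtest is typically implemented as a binary search over the offered rate. Below is a minimal sketch in Python, assuming a hypothetical send_frames(rate, duration) helper that offers traffic at the given frame rate for the given time and returns the number of frames lost; RFC 2544 defines only the procedure, not any API.

    def throughput_search(max_rate, duration, resolution, send_frames):
        """Binary-search the highest frame rate with zero frame loss.

        send_frames(rate, duration) is a hypothetical helper: it offers
        traffic at `rate` frames/s for `duration` seconds and returns the
        number of frames lost by the DUT.
        """
        low, high, best = 0.0, float(max_rate), 0.0
        while high - low > resolution:
            rate = (low + high) / 2
            if send_frames(rate, duration) == 0:
                best, low = rate, rate      # no loss: search higher rates
            else:
                high = rate                 # loss seen: search lower rates
        return best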

From a laboratory and benchmarking perspective, the RFC 2544 methodology is an ideal tool for automated measurement and reporting.[citation needed] From a service turn-up and troubleshooting perspective, RFC 2544, although acceptable and valid, does have some drawbacks:[citation needed]

  • Service providers are shifting from only providing Ethernet pipes to enabling services. Networks must support multiple services from multiple customers, and each service has its own performance requirements that must be met even under full load conditions and with all services being processed simultaneously. RFC 2544 was designed as a performance tool with a focus on a single stream to measure maximum performance of a DUT or network under test and was never intended for multiservice testing.[citation needed]
  • With RFC 2544's focus on identifying the maximum performance of a device or network under test, the overall test time is variable and heavily depends on the quality of the link and subtest settings. RFC 2544 test cycles can easily require a few hours of testing. This is not an issue for lab testing or benchmarking, but becomes a serious issue for network operators with short service maintenance windows.[citation needed]
  • Packet delay variation is a key performance indicator (KPI) for real-time services such as VoIP and Internet Protocol television (IPTV) but is not measured by the RFC 2544 methodology. Network operators that perform service testing with RFC 2544 typically must run a separate packet jitter test, as this KPI is neither defined nor measured by the RFC.[citation needed]
  • Testing is performed sequentially, one KPI after another. In today's multiservice environments, traffic experiences all KPIs at the same time: throughput might be good, yet accompanied by very high latency due to buffering. Designed as a performance assessment tool, RFC 2544 measures each KPI individually through its subtests and therefore cannot immediately associate a very high latency with a good throughput, which should be cause for concern.[citation needed]

Service definitions

ITU-T Y.1564 defines test streams (individually called a "Test Flow") with service attributes linked to the Metro Ethernet Forum (MEF) 10.2 definitions.[citation needed] Test Flows are traffic streams with specific attributes identified by different classifiers such as 802.1q VLAN, 802.1ad, DSCP and class of service (CoS) profiles. These services are defined at the user–network interface (UNI) level with different frame and bandwidth profiles, such as the service's maximum transmission unit (MTU) or frame size, committed information rate (CIR), and excess information rate (EIR). A single Test Flow can also consist of up to five different frame sizes, called an EMIX (Ethernet mix). This flexibility allows the engineer to configure a Test Flow very close to real-world traffic.[citation needed]
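
As an illustration only, a Test Flow's service attributes could be modelled as a simple data structure; the Python field names below are chosen for readability and are not defined by the recommendation or by MEF 10.2.

    from dataclasses import dataclass, field

    @dataclass
    class TestFlow:
        """Illustrative model of a Y.1564 Test Flow's service attributes."""
        name: str
        vlan_id: int          # 802.1q classifier
        cos: int              # class of service (802.1p priority bits)
        dscp: int             # IP DSCP marking
        cir_mbps: float       # committed information rate
        eir_mbps: float       # excess information rate
        mtu: int = 1518       # maximum frame size for the service
        # EMIX: up to five frame sizes emulating real-world traffic
        emix: list = field(default_factory=lambda: [64, 512, 1518])

    # Example: a voice service with a hard 10 Mbit/s commitment and no excess
    voice = TestFlow("voice", vlan_id=100, cos=5, dscp=46,
                     cir_mbps=10.0, eir_mbps=0.0, emix=[128, 256])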

Test rates

ITU-T Y.1564 defines three key test rates based on the MEF service attributes for Ethernet virtual connection (EVC) and user–network interface (UNI) bandwidth profiles; a sketch deriving these rates follows the list.[citation needed]

  • CIR defines the maximum transmission rate for a service where the service is guaranteed certain performance objectives. These objectives are typically defined and enforced via SLAs.
  • EIR defines the maximum transmission rate above the committed information rate at which traffic is considered excess. This excess traffic is forwarded as capacity allows and is not subject to any guaranteed performance objectives (best-effort forwarding).
  • Overshoot rate defines a testing transmission rate above CIR or EIR and is used to ensure that the DUT or network under test does not forward more traffic than specified by the CIR or EIR of the service.
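
A minimal sketch of how a tester might derive the three rates for one service; the 25% overshoot margin is an arbitrary illustrative choice, as the recommendation only requires a rate above CIR + EIR.

    def test_rates(cir_mbps, eir_mbps, overshoot_factor=1.25):
        """Return the three Y.1564 test rates (Mbit/s) for one service."""
        committed = cir_mbps                   # guaranteed, SLA-enforced rate
        excess = cir_mbps + eir_mbps           # best-effort ceiling
        overshoot = excess * overshoot_factor  # must NOT be fully forwarded
        return committed, excess, overshoot

    print(test_rates(cir_mbps=50.0, eir_mbps=20.0))  # (50.0, 70.0, 87.5)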

Service configuration test

Forwarding devices such as switches, routers, bridges and network interface units are the basis of any network, as they interconnect segments. If a service is not correctly configured on any one of these devices within the end-to-end path, network performance can be greatly affected, leading to potential service outages and network-wide issues such as congestion and link failures.[citation needed] The service configuration test is designed to measure the ability of the DUT or network under test to properly forward traffic in different states (a sketch of the per-phase logic follows the list):

  • In the CIR phase, where performance metrics for the service are measured and compared to the SLA performance objectives.
  • In the EIR phase, where performance is not guaranteed and the service's transfer rate is measured to verify that the CIR is the minimum bandwidth delivered.
  • In the discard phase, where the service is generated at the overshoot rate and the expected forwarded rate is not greater than the committed information rate or excess rate (when configured).
  • In the CBS (Committed Burst Size) phase, performance metrics are measured while changing traffic from the CIR to the line rate.
  • In the EBS (Excess Burst Size) phase, performance metrics are measured while changing traffic from the EIR to the line rate.
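
A minimal sketch of the per-phase pass/fail logic for one service, assuming a hypothetical measure(rate) helper that offers traffic at the given rate and returns the rate actually forwarded by the DUT; the step loads and the 25% overshoot margin are illustrative.

    def configuration_test(cir, eir, measure, steps=(0.25, 0.50, 0.75, 1.0)):
        """Illustrative Y.1564 service configuration test (rates in Mbit/s)."""
        # CIR phase: step the load up to CIR; everything must be forwarded.
        for step in steps:
            rate = cir * step
            if measure(rate) < rate:
                return False, f"loss at {rate} Mbit/s (CIR phase)"

        # EIR phase: offer CIR + EIR; at least the CIR must get through.
        if measure(cir + eir) < cir:
            return False, "forwarded rate fell below CIR (EIR phase)"

        # Discard phase: overshoot; no more than CIR + EIR may be forwarded.
        if measure((cir + eir) * 1.25) > cir + eir:
            return False, "more than CIR + EIR forwarded (discard phase)"

        return True, "service configuration OK"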

Service performance test

As network devices come under load, they must prioritize one traffic flow over another to meet the KPIs set for each traffic class. With only one traffic class, no prioritization is performed by the network devices since there is only one set of KPIs. As the number of traffic flows increases, prioritization becomes necessary and performance failures may occur.[citation needed] The service performance test measures the ability of the DUT or network under test to forward multiple services while maintaining SLA conformance for each service. Services are generated at the CIR, where performance is guaranteed, and a pass/fail assessment is performed on the KPI values for each service according to its SLA.[citation needed]

Service performance assessment must also be maintained over a medium- to long-term period, as performance degradation is likely to occur while the network is under stress for longer periods of time. The service performance test is designed to soak the network under full committed load for all services and to measure performance over medium and long test times. The recommended time frame for this section of the test follows ITU-T M.2110, which specifies intervals of 15 minutes, 2 hours or 24 hours, allowing network availability to be determined.[citation needed]
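
A minimal sketch of the soak logic, assuming a hypothetical measure_kpis(services, duration) helper that generates all services at their CIR simultaneously and returns measured KPIs per service; the SLA field names are illustrative.

    M2110_INTERVALS = {"15min": 15 * 60, "2h": 2 * 3600, "24h": 24 * 3600}

    def performance_test(services, measure_kpis, interval="15min"):
        """Illustrative Y.1564 service performance (soak) test.

        `services` maps a service name to its SLA thresholds; all services
        are generated at CIR at the same time for the whole soak period.
        """
        results = measure_kpis(services, M2110_INTERVALS[interval])
        return {name: (results[name]["flr"] <= sla["max_flr"]
                       and results[name]["ftd_ms"] <= sla["max_ftd_ms"]
                       and results[name]["fdv_ms"] <= sla["max_fdv_ms"])
                for name, sla in services.items()}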

Metrics

Y.1564 focuses on the following KPIs for service quality (a sketch computing the core KPIs appears after the list):[citation needed]

  • Bandwidth or information rate (IR): This is a bit-rate measure of available or consumed data communication resources, expressed in bits per second or multiples thereof (kilobit/s, megabit/s, etc.).
  • Frame transfer delay (FTD): Also known as latency, this is a measurement of the time delay between the transmission and the reception of a frame. Typically this is a round-trip measurement, meaning that the calculation measures both the near-end to far-end and far-end to near-end direction simultaneously.
  • Frame delay variation (FDV): Also known as packet jitter, this is a measurement of the variations in the time delay between packet deliveries. As packets travel through a network to their destination, they are often queued and sent in bursts to the next hop. Prioritization may also occur at random moments, resulting in packets being sent at random rates. Packets are therefore received at irregular intervals. The direct consequence of this jitter is stress on the receiving buffers of the end nodes, where buffers can be overused or underused when there are large swings of jitter.
  • Frame loss ratio (FLR): Typically expressed as a ratio, this is a measurement of the number of packets lost over the total number of packets sent. Frame loss can be due to a number of issues such as network congestion or errors during transmission.
  • Frame loss ratio with reference to the SAC: Typically expressed as a pass/fail indication. The SAC (service acceptance criteria) is the part of the network operator's SLA that references the FLR requirement for the network path under test.
  • Availability (AVAIL): Typically expressed as a percentage of uptime for the link under test; for example, whether the network meets "five nines" (99.999%) uptime.
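
A minimal sketch of how the core KPIs could be computed from per-frame timestamps; the data layout is illustrative, and FDV is taken here as the spread between the largest and smallest delay, which is one common convention.

    def compute_kpis(sent, received):
        """Compute IR, FTD, FDV and FLR for one test flow.

        `sent` maps a frame's sequence number to (tx_time_s, size_bytes);
        `received` maps the sequence numbers that arrived to rx_time_s.
        """
        delays = [received[seq] - tx
                  for seq, (tx, _) in sent.items() if seq in received]
        if not delays:
            raise ValueError("no frames received")
        rx_bits = sum(size * 8
                      for seq, (_, size) in sent.items() if seq in received)
        span = max(received.values()) - min(tx for tx, _ in sent.values())

        ir_bps = rx_bits / span                  # information rate
        ftd = sum(delays) / len(delays)          # mean frame transfer delay
        fdv = max(delays) - min(delays)          # frame delay variation
        flr = 1 - len(delays) / len(sent)        # frame loss ratio
        return ir_bps, ftd, fdv, flr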
