GFP-F & GFP-T

GFP-F (Generic Framing Procedure – Framed)
Framed GFP encapsulates bursty client traffic such as Ethernet and RPR. Each received client frame is mapped in its entirety into one GFP frame, so GFP-F works relative to client frame boundaries (e.g. one Ethernet frame per GFP-F frame).

GFP-T (Generic Framing Procedure – Transparent)
GFP-T works at the byte level for low-latency applications such as Fibre Channel transport in SAN networking. GFP-T creates a superblock structure that combines multiple 64B/65B codes with a CRC-16, for the purposes of providing payload octet alignment and error control over the bits in the superblock. The block-coded client characters are decoded, mapped into a fixed-length GFP frame, and transmitted without waiting for the reception of an entire client data frame.
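The superblock idea above can be sketched in code. This is a simplified illustration only: it assumes data-only 64B/65B blocks (one flag bit per block, all zero here) and uses the CCITT CRC-16 polynomial for the check; the actual G.7041 bit packing, control-character handling and CRC are more involved.

```python
# Simplified sketch of GFP-T superblock assembly: 8 blocks x 8 client
# bytes, a flag byte (one leading-flag bit per block), and a CRC-16.

def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bit-serial CRC-16 (CCITT polynomial, illustrative choice)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_superblock(client_bytes: bytes) -> bytes:
    """Group 64 client bytes (8 blocks x 8 bytes) and append a CRC-16."""
    assert len(client_bytes) == 64, "a superblock carries 8 x 8 data bytes"
    flags = 0x00  # 0 = all-data block; real GFP-T packs these bits differently
    body = client_bytes + bytes([flags])
    crc = crc16_ccitt(body)
    return body + crc.to_bytes(2, "big")

sb = build_superblock(bytes(range(64)))
print(len(sb))  # 64 data + 1 flag byte + 2 CRC bytes = 67
```

The CRC lets the receiver detect corruption anywhere in the superblock without per-client-frame checksums, which is what makes the low-latency, frame-agnostic transport possible.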
VCAT (Virtual Concatenation)

VCAT is specified in ITU-T G.707 and enables end-to-end connections that precisely match the client requirements without wasting bandwidth. High-order VCAT provides a 52 Mbit/s (STS-1) or 155 Mbit/s (AU-4) resolution.
Low-order VCAT provides a 1.5 Mbit/s (VT-1.5) or 2 Mbit/s (VC-12) resolution. VCG members need not be in adjacent channels and can take different paths through the network, so ingress and egress elements must be able to tolerate up to 256 ms of differential delay between VCG members.

SDH

Introduction:

SDH/SONET development was originally driven by the need to transport multiple E1 and E3 signals along with other groups of multiplexed 64 kbit/s PCM voice traffic. The ability to transport ATM traffic was another early application.

By around the year 2000, telecommunications had shifted from traditional voice transport to data transport, although digitized voice still made a large contribution. Rather than a revolution replacing the existing transport standards, an evolution was necessary to enable the additional data-related transport; the new SDH/SONET concepts described here are that evolution.

Today, SDH/SONET is the deployed technology in the core network, representing a huge investment in capacity. Ethernet is the dominant technology of choice in LANs and is well known in enterprises worldwide. Data traffic is still growing, but at a slower rate than expected, so network topologies based on an IP/Ethernet-only approach have shifted to the longer-term future. Today's task, therefore, is bringing SDH/SONET and Ethernet together.

The customer expects from the operator: quality of service and bandwidth at lower cost, native data interfaces, and the ability to use and improve what he already knows. The operator, on the other hand, wants to reduce operational costs, realize revenue-earning services, use the bandwidth of the core network, keep investment low with immediate ROI, and close the bottleneck at the edge. The way to solve both the customers' and the operator's problems is to make SDH/SONET flexible and data-aware at the edge while still using the existing core.

Ethernet vs. SONET/SDH

Ethernet                         SDH/SONET
Mass market                      Carrier-class market
Asynchronous                     Synchronous
Dynamic bandwidth                Fixed bandwidth
Connectionless                   Connection-oriented
Best-effort service              High quality of service

Now, the question is how to solve all these challenges. This is the starting point of the new SDH/SONET.


Concatenation:

In order to support large ATM bandwidths, the technique of concatenation was developed, whereby smaller SDH/SONET multiplexing containers (e.g. VC-12/STS-1) are combined to build up larger containers (e.g. VC-4/STS-3c) that support large data-oriented pipes. SDH/SONET is therefore able to transport both voice and data simultaneously.

Concatenation is the process by which multiple virtual containers are associated with one another, so that their combined capacity can be used as a single container across which bit-sequence integrity is maintained.

One problem with traditional concatenation, however, is inflexibility. Depending on the mix of data and voice traffic that must be carried, a large amount of bandwidth can be left unused, due to the fixed sizes of concatenated containers. For example, fitting a 100 Mbit/s Fast Ethernet connection inside a 155 Mbit/s VC-4/STS-3c container leads to considerable waste.

Now let us discuss how Ethernet can be transported efficiently over an existing SDH/SONET network. Take 10 Mbit/s Ethernet over SDH as an example. A single VC-12 is too small, because the payload of a VC-12 is only 2.176 Mbit/s, while a single VC-3 is inefficient, because the payload of a VC-3 is 48.384 Mbit/s. If 5 x VC-12 are concatenated, however, 10 Mbit/s Ethernet can be transported efficiently over SDH/SONET.
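The sizing arithmetic above is simple enough to check directly. The sketch below uses the payload rates quoted in the text (VC-12 = 2.176 Mbit/s, VC-3 = 48.384 Mbit/s) and finds the smallest number of VC-12 members that covers a given client rate.

```python
# How many VC-12s does a given Ethernet rate need?
# Payload rates (Mbit/s) are the values quoted in the text.
import math

VC12_PAYLOAD = 2.176   # Mbit/s
VC3_PAYLOAD = 48.384   # Mbit/s

def vc12_members(client_rate_mbps: float) -> int:
    """Smallest X such that X concatenated VC-12s carry the client rate."""
    return math.ceil(client_rate_mbps / VC12_PAYLOAD)

x = vc12_members(10.0)
print(x, round(x * VC12_PAYLOAD, 3))  # 5 members -> 10.88 Mbit/s
```

Five VC-12s give 10.88 Mbit/s, a close fit, versus wasting most of a 48.384 Mbit/s VC-3 on the same 10 Mbit/s client.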

Concatenation has two types:-

1. Contiguous Concatenation and
2. Virtual Concatenation.
Contiguous concatenation offers concatenated payloads in fixed, large steps; there is one "towing truck" (POH) for all containers, and all containers follow one path through the network.

Virtual concatenation (VCAT) offers structures at fine granularity: every container has its own "towing truck" (POH), and every container may take a different path. Virtual concatenation allows a more arbitrary assembly of lower-order multiplexing containers, building larger containers of fairly arbitrary size without requiring intermediate SDH/SONET NEs to support this particular form of concatenation. Virtual concatenation is standardized for SDH containers (ITU-T G.707) and SONET containers (ANSI T1.105). It provides a scheme to build right-sized SDH/SONET containers.

The Virtual Concatenation (VCAT) nomenclature is expressed as

VC-n-Xv

where VC-n (n = 4, 3, 2, 12, 11) defines the type of virtual container to be virtually concatenated, X is the number of virtually concatenated containers, and v indicates virtual concatenation.
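A small helper makes the VC-n-Xv notation concrete: the group bandwidth is simply X times the payload of the chosen container. The payload figures below are nominal SDH values used for illustration.

```python
# Sketch: payload bandwidth of a VC-n-Xv group.
# Nominal SDH container payload rates in Mbit/s (illustrative values).
VC_PAYLOAD = {
    "VC-11": 1.600,
    "VC-12": 2.176,
    "VC-3": 48.384,
    "VC-4": 149.760,
}

def vcg_bandwidth(vc_type: str, x: int) -> float:
    """Total payload of VC-n-Xv = X times the VC-n payload."""
    return x * VC_PAYLOAD[vc_type]

print(vcg_bandwidth("VC-4", 7))  # VC-4-7v: ~1048 Mbit/s, a good fit for GbE
```

This is why VC-4-7v is the usual choice for Gigabit Ethernet: seven members land just above 1 Gbit/s, instead of the next fixed contiguous step.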

All X virtual containers together form the Virtual Concatenation Group (VCG). Higher-order virtual concatenation refers to virtually concatenated VC-4 and VC-3, while lower-order virtual concatenation refers to virtually concatenated VC-2, VC-12 and VC-11. Virtual concatenation is considered the primary enhancement to voice-optimized SDH/SONET for supporting the transport of variable-bit-rate data streams. Other recent SDH/SONET enhancements include the Link Capacity Adjustment Scheme (LCAS) and the Generic Framing Procedure (GFP). In conjunction with LCAS and GFP, VCAT offers the advantage of splitting the required bandwidth equally among a set number of sub-paths called Virtual Tributaries (VTs); several Virtual Tributaries together form a Virtual Concatenation Group. The pragmatic benefit of spawning Virtual Tributaries to transport data across a VCAT-enabled network is that in many cases, particularly when the underlying network is relatively congested, splitting the traffic over several distinct paths allows lower-cost solutions than finding a single path that meets the required capacity. Chances are this splitting of paths will also allow shorter paths for the traffic.

The VCAT protocol performs its content delivery through a process called byte interleaving. For example, to provision a Gigabit Ethernet (n = 1 Gb/s) service, we would provision it across VC-4-7v virtual tributaries, where each VCG member carries a bandwidth equivalent of V = n/k (bits per second), in this case with n = 1 Gb/s and k = 7. Typically the data is sent such that the rth byte is put onto VT1, the (r+1)th byte onto VT2, and so on, until the sequence loops back and the next byte is sent on VT1 again. This helps greatly in providing services at lower cost and much faster than contiguous concatenation; however, it is intrinsically bound to the problem of differential delay: each path, represented by a VT, has a different propagation delay across the network, and the difference between these delays is known as the differential delay (D). The major consequence of differential delay is that high-speed buffers are required at the receiving node to store incoming information while all paths converge.

The buffer space can be equated to the bandwidth-delay product, B = n x D, so each VCAT connection requires B bits of buffer space. This need for buffer space increases network cost, so it is important to select paths that minimize the differential delay, which is directly proportional to the buffer space required.
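The B = n x D relation is just an arithmetic product, but putting numbers to it shows why differential delay matters economically:

```python
# Receiver buffer for a VCAT connection: B = n * D
# (connection rate in bit/s times worst-case differential delay in s).

def buffer_bits(rate_bps: float, diff_delay_s: float) -> float:
    return rate_bps * diff_delay_s

# Example: a 1 Gb/s service whose members differ by 10 ms of path delay.
b = buffer_bits(1e9, 0.010)
print(b / 8 / 1e6)  # 1.25 MB of high-speed buffer at the sink
```

At the 256 ms differential-delay tolerance quoted earlier for VCAT, the same 1 Gb/s service would need 32 MB of buffer, which is why path selection that keeps D small directly reduces equipment cost.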

Link Capacity Adjustment Scheme (LCAS)

LCAS is a method to dynamically increase or decrease the bandwidth of virtually concatenated containers. The LCAS protocol is specified in ITU-T G.7042. It allows on-demand increase or decrease of the bandwidth of a virtual concatenation group in a hitless manner. This brings bandwidth-on-demand capability for data clients such as Ethernet when mapped into TDM containers.

LCAS is also able to temporarily remove failed members from the virtual concatenation group. A failed member automatically causes a decrease in bandwidth, and after repair the bandwidth increases again in a hitless fashion. Together with diverse routing, this provides survivability of data traffic without requiring excess protection bandwidth allocation.

Generic Framing Procedure (GFP):

GFP is defined in ITU-T G.7041. It allows mapping of variable-length, higher-layer client signals over a transport network like SDH/SONET. The client signals can be protocol-data-unit (PDU) oriented (like IP/PPP or Ethernet Media Access Control) or block-code oriented (like Fibre Channel).

There are two modes of GFP: Generic Framing Procedure – Framed (GFP-F) and Generic Framing Procedure – Transparent (GFP-T). GFP-F maps each client frame into a single GFP frame. GFP-T, on the other hand, maps multiple 8B/10B block-coded client data streams into an efficient 64B/65B block code for transport within a GFP frame. GFP uses a length/HEC-based frame delineation mechanism that is more robust than the single-octet-flag-based mechanism used by High-Level Data Link Control (HDLC).
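The length/HEC delineation idea can be sketched as follows. In GFP the core header carries a 2-byte payload length indicator (PLI) protected by a CRC-16 cHEC; the receiver hunts through the byte stream for a position where the header self-checks, then jumps PLI bytes ahead to the next frame. The sketch below is illustrative: it uses a plain CCITT CRC-16 with a zero initial value and omits GFP's header scrambling and single-bit error correction.

```python
# Sketch of GFP's length/HEC frame delineation (simplified).

def crc16(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
    """Bit-serial CRC-16; init chosen for illustration."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def make_core_header(pli: int) -> bytes:
    """2-byte payload length indicator + 2-byte cHEC over it."""
    body = pli.to_bytes(2, "big")
    return body + crc16(body).to_bytes(2, "big")

def find_frame(stream: bytes) -> int:
    """Hunt: return the first offset whose 4-byte core header self-checks."""
    for i in range(len(stream) - 3):
        if crc16(stream[i:i + 2]) == int.from_bytes(stream[i + 2:i + 4], "big"):
            return i
    return -1

hdr = make_core_header(pli=64)          # header of a frame with 64 payload bytes
stream = b"\x5a\xa5" + hdr + bytes(64)  # two junk bytes, then the frame
print(find_frame(stream))               # 2: delineation locks onto the header
```

Because a valid header is found arithmetically rather than by a reserved flag byte, no escaping/byte-stuffing of the payload is needed, which is the robustness advantage over HDLC's flag-based delineation.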

There are two types of GFP frames: a GFP client frame and a GFP control frame. A GFP client frame can be further classified as either a client data frame or a client management frame. The former is used to transport client data, while the latter is used to transport point-to-point management information such as loss of signal. Client management frames are differentiated from client data frames by the payload type indicator. The GFP control frame currently consists only of a core header field with no payload area. This frame is used to compensate for the gaps between client signals where the transport medium has a higher capacity than the client signal, and is better known as an idle frame.

A GFP frame consists of:

1. A core header,
2. A payload header,
3. An optional extension header,
4. A GFP payload and
5. An optional payload frame check sequence (FCS).

GFP-F is optimized for bandwidth efficiency at the expense of latency. It encapsulates complete Ethernet (or other) frames with a GFP header. GFP-T is used for low-latency transport of block-coded client signals such as GbE, Fibre Channel, ESCON, FICON and Digital Video Broadcast (DVB). In this mode, small groups of 8B/10B symbols are transmitted rather than waiting for complete frame data.

MPLS Layer-3 VPNs

The layer-3 approach to creating MPLS-based VPNs offers a routed solution to the problem. The de facto standard for implementing such VPNs is described in RFC 2547, with a new version currently under development, referred to as 2547bis and described in draft-ietf-ppvpn-rfc2547bis-01.txt. The approach is also referred to as BGP/MPLS VPNs.

The approach relies on taking customer IP datagrams from a given site, looking up the destination IP address of the datagram in a forwarding table, then sending that datagram to its destination across the provider’s network using an LSP.

In order for the service provider routers to acquire reachability information about a given customer’s networks, the provider edge (PE) routers exchange routes with the customer edge (CE) routers. Hence, the BGP/MPLS VPNs approach follows the peer to peer model of VPNs. These routes are propagated to other PE routers carrying the same VPN(s) via BGP. However, they are never shared with the provider’s core routers (P), since the PEs use LSPs to forward packets from one PE to the other. P routers do not need to know about the customer’s networks in order to perform their label switching functions. A PE router receiving routes of a given VPN site from another PE, propagates the routes to the CE router of the connected site belonging to that same VPN, so that the CE will also learn about the networks in the remote site.

The mechanisms behind BGP/MPLS VPNs were designed to address some of the shortcomings of the pure layer-3 VPNs (without tunneling) that preceded it. Some of the main goals were:
• Supporting globally unique IP addresses on the customer side, as well as private nonunique – and hence, overlapping – addresses.
• Supporting overlapping VPNs, where one site could belong to more than one VPN.

Since this type of VPNs relies on routing, achieving the abovementioned goals could be a challenge. To address the problem of overlapping address spaces in customer VPNs, multiple routing and forwarding tables, referred to as VPN Routing and Forwarding (VRF) tables, are created on each PE router, in order to separate the routes belonging to different VPNs on a PE router.

A VRF table is created for each site connected to the PE, however, if there were multiple sites belonging to the same VPN connected to the same PE, these sites might share a single VRF table on that PE. A site that is a member of multiple VPNs is not a candidate for VRF table sharing with other sites that are not members of exactly the same set of VPNs. Such a site must have its own VRF table, which includes routes from all the VPNs it is a member of.

Another implication of the overlapping address spaces problem is that a PE router receiving BGP updates from its neighbors might receive conflicting or overlapping routes belonging to different VPNs. In order to identify the advertised routes as belonging to different VPNs, and hence prevent the BGP process from selecting one – the best – and ignoring the rest, an 8-octet Route Distinguisher (RD) is prepended to each advertised prefix. This is used to distinguish routes belonging to different VPNs on the BGP receiver side. The result of prepending the RD to the 4-octet IP prefix is a 12-octet address for which a new special address family was defined, the VPN-IPv4 family. Hence, to be precise, Multiprotocol BGP is used to carry such prefixes.
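The 8 + 4 = 12 octet construction can be shown concretely. The sketch below builds a type-0 RD (2-byte ASN plus 4-byte assigned number) and prepends it to an IPv4 prefix; the ASN and assigned-number values are illustrative.

```python
# Sketch: forming a 12-octet VPN-IPv4 address by prepending an 8-octet
# Route Distinguisher (type 0: 2-byte ASN + 4-byte assigned number)
# to a 4-octet IPv4 prefix.
import socket
import struct

def vpn_ipv4(asn: int, assigned: int, prefix: str) -> bytes:
    rd = struct.pack("!HHI", 0, asn, assigned)  # type 0 RD, 8 octets
    return rd + socket.inet_aton(prefix)        # + 4 octets = 12 octets

addr = vpn_ipv4(65000, 1, "10.1.1.0")
print(len(addr), addr.hex())  # 12 0000fde8000000010a010100
```

Two customers can now both advertise 10.1.1.0/24: with different RDs the resulting VPN-IPv4 prefixes are distinct byte strings, so BGP never has to pick a "best" between them.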

Route Distinguishers provide nothing more than a way of differentiating routes. They play no role in controlling route distribution. An RD is assigned to a VRF, so that prefixes advertised from that VRF will have that RD prepended to them. Typically, it makes sense to assign the same RD to the VRFs of sites belonging to the same VPN, so that all the routes of that VPN will have the same distinguisher. So, it could be said that RDs are typically assigned uniquely to each VPN. However, this does not mean that VRFs of sites that belong to multiple VPNs get multiple RDs. VRFs of such sites need only one RD. For those sites, as well as those that are members of only one VPN, controlling the distribution of routes is performed as described below.

To prevent a PE router from accepting routes of VPNs that it doesn't carry, and hence wasting its own resources, BGP extended communities are put to use in order to control the distribution of routes within the provider's network. The extended community attribute Route Target is included with the advertised route(s) to indicate which VPN – or group of sites in certain topologies – the route belongs to. A unique value for this attribute is assigned to each customer VPN. A PE router keeps track of the Route Target values associated with the VPNs that it carries. Upon receipt of an advertised route, the BGP process checks the Route Target to see if it matches the Route Target value of one of the VPNs that it carries. In case of a match, the route is accepted; if not, the route is ignored. This avoids having all the PE routers carry all the routes of all the customer VPNs, which would severely limit the scalability of the solution.
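The import decision described above reduces to a set-intersection test per local VRF. The sketch below is illustrative (the VRF names and RT strings are invented for the example, not a vendor API):

```python
# Sketch of route-target import filtering at a PE: a received route is
# installed only into local VRFs whose import targets intersect the
# route's route-target communities.

def importing_vrfs(route_targets: set[str],
                   vrf_imports: dict[str, set[str]]) -> list[str]:
    """Return the local VRFs that should import a route with these RTs."""
    return [vrf for vrf, imports in vrf_imports.items()
            if imports & route_targets]

vrfs = {"cust-A": {"65000:1"}, "cust-B": {"65000:2"}}
print(importing_vrfs({"65000:1"}, vrfs))   # ['cust-A']
print(importing_vrfs({"65000:99"}, vrfs))  # []  -> route ignored entirely
```

A route carrying several Route Targets (the overlapping-VPN case discussed below) simply matches several VRFs at once, which is how one site's routes can land in both an intranet and an extranet VPN.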

Figure 1 illustrates the main concepts behind the BGP/MPLS VPNs approach.

Figure 1 The BGP/MPLS VPN approach.

From the discussion above, it could be seen that the approach allows for creating overlapping VPNs. This is intended for scenarios like when a customer needs a VPN for their intranet, and another for their extranet with a different set of routes advertised in each to control the accessibility of resources. Such a customer would rely on the service provider to perform the required route control, i.e., route control is shifted from the CE router and delegated to the PE router. In Figure 1, Customer A, Site 1, lies in both VPN 1 and VPN 2. The routes of that site are advertised by the connected PE router with one RD, however, with two Route Target extended community attributes: one for VPN 1, the other for VPN 2. The connected PE router, also, accepts routes from the other PE routers, only if the routes have Route Target values equal to that value of either VPN 1 or VPN 2 – since these are the only VPNs carried by this router in this example.

When advertising a VPN-IPv4 route, the PE also includes an MPLS label – representing the route – in the BGP message, and it sets the BGP NEXT_HOP equal to its own address. The provider network is MPLS enabled, and each PE router should be capable of reaching any of the other PEs via an LSP. Those LSPs could be created by any protocol like LDP or RSVP/TE.

When a PE receives a packet with a destination in a remote site, it attaches two MPLS labels to the packet in order to forward it to its destination. The outer label is for the LSP leading to the BGP NEXT_HOP. The inner label is the label associated with that destination, learned previously from a BGP update received from a peer. The PE then sends the frame out the port associated with that LSP. The frame gets label-switched all the way to the remote PE, which pops the outer label and examines the inner label. The inner label, in most cases, uniquely identifies the destination, so it is popped and the packet is forwarded to its destination. In some cases, where route summarization is done on the PE, the receiving PE uses the inner label to determine which VRF to look into in order to know where to send the packet.
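The ingress-side label push can be sketched with two toy lookup tables. Labels, PE names and the prefix below are invented for the example; real PEs would populate these from LDP/RSVP-TE and BGP respectively.

```python
# Sketch of the two-label push at an ingress PE: the outer label reaches
# the BGP next hop (remote PE), the inner label identifies the VPN
# destination there.

lsp_to_pe = {"PE2": 100}                     # transport LSP label per remote PE
vpn_routes = {"10.2.0.0/16": ("PE2", 2001)}  # prefix -> (next-hop PE, VPN label)

def ingress_push(prefix: str) -> list[int]:
    """Build the [outer, inner] label stack for a VPN destination."""
    next_hop_pe, inner_label = vpn_routes[prefix]
    return [lsp_to_pe[next_hop_pe], inner_label]

print(ingress_push("10.2.0.0/16"))  # [100, 2001]
```

Note that only the outer label changes hop by hop inside the provider core; the inner label rides untouched until the egress PE pops the stack, which is why the P routers never need the customer routes.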

Figure 2 Two labels are attached to an IP datagram to be forwarded to its destination.