SDH/SONET development was originally driven by the need to transport multiple E1 and E3 streams, along with the other groups of multiplexed 64 kbit/s PCM voice traffic. The ability to transport ATM traffic was another early application.
By the year 2000, telecommunications had shifted from traditional voice transport to data transport, although digitized voice still made a large contribution. Rather than a revolution that replaced the existing transport standards, an evolution was needed to add data-related transport to them. This evolution produced the new SDH/SONET concepts described below.
Today, SDH/SONET is the deployed technology in the core network, with a huge investment in capacity. Ethernet is the dominant technology of choice in LANs and is well known in enterprises worldwide. Data traffic is still growing, but more slowly than expected, and network topologies based on an IP/Ethernet-only approach have shifted into the long-term future. Today's practical path is therefore to bring SDH/SONET and Ethernet together.
The customer expects from the operator quality of service and bandwidth at lower cost, native data interfaces, and the ability to use and build on what he already knows. The operator, on the other hand, wants to reduce operational costs, realize revenue-earning services, use the bandwidth of the core network, keep investment low, achieve immediate ROI, and close the bottleneck at the network edge. The way to satisfy both customer and operator is to make SDH/SONET flexible and data-aware at the edge while still using the existing core.
Ethernet vs. SONET/SDH:

    Ethernet               SONET/SDH
    Mass market            Carrier-class market
    Dynamic bandwidth      Fixed bandwidth
    Connectionless         Connection-oriented
    Best-effort service    High quality of service
In order to support large ATM bandwidths, the technique of concatenation was developed, whereby smaller SDH/SONET multiplexing containers are combined to build a larger container (e.g. a VC-4/STS-3c) that supports large data-oriented pipes. SDH/SONET is therefore able to transport both voice and data simultaneously.
Concatenation is the process by which multiple virtual containers are associated with one another, so that their combined capacity can be used as a single container across which bit-sequence integrity is maintained.
One problem with traditional concatenation, however, is inflexibility. Depending on the mix of data and voice traffic that must be carried, a large amount of bandwidth can be left over, due to the fixed sizes of concatenated containers. For example, fitting a 100 Mbit/s Fast Ethernet connection inside a 155 Mbit/s VC-4/STS-3c container leads to considerable waste.
Now let us discuss how Ethernet can be transported efficiently over an existing SDH/SONET network. Take 10 Mbit/s Ethernet over SDH as an example. A single VC-12 is too small, because its payload is only 2.176 Mbit/s; a single VC-3 is inefficient, because its payload is 48.384 Mbit/s. But if 5 x VC-12 are concatenated, 10 Mbit/s Ethernet can be transported efficiently over SDH/SONET.
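The arithmetic behind this container choice can be sketched as follows. The payload rates are the VC-12 and VC-3 figures quoted above; the helper name is illustrative, not from any standard API.

```python
# Sketch: comparing container choices for carrying 10 Mbit/s Ethernet
# over SDH. Payload rates are the C-12 / C-3 figures from the text.

ETHERNET_RATE = 10.0   # Mbit/s to be transported
VC12_PAYLOAD = 2.176   # Mbit/s payload of one VC-12
VC3_PAYLOAD = 48.384   # Mbit/s payload of one VC-3

def efficiency(client_rate, container_rate):
    """Fraction of the container payload actually used by the client."""
    return client_rate / container_rate

# One VC-12 is too small; one VC-3 wastes most of its payload:
print(f"1 x VC-3 : {efficiency(ETHERNET_RATE, VC3_PAYLOAD):.1%} used")

# Virtually concatenating 5 x VC-12 (VC-12-5v) gives 10.88 Mbit/s:
vcg_rate = 5 * VC12_PAYLOAD
print(f"VC-12-5v : {vcg_rate:.2f} Mbit/s, "
      f"{efficiency(ETHERNET_RATE, vcg_rate):.1%} used")
```

A single VC-3 is used at only about 21%, while the VC-12-5v group is used at about 92%.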
Concatenation has two types:
1. Contiguous Concatenation and
2. Virtual Concatenation.
Contiguous concatenation offers concatenated payloads in fixed, large steps; there is one "towing truck" (path overhead, POH) for all containers, and all containers follow one path through the network.
Virtual concatenation (VCAT) offers structures at fine granularity: every container has its own "towing truck" (POH), and every container may take a different path. Virtual concatenation allows a more arbitrary assembly of lower-order multiplexing containers, building large containers of fairly arbitrary size without requiring intermediate SDH/SONET network elements to support this particular form of concatenation. Virtual concatenation is standardized for SDH containers (ITU-T G.707) and SONET containers (ANSI T1.105). It thus provides a scheme for building right-sized SDH/SONET containers.
Virtual concatenation (VCAT) nomenclature is expressed as VC-n-Xv, where VC-n (n = 4, 3, 2, 12, 11) defines the type of virtual container to be virtually concatenated, X is the number of virtually concatenated containers, and v indicates virtual concatenation.
All X virtual containers together form the Virtual Concatenation Group (VCG). Higher-order virtual concatenation refers to virtually concatenated VC-4 and VC-3 containers; lower-order virtual concatenation refers to virtually concatenated VC-2, VC-12 and VC-11 containers. Virtual concatenation is considered the primary enhancement to voice-optimized SDH/SONET for supporting the transport of variable-bit-rate data streams. Other recent SDH/SONET enhancements include the Link Capacity Adjustment Scheme (LCAS) and the Generic Framing Procedure (GFP). In conjunction with LCAS and GFP, VCAT allows the required bandwidth to be split equally among a set number of sub-paths called Virtual Tributaries (VTs); several virtual tributaries together form a virtual concatenation group. The practical benefit of spawning virtual tributaries to transport data across a VCAT-enabled network is that in many cases, particularly when the underlying network is relatively congested, splitting the traffic over several distinct paths allows lower-cost solutions than finding a single path that meets the required capacity. Chances are this splitting of paths will also yield shorter paths over which to channel the traffic.
The VCAT protocol performs its content delivery through a process called byte-interleaving. For example, to provision a Gigabit Ethernet (n = 1 Gbit/s) service across a VC-4-7v group, each of the VCG members carries a bandwidth equivalent of V = n/k bit/s, where in this case n = 1 Gbit/s and k = 7. The data is sent such that the r-th byte is put onto VT1, the (r+1)-th byte onto VT2, and so on, until the sequence loops back and the next byte is sent on VT1 again. This helps greatly in providing services at lower cost and much faster than contiguous concatenation; it is, however, intrinsically bound to the problem of differential delay. Each path, represented by a VT, has a different propagation delay across the network, and the difference between these delays is known as the differential delay (D). The major problem with differential delay is that high-speed buffers are required at the receiving node to store incoming information while all paths converge.
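The round-robin byte distribution described above can be sketched as follows; the function names are illustrative, and this models only the interleaving pattern, not the SDH framing around it.

```python
# Sketch of VCAT byte-interleaving: consecutive payload bytes are
# distributed round-robin over the k members of the VCG, so member i
# carries every k-th byte of the stream.

def interleave(payload: bytes, k: int) -> list[bytes]:
    """Split a payload across k VCG members, byte by byte."""
    return [payload[i::k] for i in range(k)]

def deinterleave(members: list[bytes]) -> bytes:
    """Reassemble the original byte stream at the sink."""
    k = len(members)
    out = bytearray(sum(len(m) for m in members))
    for i, m in enumerate(members):
        out[i::k] = m          # member i's bytes go back to slots i, i+k, ...
    return bytes(out)

data = bytes(range(10))
parts = interleave(data, 3)    # member 0 gets bytes 0, 3, 6, 9
assert deinterleave(parts) == data
```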
The buffer space can be equated to the bandwidth-delay product, B = n * D, so each VCAT connection requires B bits of buffer space. This need for buffer space increases the network cost, so it is very important to select paths that minimize the differential delay, which is directly proportional to the buffer space required.
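As a worked instance of B = n * D (the figures below are illustrative, not from the text):

```python
# Receive-buffer requirement for a VCG: B = n * D, where n is the group
# rate in bit/s and D is the differential delay in seconds between the
# fastest and slowest member paths.

def buffer_bits(rate_bps: float, diff_delay_s: float) -> float:
    """Buffer space B = n * D needed while the slow members catch up."""
    return rate_bps * diff_delay_s

# Example: a 1 Gbit/s VCG with 2 ms of differential delay
b = buffer_bits(1e9, 2e-3)          # 2 Mbit
print(f"{b / 8 / 1024:.0f} KiB of buffer")   # prints "244 KiB of buffer"
```

Halving the differential delay halves the buffer, which is why delay-aware path selection matters.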
Link Capacity Adjustment Scheme (LCAS):
LCAS is a method to dynamically increase or decrease the bandwidth of virtually concatenated containers. The LCAS protocol is specified in ITU-T G.7042. It allows on-demand increase or decrease of the bandwidth of a virtual concatenation group in a hitless manner. This brings bandwidth-on-demand capability for data clients like Ethernet when mapped into TDM containers.
LCAS is also able to temporarily remove failed members from the virtual concatenation group. A failed member automatically causes a decrease of the bandwidth, and after repair the bandwidth increases again in a hitless fashion. Together with diverse routing, this provides survivability of data traffic without requiring excess protection-bandwidth allocation.
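The add/fail/repair behaviour can be modelled very roughly as below. This is a simplified sketch of what LCAS achieves, not the actual G.7042 control-word exchange (ADD, NORM, DNU, etc.); the class and method names are invented for illustration.

```python
# Toy model of an LCAS-managed VCG: usable bandwidth tracks the set of
# active members while the group itself stays up ("hitless" resizing).

VC12_PAYLOAD = 2.176  # Mbit/s per VC-12 member

class VCGroup:
    def __init__(self):
        self.members = {}                 # member id -> "active" | "failed"

    def add(self, mid):
        self.members[mid] = "active"      # bandwidth increase on demand

    def fail(self, mid):
        self.members[mid] = "failed"      # member temporarily removed

    def repair(self, mid):
        self.members[mid] = "active"      # hitless bandwidth restoration

    def bandwidth(self):
        """Usable group bandwidth in Mbit/s (active members only)."""
        n = sum(1 for s in self.members.values() if s == "active")
        return n * VC12_PAYLOAD

vcg = VCGroup()
for i in range(5):
    vcg.add(i)        # VC-12-5v: 10.88 Mbit/s
vcg.fail(2)           # one member fails: group drops to 8.704 Mbit/s
vcg.repair(2)         # after repair: back to 10.88 Mbit/s, group never down
```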
Generic Framing Procedure (GFP):
GFP is defined by ITU-T G.7041. It allows mapping of variable-length, higher-layer client signals over a transport network like SDH/SONET. The client signals can be protocol data unit (PDU) oriented (like IP/PPP or Ethernet Media Access Control) or block-code oriented (like Fibre Channel).
There are two modes of GFP: GFP-Framed (GFP-F) and GFP-Transparent (GFP-T). GFP-F maps each client frame into a single GFP frame. GFP-T, on the other hand, allows mapping of multiple 8B/10B block-coded client data streams into an efficient 64B/65B block code for transport within a GFP frame. GFP uses a length/HEC-based frame delineation mechanism that is more robust than the single-octet-flag-based mechanism used by High-Level Data Link Control (HDLC).
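The length/HEC delineation idea can be sketched as follows: the 4-byte core header is a 16-bit payload length indicator (PLI) followed by a 16-bit CRC (cHEC, generator x^16 + x^12 + x^5 + 1), and the receiver hunts byte by byte until it finds a window whose CRC checks out. This is a simplified model; real GFP additionally scrambles the core header and uses single-bit error correction, both omitted here.

```python
# Sketch of GFP length/HEC frame delineation (simplified).

def crc16(data: bytes) -> int:
    """MSB-first CRC, generator 0x1021 (x^16+x^12+x^5+1), zero init."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def core_header(pli: int) -> bytes:
    """Build a core header: 2-byte PLI + 2-byte cHEC over the PLI."""
    body = pli.to_bytes(2, "big")
    return body + crc16(body).to_bytes(2, "big")

def hunt(stream: bytes) -> int:
    """Slide byte by byte; return the first offset whose cHEC verifies."""
    for i in range(len(stream) - 3):
        if crc16(stream[i:i + 2]) == int.from_bytes(stream[i + 2:i + 4], "big"):
            return i
    return -1

frame = core_header(pli=100) + bytes(100)   # header + 100-byte payload
noise = b"\x55\xaa\x31"                     # garbage before the frame
assert hunt(noise + frame) == len(noise)    # delineation locks on the header
```

Unlike HDLC's flag byte, no payload escaping is needed: a false header candidate is rejected because its CRC does not verify.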
There are two types of GFP frames: a GFP client frame and a GFP control frame. A GFP client frame can be further classified as either a client data frame or a client management frame. The former is used to transport client data, while the latter is used to transport point-to-point management information like loss of signal, etc. A client management frame can be differentiated from a client data frame by the payload type indicator. The GFP control frame currently consists only of a core header field with no payload area. This frame is used to compensate for the gaps between client signals where the transport medium has a higher capacity than the client signal, and is better known as an idle frame.
A GFP frame consists of:
1. A core header,
2. A payload header,
3. An optional extension header,
4. A GFP payload, and
5. An optional payload frame check sequence (FCS).
GFP-F is optimized for bandwidth efficiency at the expense of latency. It encapsulates complete Ethernet frames (or other frame types) with a GFP header. GFP-T is used for low-latency transport of block-coded client signals such as GbE, Fibre Channel, ESCON, FICON, and Digital Video Broadcast (DVB). In this mode, small groups of 8B/10B symbols are transmitted rather than waiting for a complete frame of data.
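The coding-efficiency gain behind GFP-T's 64B/65B transcoding is simple arithmetic: native 8B/10B coding carries 8 data bits in 10 line bits, while 64B/65B carries 64 data bits (eight client characters) in 65 bits.

```python
# Coding efficiency of the two block codes mentioned in the text.

eff_8b10b = 8 / 10     # 80.0% of line bits carry data
eff_64b65b = 64 / 65   # ~98.5% of line bits carry data

print(f"8B/10B : {eff_8b10b:.1%}")
print(f"64B/65B: {eff_64b65b:.1%}")
```

This is why transcoding to 64B/65B lets GFP-T carry an 8B/10B client in considerably less SDH/SONET bandwidth than transporting the raw line code would.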