ATM: Asynchronous Transfer Mode Protocol

Asynchronous Transfer Mode, or ATM for short, is a cell relay network protocol which encodes data traffic into small, fixed-size cells (53 bytes: 48 bytes of payload and 5 bytes of header information) instead of the variable-sized packets used in packet-switched networks (such as the Internet Protocol or Ethernet).
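The fixed cell geometry can be expressed in a few lines of arithmetic (a sketch in Python; the constant names are ours, not part of any ATM standard):

```python
# ATM cell geometry: every cell is exactly 53 bytes on the wire.
HEADER_BYTES = 5
PAYLOAD_BYTES = 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES  # 53

# Per-cell overhead ("cell tax") from the header alone:
overhead = HEADER_BYTES / CELL_BYTES
print(f"{overhead:.1%} of every cell is header")  # 9.4%
```

That fixed ~9.4% header overhead is one of the trade-offs discussed below when ATM is compared with variable-length packet transports.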

ATM was intended to provide a single unified networking standard that could support both synchronous channel networking (PDH, SDH) and packet-based networking (IP, Frame Relay, etc.), whilst supporting multiple levels of quality of service for packet traffic. ATM sought to resolve the conflict between circuit-switched and packet-switched networks by mapping both bitstreams and packet streams onto a stream of small fixed-size ‘cells’ tagged with virtual circuit identifiers. The cells are typically sent on demand within a synchronous time-slot pattern in a synchronous bit-stream: what is asynchronous here is the sending of the cells, not the low-level bitstream that carries them. In its original conception, ATM was to be the enabling technology of the ‘Broadband Integrated Services Digital Network’ (B-ISDN) that would replace the existing PSTN.

The full suite of ATM standards provides definitions for layer 1 (physical connections), layer 2 (data link layer) and layer 3 (network layer) of the classical OSI seven-layer networking model. The ATM standards drew on concepts from the telecommunications community rather than the computer networking community, and extensive provision was therefore made for integrating most existing telco technologies and conventions into ATM. As a result, ATM is a highly complex technology, with features intended for applications ranging from global telco networks to private local area computer networks. ATM has been a partial success as a technology: it has seen widespread deployment, but generally only as a transport for IP traffic, and its goal of providing a single integrated technology for LANs, public networks, and user services has largely failed.

Numerous telcos have implemented wide-area ATM networks, and many ADSL implementations use ATM. However, ATM has failed to gain wide use as a LAN technology, and its great complexity has held back its full deployment as the single integrating network technology in the way that its inventors originally intended.

Many people, particularly in the Internet protocol-design community, considered this vision to be mistaken. Their argument went something like this: We know that there will always be both brand-new and obsolescent link-layer technologies, particularly in the LAN area, and it is fair to assume that not all of them will fit neatly into the SDH model that ATM was designed for. Therefore, some sort of protocol is needed to provide a unifying layer over both ATM and non-ATM link layers, and ATM itself cannot fill that role. Conveniently, we have this protocol called “IP” which already does that. Ergo, there is no point in implementing ATM at the network layer.

In addition, the need for cells to reduce jitter has disappeared as transport speeds increased (see below), and improvements in voice over IP have made the integration of speech and data possible at the IP layer, again removing the incentive for ubiquitous deployment of ATM. Most telcos are now planning to integrate their voice network activities into their IP networks, rather than vice versa.

Many technically sound ideas from ATM were adopted by MPLS, a generic Layer 2 packet switching protocol. ATM remains widely deployed, and is used as a multiplexing service in DSL networks, where its compromises fit DSL’s low-data-rate needs well (in turn, DSL networks support IP - and IP services such as VoIP - via Point-to-Point Protocol over ATM).

ATM will remain deployed for some time in higher-speed interconnects where carriers have already committed themselves to existing ATM deployments; ATM is used here as a way of unifying PDH/SDH traffic and packet-switched traffic under a single infrastructure.

However, ATM is increasingly challenged by the speed and traffic-shaping requirements of converged networks. In particular, the complexity of segmentation and reassembly (SAR) imposes a performance bottleneck: the fastest SAR devices known run at 2.5 Gbit/s and have limited traffic-shaping capabilities.

Currently it seems likely that Ethernet implementations (10 Gigabit Ethernet, Metro Ethernet) will replace ATM in many locations.

Why cells?
The motivation for the use of small data cells was the reduction of jitter (delay variance, in this case) in the multiplexing of data streams; reduction of this (and also end-to-end round-trip delays) is particularly important when carrying voice traffic.

This is because the conversion of digitized voice back into an analog audio signal is an inherently real-time process, and to do a good job, the codec that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence - and if the data does arrive, but late, it is useless, because the time period when it should have been converted to a signal has already passed.

Now consider a speech signal reduced to packets and forced to share a link with bursty data traffic (i.e. some of the data packets will be large). No matter how small the speech packets were made, they would always encounter full-size data packets ahead of them in the queue and, under normal queuing conditions, might experience maximum queuing delays.

At the time ATM was designed, 155 Mbit/s SDH (135 Mbit/s payload) was considered a fast optical network link, and many PDH links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the USA (2 to 34 Mbit/s in Europe).

At 135 Mbit/s, a typical full-length 1500 byte (12,000 bit) data packet would take 89 µs to transmit. On a lower-speed link, such as a 1.544 Mbit/s T1 link, the same packet would take up to 7.8 milliseconds.
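These serialization-delay figures follow directly from frame size and link rate; a small sketch in Python (the function name is ours) reproduces them:

```python
# Serialization delay of a frame on a link: time = bits / link_rate.
def tx_time_us(frame_bytes: int, rate_bps: float) -> float:
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / rate_bps * 1e6

# Figures from the text:
print(tx_time_us(1500, 135e6))    # ~88.9 µs on a 135 Mbit/s SDH payload
print(tx_time_us(1500, 1.544e6))  # ~7772 µs, i.e. ~7.8 ms, on a T1
print(tx_time_us(53, 1.544e6))    # ~275 µs for a single 53-byte cell on a T1
```

The last line shows the point of cells: on a T1, the worst case another stream can be delayed behind one unit of transmission drops from ~7.8 ms (a full packet) to ~275 µs (one cell).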

A queueing delay induced by several such data packets might be several times the figure of 7.8 ms, in addition to any packet generation delay in the shorter speech packet. This was clearly unacceptable for speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce good-quality sound. A packet voice system can produce this in one of two ways:

Have a playback buffer between the network and the codec, one large enough to tide the codec over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced by passage through the buffer would be such that echo cancellers would be required even in local networks; this was considered too expensive at the time. It would also have increased the delay across the channel, and human conversational mechanisms tend not to work well with high-delay channels.
Build a system which can inherently provide low jitter (and low overall delay) to traffic which needs it.
The latter was the solution adopted by ATM. However, to provide short queueing delays while still being able to carry large datagrams, it had to have cells. ATM broke all packets, data and voice streams up into 48-byte chunks, adding a 5-byte routing header to each one so that they could be re-assembled later, and multiplexed these 53-byte cells instead of packets. Doing so reduced the worst-case queuing jitter by a factor of almost 30, removing the need for echo cancellers.
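The chunking itself is simple to sketch (in Python; this naive version ignores the real AAL padding and trailer rules, described below, and only shows the cell arithmetic):

```python
def segment(packet: bytes, header: bytes) -> list[bytes]:
    """Naive segmentation sketch: split a packet into 48-byte payloads,
    prepending a 5-byte cell header to each and zero-padding the last
    chunk. Real ATM Adaptation Layers add trailers and padding rules."""
    assert len(header) == 5
    cells = []
    for i in range(0, len(packet), 48):
        chunk = packet[i:i + 48].ljust(48, b"\x00")
        cells.append(header + chunk)
    return cells

cells = segment(bytes(1500), b"\x00" * 5)
print(len(cells))                         # 32 cells for a 1500-byte packet
print(all(len(c) == 53 for c in cells))   # True: every cell is 53 bytes
```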

Cells in practice
The rules for segmenting and reassembling packets and streams into cells are known as ATM Adaptation Layers. The most important two are AAL 1, used for streams, and AAL 5, used for most types of packets. Which AAL is in use for a given cell is not encoded in the cell: instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis.
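For AAL 5, the framing arithmetic can be sketched as follows (the AAL5 trailer is 8 bytes, carrying a 16-bit length field and a 32-bit CRC among other fields, and the PDU is padded so that payload plus trailer fills a whole number of 48-byte cell payloads; the function name is ours):

```python
# Sketch of AAL5 cell-count arithmetic: payload + pad + 8-byte trailer
# must be a whole number of 48-byte cell payloads.
def aal5_cells(payload_len: int) -> int:
    """Number of 53-byte cells needed to carry one AAL5 PDU."""
    total = payload_len + 8      # payload plus the 8-byte AAL5 trailer
    return -(-total // 48)       # ceiling division

print(aal5_cells(1500))  # 32 cells: ceil((1500 + 8) / 48) = 32
print(aal5_cells(40))    # 1 cell: a small packet still costs a full cell
```

Note the quantization: a 40-byte payload occupies one full 53-byte cell, which is part of the "cell tax" that made ATM unattractive for small-packet IP traffic.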

Since then, networks have become much faster. Now (2001) a 1500 byte (12,000 bit) full-size Ethernet packet takes only 1.2 µs to transmit on a 10 Gbit/s optical network, removing the need for small cells to reduce jitter, and some consider that this removes the need for ATM in the network backbone. Additionally, the hardware for implementing the service adaptation for IP packets is expensive at very high speeds. Specifically, the cost of segmentation and re-assembly (SAR) hardware at OC-3 and above speeds makes ATM less competitive for IP than Packet over SONET (POS). SAR performance limits mean that the fastest IP router ATM interfaces are OC-12 to OC-48 (STM-4 to STM-16), while POS can operate at OC-192 (STM-64) (2004), with higher speeds expected in the future.

On slow links (2 Mbit/s and below) ATM still makes sense, and this is why so many ADSL systems use ATM as an intermediate layer between the physical link layer and a Layer 2 protocol like PPP or Ethernet.

At these lower speeds, ATM’s capability to carry multiple logical circuits on a single physical or virtual medium provides a compelling business advantage. DSL can be used as an access method for an ATM network, allowing a DSL termination point in a telephone central office to connect to many internet service providers across a wide area ATM network. In the United States, at least, this has allowed DSL providers to provide DSL access to the customers of many internet service providers. Since one DSL termination point can support multiple ISPs, the economic feasibility of DSL is substantially improved.

Why virtual circuits?
ATM is a channel-based transport layer, a concept encompassed in its Virtual Paths (VPs) and Virtual Circuits (VCs). Every ATM cell has an 8- or 12-bit Virtual Path Identifier (VPI) and a 16-bit Virtual Channel Identifier (VCI) pair defined in its header. The length of the VPI varies according to whether the cell is sent on the user-network interface (at the edge of the network) or on the network-network interface (inside the network).

As these cells traverse an ATM network, switching is achieved by changing the VPI/VCI values. Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is (unlike IP, where any given packet could reach its destination by a different route from the preceding and following packets).
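This label-swapping behaviour can be sketched as a per-switch lookup table (in Python; the ports and VPI/VCI values here are invented for illustration):

```python
# Label-swapping sketch: each switch maps (in_port, VPI, VCI) to
# (out_port, new_VPI, new_VCI). All identifiers below are made up.
table = {
    (1, 0, 100): (3, 0, 42),  # rewrite VCI 100 -> 42, forward to port 3
    (2, 5, 33):  (1, 7, 33),  # rewrite VPI 5 -> 7, forward to port 1
}

def switch_cell(in_port: int, vpi: int, vci: int) -> tuple[int, int, int]:
    """Look up a cell's labels and return (out_port, new_vpi, new_vci)."""
    return table[(in_port, vpi, vci)]

print(switch_cell(1, 0, 100))  # (3, 0, 42)
```

The tables are populated when the virtual circuit is set up, which is why the circuit, rather than any individual cell, carries the routing state.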

Another advantage of the use of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, n*64 channels, IP, SNA etc.) to share a common ATM connection without interfering with one another.

Using cells and virtual circuits for traffic engineering
Another key ATM concept is that of the traffic contract. When an ATM circuit is set up, each switch is informed of the traffic class of the connection.

ATM traffic contracts are part of the mechanism by which “Quality of Service” (QoS) is ensured. There are four basic types (and several variants), each with a set of parameters describing the connection.

CBR - Constant bit rate: you specify a Peak Cell Rate (PCR), which is what you get.
VBR - Variable bit rate: you specify an average cell rate, which can peak at a certain level for a maximum time.
ABR - Available bit rate: you specify a minimum rate which is guaranteed.
UBR - Unspecified bit rate: you get whatever is left after all other traffic has had its bandwidth.
VBR has realtime and non-realtime variants and is used for “bursty” traffic.

Most traffic classes also introduce the concept of Cell Delay Variation Tolerance (CDVT), which defines the permitted “clumping” of cells in time.

Traffic contracts are usually maintained by the use of “Shaping”, a combination of queuing and marking of cells, and enforced by “Policing”.

Traffic shaping
This is usually done at the entry point to an ATM network and attempts to ensure that the cell flow will meet its traffic contract. See separate article for more information.

Traffic policing
To maintain network performance, it is possible to police virtual circuits against their traffic contracts. If a circuit is exceeding its contract, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit, identifying a cell as discardable further down the line. Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic, as discarding a single cell invalidates the whole packet anyway. As a result, schemes such as Partial Packet Discard (PPD) and Early Packet Discard (EPD) have been created that discard a whole series of cells until the next frame starts. This reduces the number of redundant cells in the network, saving bandwidth for full frames. EPD and PPD work with AAL5 connections, as they use the frame-end bit to detect the end of packets.
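Cell-by-cell policing in ATM is commonly described by the Generic Cell Rate Algorithm (GCRA). A sketch of its virtual-scheduling form, in Python, with illustrative parameter values of our choosing:

```python
class GCRA:
    """Virtual-scheduling form of the Generic Cell Rate Algorithm.
    T is the expected inter-cell interval (1/PCR); tau is the
    tolerance (CDVT). A cell arriving earlier than TAT - tau is
    non-conforming and may be dropped or have its CLP bit marked."""
    def __init__(self, T: float, tau: float):
        self.T, self.tau = T, tau
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforms(self, t: float) -> bool:
        if t < self.tat - self.tau:
            return False  # too early: non-conforming, TAT unchanged
        self.tat = max(t, self.tat) + self.T
        return True

# One cell per time unit expected, with a tolerance of 0.5:
g = GCRA(T=1.0, tau=0.5)
print([g.conforms(t) for t in (0.0, 1.0, 1.2, 2.0)])
# [True, True, False, True] - the third cell arrives too early
```

The `tau` parameter is how CDVT enters the picture: a larger tolerance permits more "clumping" of cells before the policer acts.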

Types of virtual circuits and paths
Virtual circuits and virtual paths can be built statically or dynamically. Static circuits (permanent virtual circuits, or PVCs) and paths (permanent virtual paths, or PVPs) require that the provisioner build the circuit as a series of segments, one for each pair of interfaces through which it passes.

PVPs and PVCs are conceptually simple, but require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service “contract”) and the two endpoints.

Finally, switched virtual circuits (SVCs) are built and torn down on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches are inter-connected by ATM. SVCs were also used in attempts to replace local area networks with ATM.

Virtual circuit routing and call admission
Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network Node Interface or Private Network-to-Network Interface (PNNI) protocol. PNNI uses the same shortest-path-first algorithm that OSPF and IS-IS use to route IP packets in order to share topology information between switches and select a route through the network. PNNI also includes a very powerful summarization mechanism that allows construction of very large networks, as well as a call admission control (CAC) algorithm that determines whether sufficient bandwidth is available on a proposed route to satisfy the service requirements of a VC or VP.

From Wikipedia article on Asynchronous Transfer Mode


Thirst for bandwidth drives optical comms to 40Gbit/s

The optical communications network looks set to finally move up to 40Gbit/s data rates in response to demand from residential customers for the greater bandwidth needed for new services.

“With residential customers there is a thirst and expectation for bandwidth which is really driving this move from first to second to third gear in the core network,” said Peter Collingwood, v-p sales EMEA at JDSU Communications Test. “They expect 2Mbit/s and, for online gaming, 10Mbit/s or 20Mbit/s. 40G is being pulled by the demand from us at home.”

Collingwood said the firm is already supplying 40Gbit/s equipment for early testing and expects to see it roll out in 12-18 months’ time.

“We’re seeing it re-emerge. Two to three years ago, just when things started to get ugly for the marketplace, 40Gbit/s had started and then somebody put on the handbrake,” said Collingwood. “In the last 6-12 months we’ve seen renewed interest in 40Gbit/s. Usually there’s no smoke without fire, and when you get requests from the Siemens, Nortels and Lucents of this world then there’s obviously a demand there.”

Collingwood joined JDSU, formerly JDS Uniphase, when it acquired test firm Acterna to strengthen and widen its expertise.

“JDSU was, fundamentally as a company, designed around optical communications,” said Enzo Signore, v-p corporate marketing at JDSU. “We made a number of acquisitions to grow the company in optical communications. The company has gone through significant restructuring to reinvent itself.”

Signore said the optical market grew by 20 per cent last year and this year by 10-15 per cent. “We see sustained growth pretty much across the entire optical market.”

Source


Ericsson selected by Telstra to provide a national 3G/WCDMA network with HSDPA

Telstra has chosen Ericsson to provide a national 3G/WCDMA network, based on WCDMA 850 MHz. The network will connect all of Telstra’s mobile customers to one national network, increasing coverage and the selection of services.

Under a memorandum of understanding signed today Ericsson has been appointed to deliver and deploy radio access equipment, core infrastructure and services in support of Telstra including design, installation, integration and project management. The parties will finalise their commercial arrangements as soon as possible.

Håkan Eriksson, Chief Technology Officer for Ericsson, said at a media and analyst briefing, that the migration to one national network would help deliver the best possible services and performance to Telstra customers across the country.

“With one national network in Australia there is greater freedom for local and international roaming, stronger community value and greater choice of handsets and services,” said Mr. Eriksson.

The national 3G network will offer all Australians equivalent access to higher-speed information and access to exciting, valuable new services and experiences, such as telemedicine, distance education, public safety and entertainment video mobile services.

The new network will more than triple the choice of handsets for rural customers.
WCDMA is the dominant 3G technology, selected by 8 of the world’s 10 largest operators. Ericsson expects that by 2007, as much as 80 percent of all mobile subscribers will be served by the GSM/EDGE/WCDMA family, in which Ericsson holds a leading position with a 35 percent market share.
The WCDMA technology path represents a natural and fully standardized evolution of WCDMA for Telstra, with the introduction of HSDPA (High Speed Downlink Packet Access). HSDPA can provide peak data rates of up to 14 Mbit/s and is the enabler for Mobile Broadband and Mobile TV services to become mass-market.

The new national network builds on the existing agreement where Ericsson is upgrading Telstra’s GSM network to 3G/WCDMA, supplying softswitch core and radio access network equipment, including HSDPA and network rollout services.

Source


Huawei launches 40Gbps DWDM transmission system

Huawei Technologies Co. Ltd recently announced that its 40Gbps-per-wavelength Dense Wavelength Division Multiplexing (DWDM) transmission system is now commercialized for global telecom carriers. The new DWDM provision enables operators to transmit higher-capacity, long-haul DWDM signals with sound security, quality and flexibility.

The DWDM transmission system is now present in the company’s OptiX BWS 1600G and OptiX BWS 1600A DWDM solutions, which allow both 10Gbps and 40Gbps systems to co-exist and cooperate within the same equipment. The new configuration not only meets carriers’ 40Gbps bandwidth demand, but also delivers a smooth evolution path to meet various traffic demands and resolve technical problems.

As of September 2005, Huawei had already built over 300,000km of DWDM backbones around the world.

Source


Macedonia to host world’s largest WLAN

The former Yugoslav Republic of Macedonia is to host the world’s largest WLAN. The wireless mesh network will cover the entire country. Within a year more than 90 per cent of the population will be able to go online and make telephone calls via the WLAN from their home.

The mesh network, supplied by US company Strix Systems, is the first nationwide wireless broadband system in the world. Although there are only 2 million people living in the former Yugoslav Republic, which was spared the inter-ethnic violence in the Balkans of the 1990s, the deployment will still be a huge task. The country is known for its mountainous terrain and deep valleys. For this reason, Strix will run separate radio channels for the mesh backbone and client access.

Strix has already deployed a WLAN for the Macedonian capital Skopje, where about one million people live. All of the country’s 460 schools already have broadband connections. How many people in Macedonia can afford to buy a PC and go online is not known. In 2003 only 40 per cent of the population owned a computer.

From an article in the Register.

