SONET/SDH Demystified



SONET defines four layers: path, line, section, and photonic. Together, they correspond to the physical and the data link layers.


The signal is changed from an electronic form into an optical form, multiplexed with other signals, and encapsulated in a frame. STS multiplexers provide path layer functions. Line layer overhead is added to the frame at the line layer, which handles framing, scrambling, and error control.


Section layer overhead is added to the frame at the section layer. The photonic layer includes the physical specifications for the optical fiber channel.


Each frame is a two-dimensional matrix of bytes. Each channel is sampled in turn, every one eight-thousandth of a second, in round-robin fashion, resulting in the generation of 8,000 pulse amplitude samples from each channel every second. The sampling rate is important. If the sampling rate is too high, too much information is transmitted and bandwidth is wasted; if the sampling rate is too low, then we run the risk of aliasing. Aliasing is the interpretation of the sample points as a false waveform, due to the paucity of samples.
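The aliasing trade-off can be seen numerically. The sketch below (plain Python, assuming the 8,000-sample-per-second rate discussed above) shows that a tone above half the sampling rate produces exactly the same sample points as a lower-frequency tone, which is the "false waveform" the receiver would reconstruct:

```python
import math

FS = 8000  # one sample every eight-thousandth of a second

def sample_tone(freq_hz, n_samples, fs=FS):
    """Sample a sine wave of the given frequency at rate fs."""
    return [math.sin(2 * math.pi * freq_hz * n / fs) for n in range(n_samples)]

# A 1,000 Hz tone sits safely below the Nyquist limit (fs/2 = 4,000 Hz).
low = sample_tone(1000, 8)

# A 9,000 Hz tone is undersampled: its sample points are identical to
# those of the 1,000 Hz tone, so it masquerades as a false waveform.
alias = sample_tone(9000, 8)

assert all(abs(a - b) < 1e-9 for a, b in zip(low, alias))
```

Any tone at f + k·FS (integer k) aliases onto the tone at f, which is why the analog signal must be band-limited below 4 kHz before sampling.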


This Pulse Amplitude Modulation (PAM) process represents the first stage of Pulse Code Modulation (PCM), the process by which an analog baseband signal is converted to a digital signal for transmission across the T-Carrier network, and is the first step shown in the figure. The second stage of PCM, also illustrated, is called quantization. In quantization, we assign values to each sample within a constrained range.

Figure: Time-division multiplexing.

For illustration purposes, imagine what we now have before us.

We have replaced the continuous analog waveform of the signal with a series of amplitude samples that are close enough together that we can discern the shape of the original wave from their collective amplitudes. Imagine also that we have graphed these samples in such a way that the wave of sample points meanders above and below an established zero point on the x-axis, so that some of the samples have positive values and others are negative.

The amplitude levels enable us to assign values to each of the PAM samples, although a glaring problem with this technique should be obvious to the careful reader. Very few of the samples actually line up exactly with the amplitudes delineated by the graphing process. In fact, most of them fall between the values, as shown in the illustration. This inaccuracy in the measurement method results in a problem known as quantizing noise and is inevitable when linear measurement systems, such as the one suggested by the drawing, are employed in coder-decoders (CODECs).


Needless to say, design engineers recognized this problem rather quickly, and came up with an adequate solution just as quickly. It is a fairly well-known fact among psycholinguists and speech therapists that the human ear is far more sensitive to discrete changes in amplitude at low volume levels than it is at high volume levels, a fact not missed by the network designers tasked with optimizing the performance of digital carrier systems intended for voice transport.

Instead of using a linear scale for digitally encoding the PAM samples, they designed and employed a nonlinear scale that is weighted with much more granularity at low volume levels (that is, close to the zero line) than at the higher amplitude levels. In other words, the values are extremely close together near the x-axis and become farther and farther apart as they travel up and down the y-axis.

This nonlinear approach keeps the quantizing noise to a minimum at the low amplitude levels where hearing sensitivity is the highest, and enables it to creep up at the higher amplitudes, where the human ear is less sensitive to its presence. It turns out that this is not a problem, because the inherent shortcomings of the mechanical equipment (microphones, speakers, the circuit itself) introduce slight distortions at high amplitude levels that hide the effect of the nonlinear quantizing scale.

This technique of compressing the values of the PAM samples to make them fit the nonlinear quantizing scale results in bandwidth savings of more than 30 percent. The actual process is called companding because the sample is first compressed for transmission, then expanded for reception at the far end.
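The compression half of companding can be sketched with the standard mu-law curve used in North American T-Carrier systems (mu = 255). This continuous form is an illustration of the principle rather than the exact segmented codec:

```python
import math

MU = 255  # mu-law constant used in North American T-Carrier systems

def compress(x):
    """Compress a sample in [-1, 1]; resolution is finest near zero."""
    return math.copysign(math.log(1 + MU * abs(x)) / math.log(1 + MU), x)

def expand(y):
    """Expand a compressed value back at the receiving end."""
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

# Quiet samples get a disproportionate share of the output range:
# an input of 0.1 already maps past the halfway point of the scale.
assert compress(0.1) > 0.5

# Compression followed by expansion recovers the original sample.
assert abs(expand(compress(0.37)) - 0.37) < 1e-12
```

The logarithmic shape is exactly the "close together near the x-axis, farther apart up the y-axis" spacing described above.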

The actual graph scale is divided into 256 distinct values above and below the zero line. Eight segments are above the line and eight are below (one of which is the shared zero point); each segment, in turn, is subdivided into 16 steps. A bit of binary mathematics now enables us to convert the quantized amplitude samples into an eight-bit value for transmission. The conversion would take on the representation 01011101, where the initial 0 indicates a negative sample, 101 indicates the fifth segment, and 1101 indicates the thirteenth step in the segment.
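A minimal sketch of that bit packing, assuming (as an illustration) that the segment and step numbers are written as their plain binary values behind the sign bit:

```python
def encode_word(negative, segment, step):
    """Pack sign (1 bit), segment (3 bits), and step (4 bits) into one byte.

    Per the text's convention, the sign bit is 0 for a negative sample.
    """
    sign = 0 if negative else 1
    return (sign << 7) | (segment << 4) | step

# Negative sample, fifth segment, thirteenth step -> 0 101 1101
word = encode_word(negative=True, segment=5, step=13)
assert format(word, "08b") == "01011101"
```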

We now have an eight-bit representation of an analog amplitude sample that can be transmitted across a digital network, then reconstructed with its many counterparts as an accurate representation of the original analog waveform at the receiving end. This entire process is known as Pulse Code Modulation (PCM), and the result of its efforts is often referred to as toll-quality voice.


Alternative Digitization Techniques

Although PCM is perhaps the best-known high-quality voice digitization process, it is by no means the only one. Advances in coding schemes and improvements in the overall quality of the telephone network have made it possible to develop encoding schemes that use far less bandwidth than traditional PCM. In this next section, we will consider some of these techniques.

Adaptive Differential Pulse Code Modulation (ADPCM) relies on the predictability that is inherent in human speech to reduce the amount of information required. The technique still relies on PCM encoding but adds an additional step to carry out its task.

The 64 Kbps PCM-encoded signal is fed into an ADPCM transcoder, which considers the previous behavior of the incoming stream to create a prediction of the behavior of the next sample. This is where the magic happens: instead of transmitting the actual value of the predicted sample, it encodes the difference between the actual and predicted samples in four bits and transmits that. Because the difference from sample to sample is typically quite small, the results are generally considered to be very close to toll quality.

This four-bit transcoding process, which is based on the known behavior characteristics of human voice, enables the system to transmit 8,000 four-bit samples per second, thus reducing the overall bandwidth requirement from 64 Kbps to 32 Kbps. It should be noted that ADPCM works well for voice because the encoding and predictive algorithms are based upon its behavior characteristics.
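The predict-and-send-the-difference idea can be sketched in a few lines. This is a deliberately simplified differential coder with a previous-sample predictor and a fixed step size; real ADPCM (ITU-T G.726) adapts both the predictor and the quantizer step:

```python
def dpcm_encode(samples, step=16):
    """Encode each sample as a 4-bit signed difference from a prediction."""
    prediction = 0
    codes, reconstructed = [], []
    for s in samples:
        diff = s - prediction
        code = max(-8, min(7, round(diff / step)))  # clamp to 4-bit range
        codes.append(code)
        prediction += code * step  # the decoder tracks the same prediction
        reconstructed.append(prediction)
    return codes, reconstructed

samples = [0, 30, 70, 100, 90, 60, 20]
codes, reconstructed = dpcm_encode(samples)
assert all(-8 <= c <= 7 for c in codes)  # 4 bits per sample, not 8
```

Because the decoder applies the identical prediction rule to the code stream, only the small four-bit differences ever cross the network.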

It does not, however, work as well for higher-bit-rate data (above 4,800 bps), which has an entirely different set of behavior characteristics.

Continuously Variable Slope Delta Modulation (CVSD)

Instead of transmitting the amplitude (the height, or y-value) of PAM samples, CVSD transmits information that measures the changing slope of the waveform. Rather than transmitting the actual change itself, it transmits the rate of change, as shown in the figure. To perform its task, CVSD uses a reference voltage to which it compares all incoming values. If the incoming signal value is less than the reference voltage, then the CVSD encoder reduces the slope of the curve to make its approximation better mirror the slope of the actual signal.

With each recurring sample and comparison, the step function can be increased or decreased as required. For example, if the signal is increasing rapidly, then the steps are increased one after the other in a form of step function by the encoding algorithm. Obviously, the reproduced signal is not a particularly exact representation of the input signal: in practice, it is pretty jagged. Filters, therefore, are used to smooth the transitions. CVSD is typically implemented at 32 Kbps, although it can be implemented at rates as low as 9, bps.
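A toy version of that slope-tracking loop, one bit per sample. The three-bits-agree rule and the growth and decay constants below are illustrative choices, not taken from any particular CVSD standard:

```python
def cvsd_encode(samples, min_step=1.0, max_step=16.0):
    """Track the input with a 1-bit-per-sample, variable-slope staircase."""
    bits, history, reconstruction = [], [], []
    approx, step = 0.0, min_step
    for s in samples:
        bit = 1 if s > approx else 0   # is the signal above our staircase?
        bits.append(bit)
        history = (history + [bit])[-3:]
        if len(history) == 3 and len(set(history)) == 1:
            step = min(step * 2, max_step)    # slope overload: climb faster
        else:
            step = max(step * 0.8, min_step)  # otherwise decay the slope
        approx += step if bit else -step
        reconstruction.append(approx)
    return bits, reconstruction

ramp = [float(x) for x in range(0, 100, 5)]  # a rapidly rising input
bits, recon = cvsd_encode(ramp)
assert set(bits) <= {0, 1}  # the channel carries only one bit per sample
```

The receiver runs the same step rules on the bit stream alone, so only the bits need to be transmitted; a low-pass filter then smooths the jagged staircase in the reconstruction.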

At 16 to 24 Kbps, recognizability is still possible; down to 9,600 bps, recognizability is seriously affected, although intelligibility is not.

Linear Predictive Coding (LPC)

We mention Linear Predictive Coding (LPC) here only because it has carved out a niche for itself in certain voice-related applications such as voice mail systems, automobiles, aviation, and electronic games that speak to children.

LPC is a complex process, implemented completely in silicon, which enables voice to be encoded at rates as low as 2,400 bps. The resulting quality is far from toll quality, but it is certainly intelligible, and its low-bit-rate capability gives it a distinct advantage over other systems.

Linear Predictive Coding relies on the fact that each sound created by the human voice has unique attributes, such as frequency range, resonance, and loudness, among others. When voice samples are created in LPC, these attributes are used to generate prediction coefficients. These predictive coefficients represent linear combinations of previous samples, hence the name, Linear Predictive Coding.

Prediction coefficients are created by taking advantage of the known formants of speech, which are the resonant characteristics of the mouth and throat that give speech its characteristic timbre and sound.
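The phrase "linear combinations of previous samples" can be made concrete. In the sketch below the two coefficients are chosen by hand to be the exact predictor for a pure sinusoid (a real coder would derive them from the speech itself, for example via the Levinson-Durbin recursion); the leftover after subtracting the prediction is the residue:

```python
import math

def lpc_predict(samples, coeffs):
    """Predict each sample as a linear combination of the previous ones."""
    order = len(coeffs)
    residue = []
    for n, s in enumerate(samples):
        past = samples[max(0, n - order):n][::-1]  # most recent sample first
        prediction = sum(c * x for c, x in zip(coeffs, past))
        residue.append(s - prediction)  # only this needs to be transmitted
    return residue

# For sin(w*n), the identity sin(w*n) = 2cos(w)*sin(w*(n-1)) - sin(w*(n-2))
# makes a two-coefficient predictor exact, so the residue is essentially zero.
signal = [math.sin(0.2 * n) for n in range(50)]
residue = lpc_predict(signal, [2 * math.cos(0.2), -1.0])
assert max(abs(r) for r in residue[2:]) < 1e-9
```

Speech is not a pure sinusoid, of course, but its resonant (formant) structure makes it predictable enough that the residue carries far less information than the raw samples.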


This sound, referred to by speech pathologists as the buzz, can be described by both its pitch and its intensity. LPC, therefore, models the behavior of the vocal cords and the vocal tract itself. To create the digitized voice samples, the buzz is passed through an inverse filter that is selected based upon the value of the coefficients. The remaining signal, after the buzz has been removed, is called the residue. In the most common form of LPC, the residue is encoded as either a voiced or unvoiced sound.

Voiced sounds are those that require vocal cord vibration, such as the g in glare, the b in boy, and the d and g in dog. Unvoiced sounds require no vocal cord vibration, such as the h in how, the sh in shoe, and the f in frog. The transmitter creates and sends the prediction coefficients, which include measures of pitch, intensity, and whatever voiced and unvoiced coefficients are required.

The receiver undoes the process; it converts the voice residue, pitch, and intensity coefficients into a representation of the source signal, using a filter similar to the one used by the transmitter to synthesize the original signal.

Digital Speech Interpolation (DSI)

Human speech has many measurable and therefore predictable characteristics, one of which is a tendency to have embedded pauses. As a rule, people do not spew out a series of uninterrupted sounds; they tend to pause for emphasis, to collect their thoughts, and to reword a phrase while the other person listens quietly on the other end of the line.

When speech technicians monitor these pauses, they discover that during considerably more than half of the total connect time, the line is silent. Digital Speech Interpolation DSI takes advantage of this characteristic silence to drastically reduce the bandwidth required for a single channel.

Whereas 24 channels can be transported over a typical T-1 facility, DSI enables considerably more conversations to be carried over the same circuit. The format is proprietary and requires that a certain amount of bandwidth be set aside for overhead. Standard T-Carrier is a time-division multiplexed scheme in which channel ownership is assured: a user assigned to channel three will always own channel three, regardless of whether he or she is actually using the line.
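The concentration step can be sketched as a simple activity check per frame. The 0.05 threshold and the frame representation are illustrative assumptions; real DSI systems use more careful speech-activity detection:

```python
def active_channels(frame, threshold=0.05):
    """Return the indices of channels actually talking in this frame."""
    return [i for i, level in enumerate(frame) if abs(level) >= threshold]

# 24 channels, but at this instant only 8 carry speech energy; the other
# 16 are in pauses, so their timeslots can be lent to additional talkers.
frame = [0.0] * 24
for i in (1, 4, 9, 15, 17, 20, 22, 23):
    frame[i] = 0.3
slots_needed = len(active_channels(frame))
assert slots_needed == 8
```

Because silence dominates total connect time, the trunk only needs timeslots for the currently active talkers, not one per assigned channel.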