
Saturday, February 13, 2016

PC Fundamentals


While working on rewriting some sections of the site, I realized that I was repeating myself. Not only was I repeating myself, I was being redundant. And repetitive. :^) There are many different fundamental basics of computing that are important to know when reading about PCs, but aren't specific to any components. Rather than writing the same background material over and over, and rather than boring people who already understand these basics of computing, I have created this section on PC fundamentals, into which I will place various pages describing different topics that are common to various aspects of computing. I will also describe here simple components and technologies that are commonly found in various parts of the PC.


Binary vs. Decimal Measurements
One of the most confusing problems regarding PC statistics and measurements is the fact that the computing world has two different definitions for most of its measurement terms. :^) Capacity measurements are usually expressed in kilobytes (thousands of bytes), in megabytes (millions of bytes), or gigabytes (billions of bytes). Due to a mathematical coincidence, however, there are two different meanings for each of these measures.
Computers are digital and store data using binary numbers, or powers of two, while humans normally use decimal numbers, expressed as powers of ten. As it turns out, two to the tenth power, 2^10, is 1,024, which is very close in value to 1,000 (10^3).  Similarly, 2^20 is 1,048,576, which is approximately 1,000,000 (10^6), and 2^30 is 1,073,741,824, close to 1,000,000,000 (10^9). When computers and binary numbers first began to be used regularly, computer scientists noticed this similarity, and for convenience, "hijacked" the abbreviations normally used for decimal numbers and began applying them to binary numbers. Thus, 2^10 was given the prefix "kilo", 2^20 was called "mega", and 2^30 "giga".
This shorthand worked fairly well when used only by technicians who worked regularly with computers; they knew what they were talking about, and nobody else really cared. Over the years however, computers have become mainstream, and the dual notation has led to quite a bit of confusion and inconsistency. In many areas of the PC, only binary measures are used. For example, "64 MB of system RAM" always means 64 times 1,048,576 bytes of RAM, never 64,000,000. In other areas, only decimal measures are found--a "28.8K modem" works at a maximum speed of 28,800 bits per second, not 29,491.
Storage devices, however, are where the real confusion comes in. Some companies and software packages use binary megabytes and gigabytes, and some use decimal megabytes and gigabytes. What's worse is that the percentage discrepancy between the decimal and binary measures increases as the numbers get larger: there is only a 2.4% difference between a decimal and a binary kilobyte, which isn't that big of a deal. However, this increases to around a 5% difference for megabytes, and around 7.5% for gigabytes, which is actually fairly significant. This is why with today's larger hard disks, more people are starting to notice the difference between the two measures. Hard disk capacities are always stated in decimal gigabytes, while most software uses binary. So, someone will buy a "30 GB hard disk", partition and format it, and then be told by Windows that the disk is "27.94 gigabytes" and wonder "where the other 2 gigabytes went". Well, the disk is 27.94 gigabytes--27.94 binary gigabytes. The 2 gigabytes didn't go anywhere.
Another thing to be careful of is converting between binary gigabytes and binary megabytes. Decimal gigabytes and megabytes differ by a factor of 1,000 but of course the binary measures differ by 1,024. So this same 30 GB hard disk is 30,000 MB in decimal terms. But its 27.94 binary gigabytes are equal to 28,610 binary megabytes (27.94 times 1,024).
Windows 95 display of the capacity of an 8 GB hard disk drive.
Note the difference between the number of bytes and the "GB"
values, which are clearly given as binary measures.
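The "missing gigabytes" arithmetic above is easy to check directly. Here's a quick sketch in Python; the constant names are mine, and the "30 GB" drive is the example from the text:

```python
# The decimal-vs-binary discrepancy for a "30 GB" hard disk.
DECIMAL_GB = 10**9   # what drive makers mean by "gigabyte"
BINARY_GB = 2**30    # what most software means (a binary gigabyte)
BINARY_MB = 2**20

drive_bytes = 30 * DECIMAL_GB            # a "30 GB" hard disk
binary_gigabytes = drive_bytes / BINARY_GB
binary_megabytes = drive_bytes / BINARY_MB

print(round(binary_gigabytes, 2))   # 27.94 -- the capacity Windows reports
print(round(binary_megabytes))      # 28610 -- 27.94 * 1,024
```

Same disk, same number of bytes; only the unit definitions differ.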
One final "gotcha" in this area is related to arithmetic done between units that have different definitions of "mega" or "giga". For example: most people would say that the PCI bus has a maximum theoretical bandwidth of 133.3 Mbytes/second, because it is 4 bytes wide and runs at 33.3 MHz. The problem here is that the "M" in "MHz" is 1,000,000; but the "M" in "Mbytes/second" is 1,048,576. So the bandwidth of the PCI bus is more properly stated as 127.2 Mbytes/second (4 times 33,333,333 divided by 1,048,576).
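The mixed-units pitfall can be reproduced numerically as well. A sketch using the PCI figures from the paragraph above (variable names are mine):

```python
# PCI bandwidth: 4 bytes wide, running at 33.3 MHz (decimal "mega").
bus_width_bytes = 4
clock_hz = 33_333_333

bytes_per_second = bus_width_bytes * clock_hz

naive_mbytes = bytes_per_second / 1_000_000   # decimal "M" for Mbytes/second
proper_mbytes = bytes_per_second / 2**20      # binary "M" for Mbytes/second

print(round(naive_mbytes, 1))    # 133.3
print(round(proper_mbytes, 1))   # 127.2
```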
There's potential good news regarding this whole binary/decimal conundrum. The IEEE has proposed a new naming convention for the binary numbers, to hopefully eliminate some of the confusion. Under this proposal, for binary numbers the third and fourth letters in the prefix are changed to "bi", so "mega" becomes "mebi" for example. Thus, one megabyte would be 10^6 bytes, but one mebibyte would be 2^20 bytes. The abbreviation would become "1 MiB" instead of "1 MB". "Mebibyte" sounds goofy, but hey, I'm sure "byte" did too, 30 years ago. ;^) Here's a summary table showing the decimal and binary measurements and their abbreviations and values ("bytes" are shown as an example unit here, but the prefixes could apply to any unit of measure):
Decimal Name | Decimal Abbr. | Decimal Power | Decimal Value      | Binary Name | Binary Abbr. | Binary Power | Binary Value
Kilobyte     | kB            | 10^3          | 1,000              | Kibibyte    | kiB          | 2^10         | 1,024
Megabyte     | MB            | 10^6          | 1,000,000          | Mebibyte    | MiB          | 2^20         | 1,048,576
Gigabyte     | GB            | 10^9          | 1,000,000,000      | Gibibyte    | GiB          | 2^30         | 1,073,741,824
Terabyte     | TB            | 10^12         | 1,000,000,000,000  | Tebibyte    | TiB          | 2^40         | 1,099,511,627,776
Only time will tell if this standard will catch on--old habits die hard. I for one will be doing my share though. As I update various portions of the site, I will be changing places where I used terms such as "kB" and "MB" for binary numbers into "kiB" and "MiB". This may be confusing at first but I think we'll get used to it, and at least it will eliminate the current ambiguity.
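For readers who script, here is one way the dual notation might be handled in code. This is a hypothetical helper of my own devising, not part of any standard library; it reports a byte count in both the decimal and the binary (IEC-style) units from the table above:

```python
def describe(nbytes):
    """Report a byte count in both decimal (kB/MB/GB) and binary (kiB/MiB/GiB) units."""
    dec_units = [("GB", 10**9), ("MB", 10**6), ("kB", 10**3)]
    bin_units = [("GiB", 2**30), ("MiB", 2**20), ("kiB", 2**10)]
    # Pick the largest unit that fits; fall back to plain bytes for tiny values.
    dec = next((f"{nbytes / v:.2f} {u}" for u, v in dec_units if nbytes >= v),
               f"{nbytes} bytes")
    bin_ = next((f"{nbytes / v:.2f} {u}" for u, v in bin_units if nbytes >= v),
                f"{nbytes} bytes")
    return f"{dec} (decimal) = {bin_} (binary)"

print(describe(30 * 10**9))   # 30.00 GB (decimal) = 27.94 GiB (binary)
```

The "30 GB hard disk" example drops straight out: the same byte count reads as 30.00 GB or 27.94 GiB depending on the unit definition.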


Basic Electrical Components
There are several important basic electrical components that are commonly found in the circuits of virtually all PC parts and peripherals. These devices are the fundamental building blocks of electrical and electronic circuits, and can be found in great numbers on motherboards, hard disk logic boards, video cards and just about everywhere else in the PC, including places that might surprise you. They can be used and combined with each other and dozens of other devices, in so many different ways that I could not even begin to describe them all. Still, it is useful to know a bit about how they work, and this page will at least provide you with a basis for recognizing some of what you see on those boards, and perhaps understanding the fundamentals of circuit schematics. Bear in mind when reading the descriptions below that it would really take several full pages to fully describe the workings of most of these components! Fortunately, this level of detail isn't really necessary to provide the background necessary when working with PCs.
For each component, I provide a sample photo, as well as an illustration of the component's symbol in an electrical schematic (diagram showing how a circuit is designed). There are many variants of each of the components shown below; so the diagrams should only be considered examples.
  • Battery: A direct current electricity source of a specific voltage, used primarily in small circuits.
     
A battery (in this case, a button cell on a PC motherboard.)
Original photo © Kamco Services
Image used with permission.
  • Resistor: As you could probably guess from the name, a resistor increases the resistance of a circuit. The main purpose of this is to reduce the flow of electricity in a circuit. Resistors come in all different shapes and sizes. They dissipate heat as a result of their opposing electricity, and are therefore rated both in terms of their resistance (how much they oppose the flow of electrons) and their power capacity (how much power they can dissipate before becoming damaged.) Generally, bigger resistors can handle more power. There are also variable resistors, which can have their resistance adjusted by turning a knob or other device. These are sometimes called potentiometers.
     
Magnified surface-mount resistor from a motherboard.
These small resistors are now much more common on PC
electronics than the older, larger pin type.
Note the "R10" designation.
  • Capacitor: A capacitor is a component made from two (or two sets of) conductive plates with an insulator between them. The insulator prevents the plates from touching. When a DC current is applied across a capacitor, positive charge builds on one plate (or set of plates) and negative charge builds on the other. The charge will remain until the capacitor is discharged. When an AC current is applied across the capacitor, it will charge one set of plates positive and the other negative during the part of the cycle when the voltage is positive; when the voltage goes negative in the second half of the cycle, the capacitor will release what it previously charged, and then charge the opposite way. This then repeats for each cycle. Since it has the opposite charge stored in it each time the voltage changes, it tends to oppose the change in voltage. As you can tell then, if you apply a mixed DC and AC signal across a capacitor, the capacitor will tend to block the DC and let the AC flow through. The strength of a capacitor is called capacitance and is measured in farads (F). (In practical terms, usually microfarads and the like, since one farad would be a very large capacitor!) They are used in all sorts of electronic circuits, especially combined with resistors and inductors, and are commonly found in PCs.
     
Three capacitors on a motherboard.
The two large capacitors in the background are 1500 microfarads
and 2200 microfarads respectively, as you can clearly see from
their labeling. The small silver-colored capacitor in the foreground is
a 22 microfarad electrolytic capacitor. Electrolytics are commonly used in
computers because they pack a relatively high capacitance into a small
package. The plus sign indicates the polarity of the capacitor, which also has its
leads  marked with "+" and "-". If you look closely you can see the "+" marking
on the motherboard, just to the left of the capacitor. Note that very small
capacitors are also found in surface-mount packages just like the resistor above.
  • Inductor: An inductor is essentially a coil of wire. When current flows through an inductor, a magnetic field is created, and the inductor will store this magnetic energy until it is released. In some ways, an inductor is the opposite of a capacitor. While a capacitor stores voltage as electrical energy, an inductor stores current as magnetic energy. Thus, a capacitor opposes a change in the voltage of a circuit, while an inductor opposes a change in its current. Therefore, capacitors block DC current and let AC current pass, while inductors do the opposite. The strength of an inductor is called--take a wild guess--its inductance, and is measured in henrys (H). Inductors can have a core of air in the middle of their coils, or a ferrous (iron) core. Being a magnetic material, the iron core increases the inductance value, which is also affected by the material used in the wire, and the number of turns in the coil. Some inductor cores are straight in shape, and others are closed circles called toroids. The latter type of inductor is highly efficient because the closed shape is conducive to creating a stronger magnetic field. Inductors are used in all sorts of electronic circuits, particularly in combination with resistors and capacitors, and are commonly found in PCs.
     
A toroidal core inductor from a PC motherboard.
The two bars in the symbol represent the iron core;
an air-core inductor would not have the bars.
Note that very small inductors are also found in
surface-mount packages just like the resistor above.
  • Transformer: A transformer is an inductor, usually with an iron core, that has two lengths of wire wrapped around it instead of one. The two coils of wire do not electrically connect, and are normally attached to different circuits. One of the most important components in the world of power, it is used to change one AC voltage into another. As described above, when a coil has a current passed through it, a magnetic field is set up proportional to the number of turns in the coil. This principle also works in reverse: if you create a magnetic field in a coil, a current will be induced in it, proportional to the number of turns of the coil. Thus, if you create a transformer with say, 100 turns in the first or primary coil, and 50 turns in the second or secondary coil, and you apply 240 VAC to the first coil, a voltage of 120 VAC will be induced in the second coil (approximately; some energy is always lost during the transformation). A transformer with more turns in its primary than its secondary coil will reduce voltage and is called a step-down transformer. One with more turns in the secondary than the primary is called a step-up transformer. Transformers are one of the main reasons we use AC electricity in our homes and not DC: DC voltages cannot be changed using transformers. They come in sizes ranging from small ones an inch across, to large ones that weigh hundreds of pounds or more, depending on the voltage and current they must handle.
     
A transformer from the interior of a PC power supply.
Note the large heat sink fins above and below it.
  • Diode / LED: A diode is a device, typically made from semiconductor material, that restricts the flow of current in a circuit to only one direction; it will block the bulk of any current that tries to go "against the flow" in a wire. Diodes have a multitude of uses. For example, they are often used in circuits that convert alternating current to direct current, since they can block half the alternating current from passing through. A variant of the common diode is the light-emitting diode or LED; these are the most well-known and commonly-encountered kind of diode, since they are used on everything from keyboards to hard disks to television remote controls. An LED is a diode that is designed to emit light of a particular frequency when current is applied to it. They are very useful as status indicators in computers and battery-operated electronics; they can be left on for hours or days at a time because they run on DC, require little power to operate, generate very little heat and last for many years even if run continuously. They are now even being made into low-powered, long-operating flashlights.
A diode (top) and a light-emitting diode (bottom). Note the
symbol on the circuit board above the diode, and the "CR3"
designation. The LED shown is an older, large diode from a
system case. LEDs are now more often round and usually smaller.
  • Fuse: A fuse is a device designed to protect other components from accidental damage due to excessive current flowing through them. Each type of fuse is designed for a specific amount of current. As long as the current in the circuit is kept below this value, the fuse passes the current with little opposition. If the current rises above the rating of the fuse--due to a malfunction of some sort or an accidental short-circuit--the fuse will "blow" and disconnect the circuit. Fuses are the "heroes" of the electronics world, literally burning up or melting from the high current, causing a physical gap in the circuit and saving other devices from the high current. They can then be replaced when the problem condition has been corrected.  All fuses are rated in amps for the amount of current they can tolerate before blowing; they are also rated for the maximum voltage they can tolerate. Always replace a blown fuse only with another of the same current and voltage rating.
     
A fuse, sitting in its fuse holder,
from the interior of a PC power supply.
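The transformer turns-ratio relationship described in the list above reduces to one line of arithmetic. Here's a sketch (the function name is mine, and this is the ideal, lossless case; as noted above, real transformers always lose a little energy in the transformation):

```python
def secondary_voltage(v_primary, turns_primary, turns_secondary):
    """Ideal transformer: output voltage scales with the turns ratio."""
    return v_primary * turns_secondary / turns_primary

# The step-down example from the text: 100 primary turns, 50 secondary turns.
print(secondary_voltage(240, 100, 50))   # 120.0
```

Swapping the turns counts gives the step-up case: the same transformer wired the other way around would take 120 VAC up to 240 VAC.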

Jumpers
Jumpers are small devices that are used to control the operation of hardware devices directly, without the use of software. They have been around since the very first PCs, and are still used on many types of modern hardware today. A jumper consists of two primary components:
  • Jumper: The jumper itself is a small piece of plastic and metal that is placed across two jumper pins to make a connection, or removed to break a connection. They come in a few standard sizes (and some non-standard ones I'm sure); only one or two sizes are commonly seen on PCs. Jumpers are sometimes also called shunts.
  • Jumper Pins: A set of pins, across two of which a jumper is placed to make a specific connection.
Note: Some people actually call the jumper pins the "jumper"; others call the pins plus the jumper a "jumper". The terms are used rather loosely, but it's nothing to worry about.
A jumper is a mechanical switch that is easily modified by hand. Essentially, it's a circuit that has been broken intentionally and a pin placed on each end of the broken connection. Placing a jumper across two pins connects them electrically, completing the circuit; removing a jumper from a set of pins breaks the circuit. Hardware engineers allow users to configure devices or change their operation by creating different sets of pins that implement different functions depending on how the jumpers are set. When power is applied to the device it detects which circuits have been closed or opened. The most common places where most folks see jumpers are hard disk drives and motherboards. On hard disks they are typically used to tell the hard disk what role to play on the hard disk interface cable; on motherboards they control as many as a dozen different settings related to how the motherboard functions. Usually these jumper settings are printed directly on the hardware for convenience.
The main advantage of using jumpers for controlling hardware is that they are simple and straightforward: if you get the settings correct, the hardware (assuming it is not defective) will perform as it should. What you see is what you get. The biggest disadvantage associated with using jumpers is the fact that they require physical manipulation. If you need to change a jumper, you have to physically open the PC to access the device, and that's not always easy to do. The jumpers are also very small and easily lost if you are not careful. Also, you have to make these changes with the power off. These issues are one reason why the effort was made a few years ago to move away from jumpers on hardware devices and towards software configuration of hardware using techniques such as Plug and Play.
Jumpers are given many different designations. On motherboards, it is common to see them numbered, using a sequence such as "JP1", "JP2", etc. For some functions, jumpers are treated as a group--multiple jumpers must be placed on particular sets of pins to enable or disable a specific function. The documentation that comes with any hardware device should tell you how to set its jumpers to control various functions; if you don't have the documentation, check the manufacturer's web site.
A group of jumper pins on a motherboard, showing three jumpers
connected and two sets of pins "bare" (no jumpers attached).
Note the "JP7" and "JP15" labels in the foreground.
Tip: One problem experienced by many who work with hardware occurs when a jumper needs to be removed from a set of two pins to disable a function: what do you do with it? Some keep a big "box o' miscellaneous hardware" for jumpers and similar small items, but since the jumpers are small and easy to lose, one trick that is often used is to "dangle" the shunt by connecting it to only one pin. Since the second half of the shunt is disconnected, this is electrically equivalent to removing it altogether, and ensures that it will always be there for you the next time you need it.
A jumper "dangled" from a set of pins. This is electrically
equivalent to removing the jumper entirely.

Signaling, Clocks and Synchronous Data Transfer
Your PC performs millions of operations every second. Not only is the main system processor executing instructions, data is being transferred from the system memory, storage devices are being read and written, input and output devices are sending and receiving information... There's quite a bit going on inside that box, a veritable factory of busy little workers, moving data around. :^)
Obviously, all of this activity must be coordinated and managed. The PC's internal circuits use a special system of communications to control all of these activities and ensure that the various parts of the computer are "on the same page of the play-book", so to speak. These communications are performed using special control signals, as well as clock signals that synchronize components and set the pace for most internal operations.
In order to grasp the operation of many of the technologies within the PC, it's important to understand at least the basics of how these signals and clocks work. The use of signals and clocks is common to many different areas of the PC's operation, from the CPU to the system memory, to video and storage interfaces.
In this section I provide an introduction and high-level explanation of how signaling and clocks work within the PC. I begin by discussing basic concepts related to voltage levels and signaling within PC circuits. I then explain clock signals and how they work, and also describe how multiple clocks are used within the PC. Then, I describe the way that the clock is used to control data transfer across buses and interfaces.

Voltage Levels and Signaling
As explained in this introductory section, PCs are all about information: taking it as input, processing it and producing output. Information is represented within computers using binary, digital logic: ones and zeros. Each binary digit or bit represents one piece of information, where a bit being a "one" means a particular thing, and it being a "zero" means another.
Of course, computers are electronic devices, and as such, they deal in electrical terms. While it is useful for us to think of data bits as being ones and zeros, this is in fact an abstraction. Within the PC's circuits, a one and a zero must be represented in some sort of physical manner. In fact, different components represent bits in totally different ways. On a hard disk, ones and zeros are encoded magnetically; on an optical disk, by a sequence of pits and lands. And within the core operating circuitry of the PC, ones and zeros are represented by voltage levels.
Voltage represents an electrical potential; the ability to do work (see this section on power basics if you want to understand voltage better). Within the PC's circuits, a "one" is generally represented as a positive voltage, while a "zero" is represented as a zero voltage. It doesn't really matter what the positive voltage is, as long as all the circuits agree on what to use, and as long as the positive voltage is sufficiently high that there's virtually no risk of a "one" being seen as a "zero" (or vice-versa). These voltages are used wherever data is being actively manipulated or transferred within the PC.
Data is not static; rather, it is dynamic, changing over time. As your computer works, it executes millions of instructions every second, and all during this time, a coordinated "ballet" of data is flowing around all of the circuits in the computer. Each component works with a number of different pieces of information, that each have a zero or one value at any given time. These are called signals. A signal can be any type of information: it can be data, or address information, or control information, and so on.
The hardware in the PC is designed to operate by looking for particular patterns in the different signals it uses, and then responding to them. For example, a memory chip may recognize a request to read a particular piece of data by looking for a particular value on a control line; when it senses that value, it looks at other signals for the address of the data to be read, and then responds by producing the requested data on a different set of signal lines.
Over time, the value of any signal will vary frequently between one and zero (except for ground and power signals, which are always zero or one, respectively). There are two different ways that a hardware device can respond to a signal:
  • Value (Level-Triggered) Activation: The device looks for a particular level on the signal, either a zero or a one. When it sees the appropriate level, it takes a particular action.
  • Transition (Edge-Triggered) Activation: The device does not look for the level of the signal at all, but rather the transition from one level to another. The transition from a zero to a one is called the rising edge, and the transition from one to zero the falling edge. In some cases a response will be triggered on only one or the other of these transitions while the other is ignored; in other cases a response will occur for both, either the same response for both edges, or one response for the rising edge and a different one for the falling edge.
A signal, of no particular consequence or pattern.
The value of this signal begins as zero, then goes to one for a while, back
to zero, then one, zero and then one again. The period when the signal is
zero is indicated by the blue color; one is magenta. Rising edges are
shown in green, and falling edges in red. Note that I am pretty sure
that within the PC's circuits, signals aren't nearly this colorful. :^)
(And more seriously, real signals in a PC are not this clean and "perfect".)
All signals within the PC have a name associated with them. For example, the third data line in an interface might be called "DATA2" (counting always starts with zero, remember!) For level-activated signals, a special notation is sometimes used. You may find a signal written with a solid line (an overbar) over the top of its name, such as RESET with a bar across it. Since it's not easy to write that in regular text, you may instead see an approximation, which is the name of the signal preceded or followed by a dash or hyphen: /RESET, RESET/, -RESET or RESET- (or, rarely, something strange like a pound sign, "RESET#"). All of these notations represent a signal whose logic is inverted from the normal sense: the signal is considered "activated" or "true" when it is zero, and off when it is one. So this means that this RESET signal is normally a one; when it drops to zero a reset occurs. Such a signal is said to be active low or low active. Of course, a regular signal is active high or high active. A signal that changes from one to zero and back again very fast is called hyperactive. (OK, OK, I threw in that last one. ;^) )
Some actions within the PC, especially time-critical ones, are easier to have activated based on transitions and not levels. The reason is that transitions are often easier to detect, and more importantly, they happen quickly, which allows for synchronized activity within the PC. Levels are more often used for data and regular control signals, and transitions for clock signals.
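The level-versus-edge distinction can be illustrated with a few lines of code. This sketch scans a sampled signal for its rising and falling edges; the sample values and the function name are invented for illustration (real hardware does this with latches and comparators, not software):

```python
def find_edges(samples):
    """Given a sequence of 0/1 signal samples, return the indices of
    rising (0 -> 1) and falling (1 -> 0) transitions."""
    rising, falling = [], []
    for i in range(1, len(samples)):
        if samples[i - 1] == 0 and samples[i] == 1:
            rising.append(i)
        elif samples[i - 1] == 1 and samples[i] == 0:
            falling.append(i)
    return rising, falling

# The signal from the figure above: zero, one, zero, one, zero, one.
signal = [0, 1, 0, 1, 0, 1]
print(find_edges(signal))   # ([1, 3, 5], [2, 4])
```

A level-triggered device would instead just test `samples[i]` directly at each point in time, ignoring the transitions entirely.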


Clock Signals, Cycle Time and Frequency
As I mentioned in the previous page of this section, the dimension of change in the operation of signals is time. Signals change as time progresses, and this is what enables the flow of data, in fact, everything that happens within a PC. With so many circuits within a computer, it is necessary for some sort of synchronization to occur. Otherwise, the PC would be like a symphony without a conductor.
The "conductor" of the PC is the system clock. A clock is just a signal that alternates between zero and one, back and forth, at a specific pace. In many ways, it is just like a metronome, going back and forth over and over. The clock sets the "pace" for everything that happens within a particular electronic circuit.
There are a few important terms and attributes related to clock signals, which you will occasionally hear mentioned:
  • Cycle: This refers to a single complete traversal of the signal, from the rising edge, through the time when the value of the clock is one, through the falling edge, the time that the value is zero, until the start of the next rising edge. (You can actually "chop" the signal wherever you want and have it be a single cycle, as long as you only cover one cycle without "overlapping").
  • Cycle Time: The amount of time required for the signal to traverse one complete cycle. For fast PC circuits, cycle time is often specified in ns (nanoseconds, or billionths of a second). Sometimes also called cycle length or similar names.
  • Rise Time and Fall Time: In theory, the transition from a one to a zero or vice-versa is instantaneous. In practice, nothing is instantaneous, and the rise time and fall time measure how long it takes for the level to change from zero to one, or one to zero, respectively.
  • Clock Frequency: This is also sometimes called the clock rate or clock speed. It is simply the reciprocal of the cycle time, and is therefore the number of cycles that occur each second (as opposed to the number of seconds per cycle). It is usually measured in MHz or GHz, where "Hz" is the abbreviation for Hertz, the standard SI unit for measuring frequency. One Hertz is one cycle per second. So for example, if a clock's cycle time is 1.25 ns, its frequency is 1/(0.00000000125) = 800,000,000 Hertz, or 800 MHz.
A clock signal.
This diagram shows just under four complete cycles of a typical clock signal.
The span of a single cycle is shown, along with the time periods that represent
rise time and fall time. (The scale of the time axis would be needed to know what
the actual cycle time, and hence frequency, of this example clock signal is.)
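The reciprocal relationship between cycle time and clock frequency is easy to verify, using the 1.25 ns example from the list above:

```python
# frequency = 1 / cycle time
cycle_time_ns = 1.25
frequency_hz = 1 / (cycle_time_ns * 1e-9)   # convert ns to seconds first

print(round(frequency_hz / 1e6, 1))   # 800.0 -- i.e. 800 MHz, matching the example
```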
Where do these clock signals come from in the first place? That's a very good question. In fact, they are generated the same way that a digital watch (or any electronic timepiece) keeps time. A special circuit called an oscillator supplies a small amount of electricity to a crystal. Crystals are special components made out of materials such as quartz, which vibrate at a particular frequency when energized. By controlling the characteristics of the crystal and the rest of the circuit, the specific speed of the clock can be determined fairly precisely. Some oscillator circuits also provide additional components to allow the same crystal to generate a variety of different clock speeds, perhaps even under software control.
Note: In some contexts, a clock signal may be called a strobe or other similar name.



Derived System Clocks
The earliest PCs had just a single system clock; everything from the CPU and the memory to the system bus and peripherals ran at the same speed. Today, PC components are much more specialized, and some circuits and components operate much faster than others. For this reason, there is not just one system clock within the PC, but several. For example, a different system clock speed is typically used for the CPU, for the chipset and memory circuitry, and even for each of the various system buses.
One solution to providing different system clocks would be to incorporate multiple oscillator circuits into the PC. This is typically not done for a couple of reasons. First of all, it would be expensive. More importantly, however, the different clock signals would tend to get out of synchronization with each other. It is possible to synchronize different clock signals, but there's a different way to create multiple clock signals that is easier: using derived system clocks. These are clock signals that are created from other signals, using special circuits called frequency multipliers and frequency dividers. For example, a frequency multiplier could take a 100 MHz clock and create from it a 200 MHz clock signal or a 50 MHz signal.
The exact speed of the system clocks within a PC, and even the number of different clocks, varies from one system to another. Typically, the "main" system clock is the speed at which the chipset and other key motherboard circuits operate; other clocks are derived from it. The table below gives an example of how the system clocks are related in a typical system. In this case, for illustration purposes, I am showing the clocks for a Pentium III system, in this example a "Katmai" Pentium III running at 600 MHz on a 133 MHz system bus:
Device / Bus           Clock Speed (MHz)   Clock Derivation
Processor              600                 System Bus * 4.5
Level 2 Cache          300                 Processor / 2
System (Memory) Bus    133                 --
AGP Video              66                  System Bus / 2
PCI Bus                33                  System Bus / 4
ISA Bus                8.3                 PCI Bus / 4
Note that clock speeds can be multiply derived: speed C can be derived from speed B, which was in turn derived from speed A. Note also that the numbers shown are their "typical" rounded or truncated values, as commonly used in the industry; 133 is really 133.33, and 66 should really round to 67 (since it is 66.66 MHz).
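The whole table can be reproduced from the system bus speed alone, including the multiply-derived clocks. Here is a sketch of that arithmetic in Python, using the exact value 133.33... MHz so the rounding behavior mentioned above is visible (the dictionary keys are just labels for this illustration):

```python
# The Katmai Pentium III example: every clock derives, directly or
# indirectly, from the 133.33 MHz system (memory) bus.
system_bus = 400.0 / 3                  # exactly 133.333... MHz

clocks = {
    "System (memory) bus": system_bus,
    "Processor":           system_bus * 4.5,   # 600 MHz
    "AGP video":           system_bus / 2,     # 66.67 MHz ("66")
    "PCI bus":             system_bus / 4,     # 33.33 MHz ("33")
}
# Multiply-derived clocks: derived from clocks that were themselves derived.
clocks["Level 2 cache"] = clocks["Processor"] / 2   # 300 MHz
clocks["ISA bus"]       = clocks["PCI bus"] / 4     # 8.33 MHz
```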


Synchronous (Clocked) Data Transfer
The system uses various signals to control the flow of information (data) around the PC. The system clock (or clocks) are key control signals that are used to synchronize most of the operations that occur within a PC. One of the most important functions of the clock signal is to control the transfer of data over an interface or bus. This is called synchronous or clocked data transfer.
Most interfaces involve one or more data signals that run between devices, as well as various control signals. The control lines tell various devices on the interface when to begin sending data, and when to look for data being sent by other devices. They also facilitate negotiation, which is the process of determining whose turn it is to use a system bus.
Once a data transfer is ready to occur, the clock related to that interface or bus controls the transfer of each piece of data. In conventional operation, one bit of data is transferred across each data line, for each cycle of the clock. The "ticking of the clock" is recognized by triggering on either the rising edge or falling edge of the clock signal. Each subsequent rising or falling edge of the clock triggers the next chunk of data to move across the data line(s) from the sending to the receiving device.
Since the pace of the clock controls the transfer of data, this means that the speed of the clock is also the speed of the bus or interface. Speeding up the clock means that data is transferred more quickly. The total throughput of any bus or interface is equal to the speed of the interface multiplied by the width of the data bus (how many data signals transfer data at once.) See this discussion of system bus operation for more details on this.
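The throughput formula above can be sketched numerically. The example below applies it to the conventional PCI bus, which has 32 data lines clocked at roughly 33.33 MHz with one transfer per cycle; the function name and the transfers_per_cycle parameter are my own illustration (the latter anticipates interfaces that transfer more than once per cycle):

```python
# Throughput = clock speed x bus width (x transfers per clock cycle).
def throughput_mb_per_s(clock_mhz, width_bits, transfers_per_cycle=1):
    """Bytes moved per second, expressed in MB/s (1 MB = 10^6 bytes)."""
    bits_per_second = clock_mhz * 1e6 * width_bits * transfers_per_cycle
    return bits_per_second / 8 / 1e6

# PCI: 32 data lines at ~33.33 MHz, one transfer per cycle -> ~133 MB/s.
pci = throughput_mb_per_s(100.0 / 3, 32)
```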


Double Transition Clocking
As mentioned in the previous page, the speed of the clock on an interface or bus directly controls the performance or throughput of that interface or bus. The one constant in the PC world is the desire for increased performance. This in turn means that most interfaces are, over time, modified to allow for faster clocking, which leads to improved throughput.
Many newer technologies in the PC world have gone a step beyond just running the clock faster. They have also changed the overall signaling method of the interface or bus, so that data transfer occurs not once per clock cycle, but twice. Usually, this is implemented by having data transfer on both the rising and falling edges of the clock, instead of just one or the other. The change allows for double the data throughput for a given clock speed. This technology is called double transition clocking, as well as several other similar names (such as dual-edge clocking, or double-trigger timing, for example.)
Single transition and double transition clocked data transfer.
In this diagram, the blue signal is the system clock. The green and purple signals represent data; the "hexagon" shapes are the traditional way of representing a signal that at any given time can be either a one or a zero (and whose exact value doesn't matter for the purpose of the diagram). The green signal has its data transferred on the rising edge of the system clock only, while the purple signal transfers on both the rising and falling edges. As you can see, the purple signal transfers twice as much data with the same speed clock. Of course, the timing is also much tighter; only half as much time is available for each data bit to be made ready for transfer.
Why bother with this change at all, one might ask? Why not just increase the speed of the clock by a factor of two? Of course, that's been done many times already on most interfaces. To whatever extent possible, interface designers do regularly increase the speed of the system clock. However, as clock speeds get very high, problems are introduced on many interfaces. Most of these issues are related to the electrical characteristics of the signals themselves. Interference between signals increases with frequency, and timing becomes more "tight", increasing cost as the interface circuits must be made more precise to deal with the higher speeds.
Double transition clocking was seen as an obvious opportunity to exploit because it allows increased performance without the engineering problems associated with increasing clock speed. Of course, the two are really independent. The use of double transition clocking has not eliminated engineering efforts to increase clock speed as well.
Note: It is also possible for an interface to be designed to perform more than two data transfers during each clock cycle. The AGP "4x" mode is so named because it transfers data four times during each cycle.
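The effect of extra transfers per cycle can be sketched with a quick calculation. Using AGP's roughly 66.67 MHz, 32-bit bus as the example, the 1x, 2x, and 4x modes scale throughput by their transfer count while the clock itself stays the same (the variable names here are illustrative):

```python
# AGP transfer modes: same ~66.67 MHz clock, same 32-bit width; only the
# number of transfers per clock cycle changes between modes.
clock_mhz = 200.0 / 3    # the AGP clock, commonly quoted as "66 MHz"
width_bytes = 4          # 32 data lines = 4 bytes per transfer

rates = {
    mode: clock_mhz * width_bytes * transfers   # MB/s (1 MB = 10^6 bytes)
    for mode, transfers in (("1x", 1), ("2x", 2), ("4x", 4))
}
# 2x doubles and 4x quadruples the 1x rate, giving approximately the
# 266 / 533 / 1066 MB/s figures commonly quoted for the three AGP modes.
```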






