Bracketology

The NCAA basketball tournament offers some fun exercises in combinatorics and probability. Here are some ideas to think through.

  • Suppose Iowa has even odds against any team they play, i.e. Iowa beats any team in the tournament with probability 1/2. What’s the probability that Iowa wins the tournament? A possible answer is 1/64 – if each of the 64 teams has the same chance of winning, then each team wins with probability 1/64.
  • However, what if we assume nothing about the relative strengths of other teams except that Iowa has even odds against each of them? For example, suppose Indiana beats Illinois with a 90% probability. Iowa’s chance of winning the tournament is still 1/64! Iowa wins the tournament if it wins 6 games in a row, which has probability \left( \frac{1}{2} \right)^6 = \frac{1}{64}, and this does not depend at all on how other teams fare against each other!
  • How many games are played in the tournament in all? The direct way to calculate this is to just add up the number of games in each round: 32+16+8+4+2+1 = 63.
  • There is, however, a more fun way to arrive at this number. Note that every game eliminates exactly one team, and at the end of the tournament 63 of the 64 teams must have been eliminated, so exactly 63 games are played.
  • More generally, if we can predict each game correctly with probability p, how large must p be to give us even odds of getting a perfect bracket? Since there are 63 games, we need to solve p^{63} = \frac{1}{2}, i.e. p=\frac{1}{\sqrt[63]{2}}. This number has fewer nines in it than one may expect (a quick numerical check follows this list).
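
Assuming the standard 64-team single-elimination format, the arithmetic above works out as follows (a minimal sketch in Python):

```python
# Bracket arithmetic for a 64-team single-elimination tournament.
p_iowa = 0.5 ** 6                     # six straight even-odds wins
total_games = 64 - 1                  # one team eliminated per game -> 63 games
p_perfect_coinflip = 0.5 ** total_games
p_needed = 0.5 ** (1 / total_games)   # per-game accuracy for a 50/50 shot at a perfect bracket

print(p_iowa)                # 0.015625 = 1/64
print(p_perfect_coinflip)    # ~1.08e-19
print(round(p_needed, 4))    # ~0.989 -- fewer nines than you might expect
```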

A just-so story: how the QAM got his stars

The colonial British writer Rudyard Kipling (same guy who wrote The Jungle Book) wrote a book in 1902 called Just So Stories that is now considered a classic of children’s literature. It is a very good book by all accounts and consists of “origin stories” which are fanciful stories about how various animals came to be the way they are. Here’s an illustration from the story of how the Rhino got his Skin:

This is a wonderful way to tell folk tales about animals, but we also often tell this kind of “just so” story (without perhaps realizing it) about the way we solve technical problems: things are the way they are because well, they are Just So! This note is about one example of a Just So story in comms design: the idea of signal constellations. Let’s start from the basics.

Coding and modulation are important building blocks in communication systems. See e.g. this block diagram (from Massey, James L., “Coding and modulation in digital communication,” Zurich Seminar on Digital Communications, vol. 2, no. 1, 1974). There is a version of this diagram in every text including ours.

In modern comms systems, the input U to the coding block is always in the form of a sequence of bits (“all comms is digital”) representing digitized information from a message source. In electronic wired and wireless comms systems, the output s(t) of the modulation block is an analog (typically passband) voltage waveform.

The output X of the coding block – which is of course the input of the modulation block – is an intermediary between the digital information bits and the analog modulated waveform. This intermediate representation is entirely in the control of the comms engineer. Traditionally, it is specified in the form of a signal constellation e.g. BPSK, 16-QAM and so on.
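
As a small illustration of what “specified in the form of a signal constellation” means in practice, here is a sketch of a 16-QAM mapper. The particular bit-to-point labeling below is my own choice for illustration; real standards fix their own (typically Gray-coded) mapping.

```python
import numpy as np

# 16-QAM: 4 bits -> one complex symbol drawn from a finite set of 16 allowed values.
levels = np.array([-3, -1, 1, 3])
constellation = np.array([i + 1j * q for q in levels for i in levels])
constellation = constellation / np.sqrt(np.mean(np.abs(constellation) ** 2))  # unit average power

def bits_to_symbols(bits):
    """Group bits in fours and index into the 16-point constellation."""
    bits = np.asarray(bits).reshape(-1, 4)
    idx = bits @ np.array([8, 4, 2, 1])   # 4 bits -> integer 0..15
    return constellation[idx]

print(bits_to_symbols([1, 0, 1, 1, 0, 0, 1, 0]))   # two symbols from the 16 allowed points
```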

Why Constellations?

These signal constellations are presented without explanation or justification as obvious and essential features of comms systems. But they impose an extremely strong a-priori restriction on the modulated waveforms; specifically, they require that (with some simplification) every sample of the modulated analog waveform s(t) is restricted to a finite – and typically very small – set of allowed values represented by the points in the signal constellation.

So how did the concept of signal constellations become so prevalent? And what are their benefits and drawbacks? Are there alternatives, i.e. is it possible to design comms systems that do not use signal constellations in their traditional form? These are big questions that touch on many subtopics in comms design. Below are a few notes, comments and links. We may revisit this at the end of the term, time permitting.

  • In the comms literature, the questions we raised above fall under the technical problem of “coded modulation”. The coding and modulation blocks taken together perform the channel encoding function in the standard Shannon comms model.
  • The purpose of the channel encoder is to produce a set of output waveforms that are as dissimilar to each other as possible, so that the decoder at the receiver can tell them apart with maximum efficiency. In other words, we want the possible outputs of the channel encoder to be “spaced out” far apart from each other. (To make this concept precise we need a physically meaningful concept of “distance” between waveforms. This is something we will talk about at length; a toy numerical sketch also follows this list.)
  • There is no theoretical justification for splitting the channel encoding function in this way. We still choose this design for practical convenience, but this will in general incur a performance penalty. The idea behind coded modulation is to minimize this performance penalty by co-designing the two blocks to work well together. This is a somewhat strange dance (what does it mean to “co-design” two functional blocks together, while keeping them “separate”?).
  • The main practical reason for maintaining a separation between the coding and modulation blocks is that we do not really know how to design a “nice” set of analog real- or complex-valued waveforms. “Nice” means spaced out far apart from each other (in terms of waveform distance – see above), but also able to be stored, listed and sorted in a compact form.
  • On the other hand, “we” have invented extremely powerful methods for designing “nice” sets of symbol sequences from a finite alphabet (like a signal constellation). These methods come from the mathematics of finite fields, which is also the basis for many tools of modern cryptography.
  • There is a lot more to say about this, and hopefully we will return to this, but briefly, comms engineers now know how to make this “split” architecture work well enough to basically reach the Shannon limit for many waveform channels.
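
To make the “spaced out” idea slightly more concrete, here is a toy sketch of my own (not from the notes, and deliberately simplistic): map short bit sequences to BPSK samples and compare the minimum Euclidean distance between all allowed sequences with and without a simple parity constraint on the allowed set.

```python
import numpy as np
from itertools import product

def bpsk(bits):
    return 1.0 - 2.0 * np.array(bits)        # bit 0 -> +1, bit 1 -> -1

def min_distance(codewords):
    pts = [bpsk(c) for c in codewords]
    return min(np.linalg.norm(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])

all_words  = list(product([0, 1], repeat=3))              # every 3-bit pattern allowed
even_words = [w for w in all_words if sum(w) % 2 == 0]    # keep only even-parity patterns

print(min_distance(all_words))    # 2.0  -- nearest sequences differ in one position
print(min_distance(even_words))   # 2.83 -- restricting the set spaces the survivors out
```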

The One True Modulation

Modern electronic communication systems are usually designed to use passband signaling, i.e. the transmitted waveforms have their frequency spectra concentrated in a narrow range of frequencies around a carrier frequency. There are at least two excellent reasons for designing comms systems this way: (1) for fundamental physical reasons explored in another note, wireless links cannot be efficiently operated at low frequencies, and (2) this is a simple, effective and convenient multiplexing technique i.e. a method for multiple simultaneous transmissions to be sent over the same radio or wired link.

A Taxonomy of Passband Modulations

In textbooks and other resources, modulation methods are commonly presented in certain categories: analog v. digital; amplitude v. angle modulation and so on. A typical example is in this technical note from Rohm Semiconductor:

There are advantages to this classification system. It has some basis in the historical development of analog AM and FM. It also makes intuitive connections between digital and analog modulations e.g. ASK and FSK as natural digital variations on AM and FM respectively.

A Holistic Theory of Passband Modulation

In this note, we argue for a different way of thinking about modulations: the differences between all of the above kinds of modulations are superficial, unhelpful and ultimately unproductive; instead we should think of all passband modulations as special cases of one universal type of modulation: Quadrature Amplitude Modulation or QAM. To justify this perspective, we offer the following simple observations.

  1. Any passband signal x(t) can always be expressed in terms of a pair of baseband signals x_I(t),~x_Q(t) (the I- and Q-components) relative to a carrier signal: x(t) = x_I(t) \cos (2 \pi f_ct + \phi_c) - x_Q(t) \sin (2 \pi f_ct + \phi_c).
  2. The above representation is valid for any reasonable choice of carrier signal. In other words, the frequency and phase f_c,~\phi_c of the carrier signal are somewhat arbitrary, but it makes sense to choose the frequency near the center of the passband to minimize the bandwidth of the resulting baseband signals.
  3. This means that we can always represent a passband signal in terms of I- and Q-baseband components regardless of how it was originally generated. In particular, FM signals can be expressed in this way: x_{FM}(t) = A \cos \left( 2 \pi f_c t + \phi(t) \right),~\phi(t) = \phi_c + k_f \int m(t) dt has the I- and Q-components x_I(t) = A \cos \phi(t),~x_Q(t) = A \sin \phi(t). (A numerical check of this identity appears right after this list.)
  4. Note that the above is true of any reasonable FM signal; we do NOT limit ourselves to the special case of “narrowband” FM sometimes highlighted in textbooks where \phi(t) \ll  \pi,~\forall t and the AM connection is more obvious x_Q(t) \approx A \phi(t).
  5. To phrase it crudely, FM is a special case of quadrature AM where the Q-component is chosen to make the envelope of the modulated signal constant over time!
  6. For digital modulations, PAM and PSK constellations are obviously geometric special cases of QAM and there is nothing to be gained in making artificial distinctions between them.
  7. At first sight, FSK seems to have distinct features: e.g. (a) an FSK modulated signal may be expected to show multiple distinct peaks on a spectrum analyzer, (b) unlike FSK, PSK and PAM may have discontinuities at symbol boundaries. However, these differences, if they are present, are signs of a poorly designed system. An FSK signal with multiple visually observable peaks represents an inefficient use of bandwidth, and pulse-shaping should eliminate any signal discontinuities in PSK, PAM or QAM signals.

A more reasonable distinction can be made between linearly modulated signals with geometrically simple QAM constellations and more complex modulations e.g. OFDM, but in all cases, it remains true that the information content in a passband signal can be expressed in terms of baseband I- and Q-components or their samples.

Close Reading – “early wire and radio art”

The literature from the early 20th century on the parallel development of the “wire and radio art” makes for fascinating reading today. In this note, we will look at some excerpts from this article:

A. A. Oswald, “Early History of Single-Sideband Transmission,” in Proceedings of the IRE, vol. 44, no. 12, pp. 1676-1679, Dec. 1956, doi: 10.1109/JRPROC.1956.275033.

Consider specifically this remarkable paragraph:

The first step was the recognition of sidebands per se. Until well after Carson’s invention, there seems to have been no general, clear-cut recognition outside the Bell System, that modulation of a carrier by voice waves results in side frequencies above and below the carrier. LeBlanc, in describing his multiplex system [3], speaks of the modified high-frequency wave and calls for a channel spacing “high compared with the pitch of the sound waves.” This might be construed as implying that a transmission band is involved but LeBlanc makes no comments in this direction. Fleming [4] treats the modulated carrier as a wave of constant frequency but varying amplitude. Stone [5], as late as 1912, says, “There is, in fact, in the transmission of a given message, (by carrier) but a single frequency of current involved.”

The phrase “recognition of sidebands” sounds strange to a modern reader. What does it mean to fail to recognize sidebands? The last sentence is easier to understand: a high frequency carrier remains very nearly a pure sine-wave when modulated by varying its amplitude in sympathy with a slowly-changing voice waveform. The frequency spread of such a modulated signal should not be significantly different from that of a single-frequency sine-wave.

This is a true and fairly trivial observation. In modern language, we would say the Fourier transform of a gently modulated carrier wave i.e. “a narrowband signal” looks approximately like a Dirac delta function at the carrier frequency!

Enter now a “multiplex system”, i.e. a comms link that supports multiple simultaneous transmissions. Now it becomes necessary to quantify, at least crudely, the frequency spread of each transmission, e.g. via the simple heuristic: channel spacing “high compared with the pitch of the sound waves.”

When the multiplex systems start to scale up, and the frequency spacing becomes smaller, we eventually get to a place where the “bandwidth” of the modulated carrier needs to be specified more precisely. This eventually leads to the discovery of the existence of two sidebands and their redundancy, using both experimental and mathematical methods. Hence “recognition of sidebands”!
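
The two sidebands are easy to see numerically. Here is a small sketch (the sample rate, carrier and tone frequencies are made-up example values): amplitude-modulate a carrier with a single tone and look at where the spectral lines land.

```python
import numpy as np

# A carrier amplitude-modulated by a single tone has spectral lines at f_c
# and at f_c +/- f_m -- the two sidebands. All numbers below are example values.
fs, fc, fm = 100_000, 10_000, 500           # sample rate, carrier, message tone (Hz)
t = np.arange(0, 0.2, 1 / fs)
x = (1 + 0.5 * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

spectrum = np.abs(np.fft.rfft(x)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[spectrum > 0.05])               # [ 9500. 10000. 10500.] Hz
```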

As a final note, the “high frequencies” involved in these early carrier systems were at frequencies far lower than what we described in an earlier note as the “prime beachfront real-estate” of the wireless spectrum!
(Source: https://en.wikipedia.org/wiki/High_frequency)
How the various frequency bands got their names is itself a fascinating story for another day.

EM theory for comms – Part 2: Wireless Beachfront Real-Estate

In theory, we can build wireless comms over a wide range of frequencies, ranging from near-DC (low frequencies with very long waves, e.g. 1000 mi) to ultraviolet (very high frequencies with very short waves, e.g. 10^{-7} m). In practice, there is a small band of spectrum which is, basically, prime EM real-estate. It is easy to understand what these frequencies are and why they are so desirable based on ideas we have already been discussing.

To start from the basics: any current carrying wire generates wireless radiation i.e. functions as a transmitting antenna.

(Conversely, any piece of wire exposed to ambient EM waves can function as a receiving antenna. There is a famous reciprocity principle that asserts, roughly, that the properties of a receiving antenna are closely related to the transmitting properties of the same antenna: physical/geometric structures that make good transmitting antennas also make good receiving antennas and vice versa, and in particular an antenna’s radiation and receiving patterns are identical. We will focus on transmitters in our discussion; see the Wikipedia article on reciprocity for some details.)

Any time-varying current creates an H-field (i.e. magnetic field) around it, which generates an E-field, which generates an H-field, and so forth. This is what is called an EM wave. As an aside, the idea of imagining EM fields as the superposition of contributions from a primary source current, which produces H-fields that act as secondary sources producing more E- and H-fields, and so forth, is a very powerful and elegant tool dating back to the 17th-century physicist Huygens and more recently popularized by celebrity scientist Richard Feynman. From Wikipedia:

In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave; the sum of these secondary waves determines the form of the wave at any subsequent time.

Usually, we don’t want our metal wires in circuits acting like transmitting antennas; this represents undesirable power dissipation and produces EMI – basically EM pollution – that may mess up other nearby circuits. This is why we twist our twisted-pair wires, and shield our coax cables. The exception, of course, is wireless transmitter circuits where we want to produce a lot of radiation.

Electrically Small Antennas. Recall our definition of the corner frequency of a piece of wire as the frequency at which the electrical length of the wire is unity. At much lower frequencies, the waves are very long and the wire is said to be electrically small. And electrically small antennas are famously inefficient! From R. C. Hansen, “Fundamental limitations in antennas,” in Proc. IEEE, 1981:

With the miniaturization of components endemic in almost all parts of electronics today, it is important to recognize the limits upon size reduction of antenna elements. These are related to the basic fact that the element’s purpose is to couple to a free space wave, and the free space wavelength has not yet been miniaturized!

It is important to emphasize that this is a fundamental physical limit, rather than a shortcoming of our design methods. A very simplified intuitive explanation is as follows. Recall our earlier discussion of EM waves as E- and H-fields progressively generating each other. To get an energetic EM wave going, we need a strong source H-field generating a strong E-field. Indeed, in a propagating EM wave in free space, the energy of the wave is split exactly in half between the E- and H-fields.

At low frequencies, we can have strong E-fields or strong H-fields, but it is very hard to get both. As an example, we can make a large E-field by depositing a charge on a capacitor. However, at low frequencies, the associated charging current – and therefore the H-field – is very small. The resulting radiation is ultimately limited by the weaker of the E- and H-fields.

The beachfront real-estate of the EM spectrum

What frequency is high enough to efficiently generate EM waves? Recall from our discussion of transmission lines in Part 1 that propagating EM waves naturally arise when wavelengths are comparable to the physical dimensions of a circuit. The animation below shows resonance in a dipole one-half wavelength long:
(Source: Wikipedia)

Wavelengths that are close to human scale (λ~1 ft) are naturally the most convenient for wireless comms. The corresponding frequencies – roughly 300 MHz to 3 GHz or so – represent the most desirable real-estate in the EM spectrum. At frequencies significantly lower than this, we are in the challenging “electrically small” regime discussed above.

Subprime Real-Estate

But what about higher frequencies? By our reasoning above, it should be possible to build devices with very small form factors that can still radiate efficiently at very short wavelengths. Also, there is naturally more bandwidth available at high frequencies.

Unfortunately, these potential advantages are negated by a couple of serious drawbacks. First, circuit design is inherently challenging at very short wavelengths. In addition to transmission line effects, various kinds of parasitic resistances and reactances that are negligible at lower frequencies become significant. Transistor gain also degrades with frequency. However, these are technological limitations, and can be expected to improve over time.

A second class of limitations is more fundamental: short waves tend to be absorbed and obstructed by objects in the physical environment that are invisible to longer waves. A simple mental model helps visualize this important phenomenon quite nicely.

What is the size of an EM wave?

“How many angels can dance on the head of a pin?” (image: Laura Guerin, CK-12 Foundation)

With all appropriate disclaimers, picture EM waves as a stream of photons which are spheres with a radius equal to their wavelength. Thus a 100 MHz wave consists of long waves with photons 3 m in radius. Such long waves penetrate right through walls and other physical objects that are “thin” compared to their size. In contrast, a 10 GHz wave with photons 3 cm in size will be mostly absorbed or reflected by a concrete wall. Shorter waves in the mm-wave band are affected by even smaller objects such as dust or rain drops whose sizes are comparable to that of the photons.

In practice, waves shorter than 3 cm or so (> ~10 GHz) are limited to Line-of-Sight propagation. This makes these frequency bands inconvenient for indoor or mobile wireless comms.

It turns out that extremely short EM waves (e.g. in the visible light spectrum) do not propagate well for wireless comms, but can be caged and guided over very long distances with very high efficiency. Thus optical fiber is the medium of choice for present-day non-wireless comms applications.

Compared to radio waves, optical fiber is a nearly perfect comms medium and therefore somewhat uninteresting from a comms design perspective. Nevertheless, the physics of wave-guides presents an interesting contrast with both transmission lines and free space wireless. An intuitive discussion of wave-guides and optical fiber is a topic for Part 3.

Basic EM theory for comms engineers – Part 1: Metal Wires

The modern theory of electronic communication, due to Claude Shannon, is famously abstract. It does not concern itself with the contents of the messages being exchanged, nor with the physics of how the information transfer takes place. From the source:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.

In theory, this applies alike to communication using radio waves or smoke signals, or even to non-communication applications such as storing information on a magnetic tape. While this is beneficial in many ways, real-world communication engineering almost universally involves sending messages over electromagnetic waves. The actual practice of this engineering discipline will benefit a lot from some knowledge of the physics of EM waves.

Fortunately, this is a case where a little bit goes a long way. This series of short notes is intended to provide some very basic information about EM wave physics specifically tailored for comms engineers. The presentation is informal and simplified and aims to build a quick mental model to think intuitively about EM waves, rather than in-depth expertise. (Also, I have tried very hard to avoid any math! But don’t fret, there is no shortage of math in comm theory..)

Three types of comms links

Electronic comms systems have been built and used over a wide variety of physical media e.g. twisted-pair copper wiring, coax cable for CATV or computer networks, optical fiber and power-lines, in addition to wireless links over various frequency bands.

Fundamentally though, all of these media fall into one of just 3 categories: (a) metal wires, (b) wireless and (c) optical fiber. Each of these involves EM wave propagation, but the physical mechanisms are quite different and represent different special cases of the Maxwell equations (a vivid demonstration of the tremendous versatility of these equations).

1. Communication over conducting wires.

This category includes most wired media used in modern comms except optical fiber, and is theoretically the simplest, and historically the first, type of medium used for electronic communication. See e.g. this diagram of an early telegraph system (Sömmerring’s electric telegraph of 1809, from Wikipedia).

For short link distances and slow signaling speeds, a metal wire acts like a short circuit; a voltage change at the transmitter instantaneously appears at the receiver. However, once the signaling speed becomes fast enough, it becomes necessary to account for the finite speed of signal propagation (which is approx 1 ft/ns in free space, slower in other media, but of the same order of magnitude).

Consider as a simple example a telegraphy-like system sending symbols encoded as dots or dashes, by applying a short or long pulse at the transmitter once every T = 1 μs. If the link distance is longer than 1000 ft, the propagation delay over the link is longer than T; by the time the receiver sees a symbol, the sender is already transmitting a new symbol. The metal wire no longer behaves like a short circuit, not even approximately!

A signaling rate of one symbol every 1 μs requires a voltage waveform with a bandwidth of at least 1 MHz. An EM wave with a frequency of 1 MHz has a (free-space) wavelength of about 1000 ft, which is equal to the link distance in our example. This is a simple, easy and intuitive criterion to remember: current-carrying metal wires no longer function like short circuits when the voltages in the circuit contain frequencies whose wavelengths are close to or shorter than the length of the wire.

Let us call this the corner frequency of the metal wire. Note that this corner frequency is a purely geometric quantity that depends only on the length of the wire and not on any material property of the wire. (For a specific frequency, the electrical length of a conducting wire is the length of the wire as a fraction of the wavelength. Thus, the corner frequency is defined as the frequency at which the electrical length of the wire is unity.)
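
A quick numerical restatement of the telegraph example above (the numbers follow the text; the helper names are my own):

```python
# Electrical length and corner frequency of a wire, using the rough
# free-space figure of ~1 ft/ns from the discussion above.
C_FT_PER_S = 1e9   # ~1 ft/ns, i.e. ~3e8 m/s

def electrical_length(wire_length_ft, freq_hz):
    """Wire length as a fraction of the free-space wavelength at freq_hz."""
    return wire_length_ft * freq_hz / C_FT_PER_S

def corner_frequency(wire_length_ft):
    """Frequency at which the electrical length of the wire is unity."""
    return C_FT_PER_S / wire_length_ft

print(corner_frequency(1000))          # 1e6: a 1000 ft link hits its corner at ~1 MHz
print(electrical_length(1000, 1e6))    # 1.0: no longer remotely a "short circuit"
print(electrical_length(3, 1e6))       # 0.003: a 3 ft hookup wire at 1 MHz is still fine
```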

In this regime, metal wires can be modeled as transmission lines, i.e. a distributed network of series inductances and shunt capacitances that account respectively for the voltage and current variations over the length of the wire; see e.g. this diagram (from Wikipedia):

In addition, phenomena such as the skin effect mean we can no longer treat metal wires as near-lossless conductors (hence the R and G terms in the Heaviside model shown above). While it is possible to build useful RF and microwave circuits with metal wires functioning as transmission lines, their properties as a comms medium become severely degraded beyond these frequencies.

It turns out that the frequencies where metal wires stop functioning like short circuits are precisely the frequencies that we like to use for wireless comms. This is not a coincidence: when metal wires behave like short circuits, it is hard to turn them into antennas; conversely, conducting wires make efficient antennas only when operating close to or above their corner frequencies.

Thus, there is a sweet spot in the EM spectrum with frequencies ranging from roughly 300 MHz (λ~1m) to 3 GHz (λ~10cm) that is best suited for wireless comms; waves much longer than this are hard to generate and detect and waves much shorter than this don’t propagate well. We will explore this in more detail in Part 2 of this series.

A practice problem with diodes

Here’s a practice problem with diodes that has some interesting aspects to it. This circuit is about as complicated as diode circuits get and performs a simple and potentially useful function. In other words, the output voltage V_o varies with the input V_1 in a simple and potentially useful way. Let’s see if we can figure out what that input-output relationship is.

practice problem with diodes

You should eventually build this circuit in SPICE, and perform DC or transient analysis to see what it does. But first, let’s use a simple ideal piecewise diode model to understand this circuit. If you are able to fully understand and explain the behavior of this circuit, you know everything you need to know about diodes for the purposes of our class. For that reason, this is a good practice problem for your exam 1.

We will assume all 4 diodes are identical and have a constant voltage drop V_\gamma = 0.7 V in forward conducting mode and zero current otherwise.

Because there are 4 diodes, each of which can be on or off, there are as many as 2^4=16 possible states for the diodes in this circuit for a given input voltage V_1. It can be quite tedious to analyze each of those cases for every possible value of V_1. (The bookkeeping for this brute-force approach is sketched below.)
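
The sketch below only enumerates the 16 candidate ON/OFF assignments; the consistency check for each state depends on the specific circuit in the figure, so it is stubbed out here (the commented-out function names are placeholders, not something you are given).

```python
from itertools import product

# Enumerate the 2^4 = 16 candidate ON/OFF assignments for diodes D1..D4.
# For each assumed state you would solve the resulting linear circuit and keep
# the state only if it is consistent: ON diodes must carry i >= 0, and OFF
# diodes must have v < V_gamma. The circuit equations themselves depend on the
# schematic, so they are left out here.
V_GAMMA = 0.7  # volts, forward drop assumed for all four diodes

for state in product(("ON", "OFF"), repeat=4):
    d1, d2, d3, d4 = state
    # node_voltages = solve_circuit(state, v1)       # placeholder
    # if is_consistent(state, node_voltages): ...    # placeholder
    print(d1, d2, d3, d4)
```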

With a little bit of thought though, we can narrow this down quite substantially. Here are a few hints. See if you can explain why the following observations must hold:

  • The voltage of node A cannot be greater than 5 V i.e. V_A \leq 5. Likewise, we must have V_B \geq -5 V. Note however, that there are no constraints on how low V_A, and how high V_B, can get.
  • Node C cannot be at more than V_\gamma above node B i.e. V_C \leq V_B + V_\gamma.
  • By the same reasoning V_A \leq V_C+V_\gamma \leq V_B + 2 V_\gamma.
  • If D_1 and D_3 are both off, then V_A = 5 V. Similarly, if D_2 and D_4 are both off, then V_B = -5 V.
  • The two previous observations show that it is not possible for all 4 diodes to be OFF at any time. Can you see why?
  • By considering a small number of “corner cases” for extreme values of V_1, we should be able to connect them to figure out what happens for “in between” values of V_1.
  • Case 1: When V_1 becomes very large, e.g. V_1=15 V, we would expect D_1 to be OFF and D_2 to be ON. This causes V_B to be large enough that D_4 must be off. The output voltage V_o will then be close to V_A, which in turn will be close to 5 V.
  • Case 2: Can you repeat the reasoning from Case 1 for small V_1 e.g. V_1=-15 V?
  • Case 3: When V_1 is small i.e. V_1 \approx 0, purely by symmetry, we would expect V_o \approx 0. What does this mean for the state of each of the diodes and the other node voltages?

Thinking about ideal op-amps

In your previous Circuits class, you were introduced to ideal op-amp circuits. In our class this semester, we will be building on this and become familiar with a small number of simple, but very useful op-amp circuits (e.g. inverting and non-inverting feedback amplifiers, integrators, adders and a few variants of these). Meanwhile we have been doing some HW problems to refresh your knowledge of ideal op-amps. Here’s a quick summary of what you need to know.

The first thing to know about op-amps is that, though their ideal circuit behavior can be described in very simple terms, they are quite complex internally. How complex? Here’s a look at the internals of a classic model (which we may return to later in class if we have time). This complexity is unlike other circuit elements such as sources, resistors, capacitors and transformers that you learned about in your previous class. At this point, we do not know how to analyze arbitrary op-amp circuits even under ideal conditions, much less design them; a naively designed op-amp circuit can easily show instability.

All we know how to do is to solve approximately for voltages and currents in a class of negative feedback op-amp circuits that we trust to be stable and within their operating limits with DC (or low frequency) inputs. Even this limited knowledge can be extremely valuable, but it is wise to be appropriately humble about this.

A simple model for an op-amp is shown in this figure (from here):

The input currents i_+,~i_- are both very small (ideally zero), and the gain A is very large (ideally \infty). Thus, a very small voltage difference v_+-v_- at the op-amp inputs can produce a large output voltage v_0.

In the kind of circuits we are interested in, the op-amp “looks” at the output voltage through a feedback connection and adjusts the output voltage and current v_0,~i_0 to be whatever they need to be, to force the currents and voltage difference at the input terminals to be zero i.e. i_+=i_-=0,~v_+=v_-. In other words, the op-amp adjusts its output voltage and current in such a way that the input terminals look simultaneously like an open circuit and a short circuit! It is very elegant and almost seems magical when you set it up correctly.

And this is all you need to know to solve the kind of op-amp circuits we will encounter this semester! Sort of…

Unfortunately, it is too easy to get carried away by the simplicity of the ideal op-amp model, which is how you end up with this sloppy example (from your official textbook from Circuits class!):

It is easy enough to analyze this circuit using the ideal op-amp model: the op-amp “looks” at the output voltage at the inverting input through the feedback network, which is basically a simple voltage divider: v_- = \frac{1k}{101k} v_0. The op-amp then forces the output voltage v_0 to be whatever it needs to be to make v_-=v_+ = 1~mV.
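
In numbers, and assuming the feedback network really is a 1 kΩ / 100 kΩ divider (which is what the 1k/101k ratio suggests), the ideal-op-amp calculation looks like this:

```python
# Ideal op-amp analysis of the non-inverting amplifier discussed above.
# Feedback divider: v_minus = R_g / (R_g + R_f) * v_o = (1k / 101k) * v_o,
# and the op-amp drives v_o until v_minus = v_plus.
R_g, R_f = 1e3, 100e3     # assumed divider values (ohms)
v_plus = 1e-3             # 1 mV at the non-inverting input

v_o = (1 + R_f / R_g) * v_plus   # gain of 101
print(v_o)                       # 0.101 V, i.e. 101 mV
```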

But can you see what’s wrong (or missing) in the schematic above that may cause this circuit to not work as you may expect? (Hint. Can you use a KCL around the red surface surrounding the op-amp and argue that the output current must always be zero? What, if anything, is wrong with that argument?)

Oscillography

Starting with this week’s lab, we are going to get comfortable with using an oscilloscope very quickly. Fortunately, modern digital ‘scopes have an accessible interface for simple waveform measurements and are therefore quite easy to get started on.

Here’s a very short introductory video on getting started with an oscilloscope:

(I really like the Keysight YouTube channel and I expect you will see me link to many of their videos over the course of this semester. I would also encourage you to browse their channels yourself. The content is interesting – and well-edited. And please share your own links, comments or questions!)

We mentioned in passing the effect of probe attenuation in the waveform measurement. Here’s a short video showing the effect of probe-loading (which you can think of as similar to the famous quantum observer effect where the act of measurement changes the thing you are trying to measure):

We also talked a bit about the history of analog oscilloscopes especially the versatile CRT. Tektronix is an important company in this history. Here’s a video that was apparently produced to mark the sad occasion of the closing of their manufacturing operation:

Finally, here’s a silly one:

Drain pipes for electrons

In class, we referenced a hydraulic analogy to current flow in circuits. This analogy has deep historical roots but has long been controversial (“does it do more harm than good?”). The derisive term “drain-pipe theory” for this analogy is attributed to the British physicist Oliver Lodge.

My personal opinion is that this analogy is very useful to solidify the basic physical concepts of charge (total quantity of charge, like gallons of water), current (rate of flow, like gallons per second) and voltage (energy level difference per unit charge, like elevation in a gravitational field). Of course, like any analogy (or physical model for that matter), this breaks down when you think about it more than a little. In fact, the analogy very quickly loses its usefulness when you start looking for hydraulic analogues of specific circuit elements such as batteries and resistors; see e.g. here for the kind of unproductive knots you can easily find yourself tied up in.

That said, this analogy will work fine at a superficial level for almost everything we do this semester (up until the very last week in fact), and the gravitational potential paradigm referenced in the Wikipedia article has the official HawkEE endorsement!

To start with, you may find the animations here useful to visualize the concepts of current and voltage.

Happy Plumbing with electrons!