Authors: Hileman, Wesley

26 May 2020

**Keywords:** Complex Number, Fourier Transform, Impedance, Sinusoid

**Abstract:** A description of the concept of impedance as it relates to electric circuits. Topics discussed include the impedance of typical circuit elements (resistors, capacitors, and inductors), the versatility of the sine wave, and the role of complex numbers in circuit analysis.

Before I learned to analyze analog circuits in a university setting, I regularly tried to sort out the meaning of *impedance*. What was this mysterious quantity, ubiquitous among electrical engineers, that had bested me at every confrontation? Every explanation I heard seemed a riddle, full of half-truths and missing information. "Impedance is like resistance for capacitors and inductors." "Impedance is the measure of a circuit's opposition to current flow expressed as a complex quantity." "Impedance is the effective resistance of a component to alternating current." Without a solid background in electronics, these statements drove me to bewilderment.

It's possible to understand and apply impedance without extensive knowledge of analog circuits. This article introduces the concept for the student trapped in the "impedance rut". The only prerequisites are knowledge of basic algebra, trigonometry, calculus, and electronics (voltage, current, and resistance.)

Engineers apply the term "impedance" to several seemingly different quantities in electronics, but the harmonizing definition is this:

**In the general sense, impedance is the quantity that relates the voltage present across a component (however that voltage may be expressed) to the current flowing through the component (however that current may be expressed) in a time-independent fashion.**

In mathematical form, a component's impedance is the time-independent quantity $Z$ such that

$$ V = Z \cdot I \hspace{0.5cm}\textrm{(Impedance)}\label{eq:impedance} $$

where $V$ is the representation of the voltage across the component and $I$ is the representation of the current flowing through the component. The equation's resemblance to a fundamental relationship is readily discerned; when $V$ is the voltage across a resistance $R$, $I$ the current through $R$, and $Z = R$, the formula collapses to Ohm's law, showing resistance is a form of impedance. In fact, impedance generalizes the concept of resistance.

Take care to note the impedance equation is *decoupled from time*. That is, the time variable $t$ does not appear in the equation and $Z$ in no way depends on time. $V$ and $I$ may very well evolve with time, $V = V(t)$ and $I = I(t)$, but they are related in a strictly instantaneous sense: $V(t)$ in no way depends on past values of $I(t)$; the voltage across the component at a particular instant depends only on the current flowing through the component at that same instant, and vice versa. The V-I relationship of a resistor, as described by Ohm's law, meets the time-independence constraint.

Why is impedance, defined in this way, a useful concept? Consider a purely resistive circuit. With Ohm's law come a wide assortment of algebra-based circuit analysis techniques: series and parallel resistance reductions, delta-to-wye conversion, voltage-to-current source transformation, Thévenin and Norton equivalent circuits, and the node-voltage method, to name a few. By generalizing the concept of resistance, we can apply these resistor analysis techniques to circuits containing other components, using impedance in place of resistance, provided we can find a way for those components to satisfy the impedance equation.
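For instance, the series and parallel reduction rules become one-line functions. Here's a minimal sketch (the helper names are my own; with plain resistances they reduce to the familiar resistor rules, and they work unchanged once impedances become complex numbers later in the article):

```
def series(*zs):
    # Impedances (or resistances) in series simply add
    return sum(zs)

def parallel(*zs):
    # Impedances in parallel combine reciprocally
    return 1 / sum(1 / z for z in zs)

print(series(100, 220))    # 320
print(parallel(100, 100))  # 50.0
```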

The natural next step is to apply the impedance equation to a capacitor or inductor. Under this line of action, however, we quickly reach an impasse. Consider an inductor's voltage-current relationship:

$$ V = L\frac{dI}{dt} \hspace{0.5cm}\textrm{(Inductor Voltage-Current)}\label{eq:inductor} $$

Here, $V$ is the voltage across the inductor with inductance $L$, $I$ is the current flowing through $L$, and $t$ is the time variable, which is also the first hint of trouble. The equation reveals that the voltage across an inductor depends not on the flow of current itself, but on its *time derivative*: the current's rate of change with time. As such, $Z$ cannot possibly be defined so that the impedance equation, independent of time, reduces to the inductor voltage-current equation. Alas, our plan is foiled!

Does this mean an inductor's impedance is undefined? No, as it turns out, but we must stretch our wits and change the way that we represent voltage and current. If we restrict $V$ and $I$ to vary *sinusoidally*, that is, as pure sine waves, the impedance of an inductor or capacitor (or any other two-terminal component) can be defined as such:

**In the common sense, impedance is the quantity that relates the sine-wave voltage present across a component to the sine-wave current flowing through the component.**

Why should we restrict ourselves to sinusoidal signals? And how can we represent sine-wave voltage and current such that the impedance equation is valid? The next two sections address these issues.

The prominence of the sine wave in electronics (and physics in general) is unarguable. From analysis of signals and circuits to radio transmissions, the sinusoid is present. In our case, we define impedance with the sinusoid. Just why is the sine wave the *de facto* standard?

**As it turns out, most signals can be thought of as sums of sine waves, though those sums could be infinite and uncountable.**

In this way, the sinusoid serves as the "building block" of general signals, and a restriction to sine waves is in effect not a restriction at all. This may be difficult to imagine, so let's ponder the issue with several examples.

First, let's build a signal by adding two sine waves. Give them frequencies of 95 and 105Hz:

$$ w(t) = \sin(2 \pi \cdot 95t) + \sin(2 \pi \cdot 105t) $$

Combining two similar sinusoids in this way, we might expect a waveform not too different from the component sine waves themselves. Plotting the waveform, however, reveals a surprise:

```
# warble.py
import math
import numpy as np
import matplotlib.pyplot as plt
tt = np.arange(0, 0.3, 0.0001, dtype=np.float32)
ww = np.sin(2*math.pi*95*tt) + np.sin(2*math.pi*105*tt)
plt.plot(tt, ww)
plt.show()
```

Although the signal's base is a 100Hz sinusoid, matching our expectation, the signal also fades in and out regularly: the base sinusoid's amplitude is *modulated* by a 5Hz cosine (the identity $\sin a + \sin b = 2\sin\frac{a+b}{2}\cos\frac{a-b}{2}$ makes this explicit), producing "beats" at 10Hz, the difference of the two component frequencies.

Let's step up our ambition and create a signal by adding an *infinite* number of sine waves. For this to work, the amplitudes of the sine waves must continually become closer to zero as we add additional terms; otherwise, the sum *diverges*, failing to reach a stable limit. (This condition is necessary, but not sufficient, to guarantee convergence.) Define the sum as such:

$$r(t)=\sum_{k=1,3,5\ldots}^{\infty}\frac{(-1)^{\frac{k-1}{2}}}{k^{2}}\sin(2\pi\cdot10kt)$$

The amplitudes of the summed sinusoids vary as $\frac{1}{k^2}$ and their frequencies as $10k$; the $(-1)^{\frac{k-1}{2}}$ factor alternates the sign of each sinusoid in the sum. We obtain a curious result plotting an approximation of this infinite-sum signal:

```
# triangle.py
import math
import numpy as np
import matplotlib.pyplot as plt
# Number of terms used to approximate infinite sum
NTERMS = 100
tt = np.arange(0, 0.3, 0.0001, dtype=np.float32)
rr = np.zeros(len(tt), dtype=np.float32)
for k in range(1, NTERMS*2, 2):
    amp = (-1) ** int((k - 1)/2)
    amp *= 1/(k ** 2)
    rr += amp * np.sin(2*math.pi*10*k*tt)
plt.plot(tt, rr)
plt.show()
```

We've built a triangle wave with sharp edges from smooth sinusoids! How is this possible? The key is the infinite number of sine waves in the sum; each additional sinusoid sharpens the edges of the signal, yielding a true triangle as the number of terms in the sum approaches infinity.

Finally, let's combine an *uncountable* number of sine waves to generate a signal. To do this, we'll leverage the integral from calculus. Suppose we add all unity-amplitude sinusoids with frequencies from 0 to 10Hz:

$$d(t)=\intop_{0}^{10}\sin(2\pi\cdot ft)df$$

Plotting an approximation of the signal yields yet another surprise:

```
# damp_osc.py
import math
import numpy as np
import matplotlib.pyplot as plt
# The df to use when approximating the integral
DF_HZ = 0.001
tt = np.arange(0, 0.7, 0.0001, dtype=np.float32)
dd = np.zeros(len(tt), dtype=np.float32)
for k in range(0, round(10/DF_HZ)):
    f = DF_HZ*k
    dd += np.sin(2*math.pi*f*tt) * DF_HZ
plt.plot(tt, dd)
plt.show()
```

This time, we've created a non-repeating (aperiodic) signal from sinusoids, which are repetitive (periodic) by nature! The key here is that it takes an uncountable number of sinusoids to generate the aperiodic signal.

We've shown we can build both periodic and aperiodic signals with sinusoids. But how do we find a sinusoidal decomposition of an arbitrary signal? That is, given a signal we're interested in, like a square wave or exponential signal, how can we find a sine-wave sum or integral that converges to the signal? The *Fourier transform* is the answer; the Wikipedia article on the Fourier transform is a good starting point for further reading.
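As a small taste of the transform in action, NumPy's FFT routines can recover the two frequencies hidden in the warble signal from earlier (a sketch; the sample rate and signal length are my own choices):

```
import numpy as np

fs, n = 10000, 10000                 # sample rate (Hz) and sample count: one second
tt = np.arange(n) / fs
ww = np.sin(2*np.pi*95*tt) + np.sin(2*np.pi*105*tt)

spectrum = np.abs(np.fft.rfft(ww))   # magnitude spectrum
freqs = np.fft.rfftfreq(n, d=1/fs)   # frequency of each FFT bin

# The two largest spectral peaks land at the component frequencies
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]]).tolist()
print(peaks)  # [95.0, 105.0]
```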

Seeing the versatility of the sine wave, we move to the problem of representing a sinusoid for use in the impedance equation.

The impedance equation, $V = Z \cdot I$, requires voltage and current expressed as single quantities, namely $V$ and $I$. If voltage and current vary as sinusoids, how is this possible? After all, *three* quantities describe a sine wave: frequency ($f$), amplitude ($A$), and phase offset ($\phi$):

$$x(t) = A\sin(2\pi \cdot ft + \phi)$$

Somehow, we must squeeze these quantities into one for use in the impedance equation; we need to create a quantity that encapsulates a sinusoid's frequency, amplitude, and phase.

Before we attempt to find such a quantity, we can simplify the issue by making an observation about the behavior of circuits created with *linear and time-invariant (LTI)* components (ideal resistors, inductors, capacitors, and operational amplifiers among others). When we drive such a circuit with sine-wave voltages and currents of a given frequency $f$, all other voltages and currents in the circuit must vary sinusoidally *with the same frequency* $f$ (in steady-state operation):

**A circuit built entirely of linear and time-invariant components (resistors, inductors, capacitors, op amps, etc) affects only the amplitude and phase of the sine-wave voltages and currents applied to it.**

Given that frequency is constant in a circuit driven by sinusoidal voltages and currents, we need not specify it for *every* voltage and current in the circuit; rather, we write the incident frequency in the margin and represent individual sinusoids with their amplitudes and phase offsets.

To express sine-wave voltages and currents for use in the impedance equation, we now need to find a quantity encapsulating only two values: amplitude and phase. *Vectors*, in the context of linear algebra, offer one solution. A single 2-vector suffices to contain the amplitude $A$ and phase $\phi$ of a sine wave $x(t)$:

$$ X = \left[\begin{array}{c} A\\ \phi \end{array}\right] $$

The column vector $X$ encapsulates all of the dynamic information stored in the sinusoid $x(t)$, and it appears we've found a suitable single-quantity representation for sinusoids. But there's a hiccup: if we represent voltage $V$ and current $I$ with 2-vectors, how do we express the impedance $Z$ to properly relate the vectors by multiplication in the impedance equation? And how do we define multiplication on vectors in the first place?

To find out, we must determine the nature of the relationship between the vector-voltage and vector-current across a circuit component such as a resistor, inductor, capacitor, or any LTI element in general.

Ohm's Law relates the voltage $V$ across a resistance $R$ at any point in time $t$ to the current through that resistance at $t$:

$$V(t) = R \cdot I(t)$$

When the voltage and current across a resistor vary as sine-waves, this means the voltage sinusoid equals the current sinusoid scaled by the resistance $R$. In mathematical form, when

$$ V(t) = A_v\sin(2\pi \cdot ft + \phi_v) \hspace{0.5cm}\textrm{and}\hspace{0.5cm} I(t) = A_i\sin(2\pi \cdot ft + \phi_i) $$

then Ohm's Law requires

$$ A_v \sin(2\pi \cdot ft + \phi_v) = A_i R \sin(2\pi \cdot ft + \phi_i) $$

Equating these sinusoids' amplitudes, phase offsets, and frequencies, we arrive at the conclusion that the amplitude of the voltage sine-wave equals that of the current sine-wave scaled by $R$ and that the phase offsets of the two waves are equal. After noting the frequency $f$ of the two waves remains constant as expected, we write this in 2-vector form as such:

$$ \begin{equation} \left[\begin{array}{c} A_v\\ \phi_v \end{array}\right] = \left[\begin{array}{c} A_i \cdot R\\ \phi_i \end{array}\right] \label{eq:vvi-res} \end{equation} $$

Here's the graphical relationship between the voltage and current in a 10Ω resistor when the current varies as a sine-wave with $A = 1$ ampere, $\phi = \pi/6$ radians, and $f = 100$ hertz:
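The relation is also easy to check numerically in the style of the earlier scripts (the variable names are my own):

```
import numpy as np

R, A_i, phi_i, f = 10, 1.0, np.pi/6, 100
tt = np.linspace(0, 1/f, 10001)            # one full period
ii = A_i * np.sin(2*np.pi*f*tt + phi_i)    # current in amperes
vv = R * ii                                 # Ohm's law, point by point

# Amplitude scales by R; the waves peak at the same instant (no phase shift)
print(round(float(vv.max()), 3))           # 10.0
print(bool(np.argmax(vv) == np.argmax(ii)))  # True
```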

The inductor voltage/current equation relates the voltage $V$ across an inductance $L$ to the current $I$ flowing through the inductance at any time $t$:

$$V(t) = L \cdot \frac{d}{dt}I(t)$$

Proceeding as in the resistor case, when the voltage and current vary sinusoidally the above equation requires:

$$ \begin{align*} A_v \sin(2\pi \cdot ft + \phi_v) &= L \frac{d}{dt}[A_i \sin(2\pi \cdot ft + \phi_i)] \\ &= L A_i \cos(2\pi \cdot ft + \phi_i) \cdot 2\pi f \\ &= A_i \cdot 2\pi f L \sin(2\pi \cdot ft + \phi_i + \frac{\pi}{2}) \end{align*} $$

(Note the application of the chain rule for differentiation and the trigonometric identity $\cos(\theta) = \sin(\theta + \pi/2)$.)

A key observation here is that differentiating a sinusoid with a given frequency produces another sinusoid of the same frequency, but modified amplitude and phase. Ergo, when the current in an inductor varies sinusoidally, the voltage also varies sinusoidally with the same frequency. The converse is also true; when the voltage across an inductor varies as a sine-wave, the current must vary as a sine-wave.

Equating the voltage and current sinusoids of the inductor $L$, we find the amplitude of the voltage wave equals that of the current wave scaled by the factor $2\pi fL$; also, the phase of the voltage wave is offset by $\pi/2$ radians from the current wave:

$$ \begin{equation} \left[ \begin{array}{c} A_v\\ \phi_v \end{array} \right] = \left[ \begin{array}{c} A_i \cdot 2 \pi f L\\ \phi_i + \frac{\pi}{2} \end{array} \right] \label{eq:vvi-ind} \end{equation} $$

Does this finding agree with our mental model of an inductor? To find out, let us examine each implication in turn.

1. **For a given current amplitude $A_i$ and frequency $f$, the voltage amplitude $A_v$ is directly proportional to the inductance $L$; increasing $L$ increases $A_v$.** Inductance gives the total magnetic flux developed through the coils of an inductor per unit of current flowing through those coils. Given a fixed current flowing through an inductor, a higher inductance means a larger amount of flux developed through the inductor's coils. When the current through $L$ varies sinusoidally, the flux also varies sinusoidally with the same frequency and phase, but with amplitude equal to $LA_i$. Thus, the higher $L$ is, the *steeper* the variance in flux with time (a sinusoid's amplitude contributes to its "steepness", i.e. its maximum slope). Since the voltage across an inductor equals the rate of change of flux with time, the voltage amplitude also increases with $L$.
2. **For a given current amplitude $A_i$ and inductance $L$, increasing the frequency $f$ increases the voltage amplitude $A_v$.** By the same line of reasoning applied in (1), the flux developed through the coils of $L$ varies sinusoidally with the same frequency as the current. Thus, increasing the current frequency $f$ increases the steepness of the flux variance (a sinusoid's frequency also determines its "steepness"). This increases the voltage amplitude $A_v$.
3. **For a given inductance $L$ and frequency $f$, increasing the current amplitude $A_i$ increases the voltage amplitude $A_v$.** As shown in (1), the amplitude of the sinusoidal flux developed through the inductance $L$ is $LA_i$. Ergo, the higher $A_i$, the steeper the variance in flux with time and hence the higher the voltage amplitude.
4. **Regardless of the inductance $L$, frequency $f$, current amplitude $A_i$, and voltage amplitude $A_v$, the voltage sinusoid always leads the current sinusoid by $\pi/2$ radians; that is, peaks in the voltage sinusoid always come before peaks in the current sinusoid.** The inductor voltage-current equation stipulates the voltage across an inductor depends directly on the rate of change of current flowing through the inductor; therefore, the voltage peaks at the points in time where the current is steepest. This corresponds to an offset of $\pi/2$ radians.

Here's a plot of the voltage and current in an $L = 3$ mH inductor when the current varies as a sine-wave with $A = 1$ ampere, $\phi = \pi/6$ radians, and $f = 100$ Hz:
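We can also confirm the derived amplitude and phase shift numerically by differentiating the current with NumPy's `gradient` (a sketch; the time grid is my own choice):

```
import numpy as np

L, A_i, phi_i, f = 3e-3, 1.0, np.pi/6, 100
tt = np.linspace(0, 2/f, 200001)          # two periods, fine grid
ii = A_i * np.sin(2*np.pi*f*tt + phi_i)
vv = L * np.gradient(ii, tt)              # V = L dI/dt, numerically

# The vector relation predicts amplitude A_i*2*pi*f*L and a +pi/2 phase shift
pred = A_i * 2*np.pi*f*L * np.sin(2*np.pi*f*tt + phi_i + np.pi/2)
print(bool(np.allclose(vv, pred, atol=1e-3)))  # True
```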

We follow the same procedure as in the inductor case to find the relationship between the vector-voltage and vector-current of a capacitor. However, we must invert the result since the capacitor current-voltage equation relates current to voltage (as opposed to voltage to current in the inductor voltage-current equation):

$$I(t) = C \cdot \frac{d}{dt}V(t)$$

When the voltage and current vary sinusoidally the above equation stipulates:

$$ \begin{align*} A_i \sin(2\pi \cdot ft + \phi_i) &= C \frac{d}{dt}[A_v \sin(2\pi \cdot ft + \phi_v)] \\ &= C A_v \cos(2\pi \cdot ft + \phi_v) \cdot 2\pi f \\ &= A_v \cdot 2\pi f C \sin(2\pi \cdot ft + \phi_v + \frac{\pi}{2}) \end{align*} $$

Equating the amplitudes and phases of the sinusoids on either side of the equation, we arrive at the vector-current/vector-voltage relationship for the capacitor:

$$ I = \left[\begin{array}{c} A_i\\ \phi_i \end{array}\right] = \left[\begin{array}{c} A_v \cdot 2 \pi f C\\ \phi_v + \frac{\pi}{2} \end{array}\right] $$

Re-arranging the equality yields the vector-voltage/vector-current relationship; since $A_i = A_v \cdot 2 \pi f C$ and $\phi_i = \phi_v + \frac{\pi}{2}$, we can equivalently write $A_v = A_i \cdot \frac{1}{2 \pi f C}$ and $\phi_v = \phi_i - \frac{\pi}{2}$.

$$ \begin{equation} V = \left[\begin{array}{c} A_v\\ \phi_v \end{array}\right] = \left[\begin{array}{c} A_i \cdot \frac{1}{2 \pi f C}\\ \phi_i - \frac{\pi}{2} \end{array}\right] \label{eq:vvi-cap} \end{equation} $$

Here's a plot of the voltage and current in an $C = 1$ mF capacitor when the current varies as a sine-wave with $A = 1$ ampere, $\phi = \pi/6$ radians, and $f = 100$ Hz:
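The same numerical check works for the capacitor: starting from the voltage predicted by the vector relation, differentiating via $I = C\,dV/dt$ should recover the original current (a sketch, with my own choice of grid):

```
import numpy as np

C, A_i, phi_i, f = 1e-3, 1.0, np.pi/6, 100
tt = np.linspace(0, 2/f, 200001)
# Voltage predicted by the capacitor vector relation
vv = A_i/(2*np.pi*f*C) * np.sin(2*np.pi*f*tt + phi_i - np.pi/2)
ii = C * np.gradient(vv, tt)              # I = C dV/dt, numerically

# Differentiating the predicted voltage recovers the assumed current
print(bool(np.allclose(ii, A_i*np.sin(2*np.pi*f*tt + phi_i), atol=1e-3)))  # True
```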

At this point, we can infer the relationship between the sine-wave voltage and current of a *general* LTI component from our *specific* equations for resistors, inductors, and capacitors. Comparing equations $\eqref{eq:vvi-res}$, $\eqref{eq:vvi-ind}$, and $\eqref{eq:vvi-cap}$, the amplitude of the voltage sine-wave always equals that of the current sine-wave scaled by some factor, and the phase of the voltage wave equals that of the current wave shifted by some amount (zero in the case of the resistor). By induction, we write the general vector-voltage/vector-current relationship for an LTI component as

$$ V = \left[\begin{array}{c} A_v\\ \phi_v \end{array}\right] = \left[\begin{array}{c} A_i \cdot |Z|\\ \phi_i + \angle{Z} \end{array}\right] $$

where $|Z|$ and $\angle{Z}$ are constants that depend on the nature of the component and possibly the sine-wave oscillation frequency. Though arrived at through induction here, this equation does indeed hold for all LTI components.

Engineers call the factor $|Z|$ the component's *impedance magnitude* and $\angle{Z}$ the component's *impedance angle*. These two quantities constitute the impedance of the component, denoted $Z$; in union, they relate the sine-wave voltage present across the component to the sine-wave current flowing through the component by amplitude gain and phase shift. Combining these two quantities in 2-vector form, we can write a component's impedance as

$$ Z = \left[\begin{array}{c} |Z|\\ \angle{Z} \end{array}\right] $$

It looks like we've found the impedance of a general LTI component, but there's one last snag.

While we can represent sine-wave voltage, sine-wave current, and impedance using 2-vectors, there's an issue. Impedance is defined as the quantity that relates the voltage across a component to the current in that component through *multiplication*—but how do we multiply 2-vectors such that the impedance equation works? We multiply the magnitude parts and add the angle parts of the vectors:

$$ V = Z \cdot I = \left[\begin{array}{c} |Z|\\ \angle{Z} \end{array}\right] \cdot \left[\begin{array}{c} A_i\\ \phi_i \end{array}\right] = \left[\begin{array}{c} |Z| \cdot A_i\\ \angle{Z} + \phi_i \end{array}\right] $$
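If we did commit to 2-vectors, this special product is simple to code as a helper function (`polar_mul` is my own name, not a standard routine):

```
import math

def polar_mul(a, b):
    # Multiply (magnitude, angle) pairs: magnitudes multiply, angles add
    return (a[0] * b[0], a[1] + b[1])

Z = (10.0, math.pi/2)          # an impedance: magnitude 10, angle pi/2
I = (1.0, math.pi/6)           # a current: amplitude 1, phase pi/6
V = polar_mul(Z, I)
print(V[0], round(V[1], 4))    # 10.0 2.0944  (angle pi/2 + pi/6 = 2*pi/3)
```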

This is clumsy to write, and we can better represent impedance and sine-wave voltage/current with *complex numbers*.

**While we could invent a 2-vector space and define vector operations to satisfy the behavior of LTI components, complex numbers offer a convenient solution for representing impedance.**

Like a 2-vector, a complex number consists of two parts: a magnitude and an angle. A complex number $X$ with a magnitude $A$ and an angle $\phi$ is written as

$$ X = A\angle\phi $$

where $\angle$ reads as "angle". In the *complex plane*, $X$ is drawn at angle $\phi$ counterclockwise from the horizontal axis and distance $A$ from the origin.

This is the polar form of a complex number. The horizontal and vertical axes of the complex plane represent rectangular components of a complex number, respectively called the real and imaginary parts. We can write $X$ in rectangular form using some trigonometry:

$$ X = A\angle\phi = A \cos(\phi) + j \cdot A\sin(\phi) $$

where $j=\sqrt{-1}$ is the imaginary unit. This is similar to vector notation using unit vectors $\hat{x}$, $\hat{y}$, and $\hat{z}$.

Euler's Formula lets us write a complex number in a third form called the complex exponential form:

$$ X = A \cos(\phi) + j \cdot A\sin(\phi) = A e^{j\phi} $$

where $e=2.71828...$ is Euler's number. We can see an important property of complex numbers using the complex exponential form.

**Multiplying a complex number $X_1=A_1 e^{j\phi_1}$ by another complex number $X_2=A_2 e^{j\phi_2}$ multiplies the magnitude of $X_1$ by the magnitude of $X_2$ and shifts the phase of $X_1$ by the phase of $X_2$:**

$$ X_1 \cdot X_2 = A_1 e^{j\phi_1} A_2 e^{j\phi_2} = A_1 A_2 e^{j(\phi_1 + \phi_2)} $$

If we represent the sine-wave voltage, current, and impedance in the impedance equation using complex numbers, this property "automatically" multiplies the magnitudes and adds the phases. There is no need to define a special way to multiply the quantities.
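Python's standard-library `cmath` module demonstrates the property directly:

```
import cmath

X1 = cmath.rect(2.0, cmath.pi/3)   # magnitude 2, angle pi/3
X2 = cmath.rect(3.0, cmath.pi/6)   # magnitude 3, angle pi/6
P = X1 * X2

print(round(abs(P), 6))            # 6.0: magnitudes multiply
print(round(cmath.phase(P), 6))    # 1.570796: angles add to pi/2
```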

As complex numbers, we write the voltage $V$ across an LTI component, the current $I$ through the component, and the impedance $Z$ of the component as

$$ V = A_v e^{j\phi_v} $$

$$ I = A_i e^{j\phi_i} $$

$$ Z = |Z| e^{j\angle{Z}} $$

Then the impedance equation gives

$$ V = Z \cdot I = |Z|A_i e^{j(\angle{Z}+\phi_i)} $$

which is exactly the result we expect from our knowledge of the behavior of LTI components.

Using complex numbers and the previous results, we can now write the impedance of the basic passive components (resistors, inductors, and capacitors):

$$\boxed{ Z_R = R }$$

$$\boxed{ Z_L = j2\pi f L }$$

$$\boxed{ Z_C = \frac{1}{j2\pi f C} }$$

As expected, the resistor's impedance does not have an imaginary part ($j$ does not appear in the expression) since the voltage across and current through the component are in phase. By contrast, the impedances of the inductor and capacitor are entirely imaginary, indicating the $\pm \frac{\pi}{2}$ radian phase difference between the voltage and current sine waves. The inductor's and capacitor's impedances also vary with frequency, and being aware of this frequency-dependent behavior is handy when working with analog circuits.
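A short script makes the trend concrete by evaluating the boxed formulas at a few frequencies for the component values used earlier, 3 mH and 1 mF (the function names are my own):

```
import math

def Z_L(f, L):
    # Inductor impedance j*2*pi*f*L
    return complex(0, 2*math.pi*f*L)

def Z_C(f, C):
    # Capacitor impedance 1/(j*2*pi*f*C)
    return 1 / complex(0, 2*math.pi*f*C)

# |Z_L| grows with frequency while |Z_C| shrinks
for f in (10, 100, 1000):
    print(f, round(abs(Z_L(f, 3e-3)), 3), round(abs(Z_C(f, 1e-3)), 3))
```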

**The impedance of an inductor increases with frequency (it blocks high-frequency sine waves, but passes low-frequency sine waves) and the impedance of a capacitor decreases with frequency (it passes high-frequency sine waves, but blocks low-frequency sine waves.)**

At last we've arrived at a single quantity that efficiently describes the relationship between the sine-wave voltage and sine-wave current in general LTI circuit components (resistors, inductors, and capacitors, among others). Using impedance as a resistance-like quantity, we can apply Ohm's law and techniques like Thévenin equivalent circuits and the node-voltage method to examine LTI circuits.
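As a concrete payoff, the resistor voltage-divider formula carries over unchanged to an RC low-pass filter once complex impedances stand in for resistances (the component values and function name here are my own choices):

```
import cmath
import math

def lowpass_gain(f, R, C):
    # Voltage divider Zc / (R + Zc) for a series R feeding a shunt C
    Zc = 1 / complex(0, 2*math.pi*f*C)
    return Zc / (R + Zc)

R, C = 1000, 1e-6                    # 1 kOhm and 1 uF: cutoff near 159 Hz
H = lowpass_gain(159.155, R, C)      # evaluate at the cutoff frequency
print(round(abs(H), 3))              # 0.707: the familiar -3 dB point
print(round(math.degrees(cmath.phase(H)), 1))  # -45.0 degrees of phase lag
```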

**In the tangible sense, impedance is the quantity that relates the sine-wave voltage present across a component to the sine-wave current flowing through the component. Impedance generalizes the concept of resistance in Ohm's law to include elements such as inductors and capacitors, provided they are driven by sinusoidal voltages and currents.**

(The tangible qualifier means "physically observable" and distinguishes this flavor of impedance from another type that doesn't have a direct physical interpretation: "s domain" impedance defined in terms of the Laplace transform, which is useful when working with transfer functions.)

I hope this article helps illuminate the statements "Impedance is like resistance for capacitors and inductors," "Impedance is the measure of a circuit's opposition to current flow expressed as a complex quantity," and "Impedance is the effective resistance of a component to alternating current." All of these statements are in fact true. Thanks for reading!
