In a recent post I sang the praises of square waves as a way to get a heckuva lot of power (3 dB more power than a sine wave) into a sample-range-limited signal.  It's time to take them down a notch now.

A problem with square waves is that they're impossible to generate in the analog domain.  In fact, you can't even get close.

Signals generated in the analog domain are subject to physical laws regarding continuity (no teleportation) and whatnot.  A common way to model these is to express a periodic analog signal using a basis of sine waves whose frequencies are integer multiples of the signal's fundamental frequency.  Suppose I want to generate the square wave:

f(x) = -1,  -π < x < 0
        1,   0 < x < π

Graphed below are some approximations of this function using sine waves as a basis.  Note that only odd values of n appear in the partial sums of the Fourier series Σ (4/π)·sin(nx)/n.


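These partial sums are easy to compute directly.  A minimal sketch in Python, assuming the standard Fourier coefficients 4/(πn) for odd n (the function and variable names are mine):

```python
import math

def square_partial_sum(x, n):
    """Partial Fourier sum (4/pi) * sum of sin(k*x)/k over odd k <= n.

    This converges pointwise to the square wave that is -1 on (-pi, 0)
    and +1 on (0, pi).
    """
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, n + 1, 2))

# Away from the jump at 0, the sums settle toward +/-1 as n grows:
for n in (1, 9, 99, 999):
    print(n, square_partial_sum(math.pi / 2, n))
```

At x = π/2 the printed values approach 1, in line with the graphs.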
The sums converge to ±1 quite well, but there are definite "ears" near x = 0 where the partial sums overshoot.  The overshoot doesn't appear to die down as more terms are added.  A closeup of one of the "ears":


If anything, rather than dying down, the "ears" converge: max f_n → about 1.18, an overshoot of about 9% of the "jump" from -1 to 1.  (Dym and McKean, in their 1972 book Fourier Series and Integrals, get the 9% right but incorrectly assert that the convergence is to 1.09.)
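The limiting peak height can be checked numerically.  For the partial sums above (with the 4/(πn) coefficients), the first maximum of the sum with top harmonic n sits at x = π/(n+1), and the classical limit is (2/π)·Si(π) ≈ 1.17898, where Si is the sine integral.  A sketch:

```python
import math

def square_partial_sum(x, n):
    # (4/pi) * sum of sin(k*x)/k over odd k <= n
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, n + 1, 2))

# The first local maximum of the n-harmonic sum sits at x = pi/(n+1):
for n in (9, 99, 999, 9999):
    print(n, square_partial_sum(math.pi / (n + 1), n))

# Compare with (2/pi) * Si(pi), Si computed by a crude midpoint rule:
steps = 100_000
h = math.pi / steps
si_pi = sum(math.sin((i + 0.5) * h) / ((i + 0.5) * h) for i in range(steps)) * h
print((2 / math.pi) * si_pi)  # about 1.17898
```

The peak values stay near 1.179 no matter how large n gets.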

This mathematical phenomenon - the Gibbs phenomenon - is a good illustration of the difference between pointwise convergence of a sequence of functions and uniform convergence.

In this case, the sequence of partial sums converges pointwise to the square wave... for any given point x > 0 and ε > 0, the ear eventually moves to the left of x, and I can choose an N such that f_n(x) is within ε of 1 for all n > N...

... but the sequence does not converge uniformly to the square wave.  The following assertion is false: "for any given ε > 0, I can pick an N such that f_n(x) is within ε of 1 for all n > N and all x > 0."  This can be demonstrated by picking ε = 0.17, say.  For any n - even, say, n = 10^100 - there is an x close to 0 where f_n(x) > 1.17.
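Both halves of the argument can be watched numerically - a sketch, again assuming the 4/(πn) coefficients (helper name mine):

```python
import math

def square_partial_sum(x, n):
    # (4/pi) * sum of sin(k*x)/k over odd k <= n
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, n + 1, 2))

# Pointwise convergence: fix one x > 0; the error at that point shrinks.
for n in (9, 99, 999):
    print(n, abs(square_partial_sum(0.1, n) - 1))

# No uniform convergence: for every n, the peak near x = pi/(n+1) still
# overshoots 1.17, so no single N works for eps = 0.17 across all x > 0.
for n in (9, 99, 999):
    print(n, square_partial_sum(math.pi / (n + 1), n))
```

The first loop's errors head toward 0; the second loop's peaks never drop below 1.17 - the ear just slides toward 0.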