======================================================================== Cybernetics in the 3rd Millennium (C3M) --- Volume 6 Number 1, Jan. 2007 Alan B. Scrivener --- www.well.com/~abs --- mailto:abs@well.com ========================================================================

Everything Has To Go Somewhere

~ OR ~

Eigenvectors and You

[There was no November 2006 issue for a number of reasons, including a 4-week "hold" on some Apple laptop power supplies, heavy business travel, and the previously mentioned "other things I gotta' do."]


Reading maketh a full man; conference a ready man; and writing an exact man.
-- inscribed in the lobby of the Library of Congress

For those of you who are new subscribers, last time (vol. 5, num. 5, see archives below) I polled my readers on which topics they would like me to write about. Hoo boy. The readers have spoken. I tabulated all of your responses (thank you very much) and summarized them in an HTML file:

( www.well.com/~abs/Cyb/4.669211660910299067185320382047/votes.html )

The number one choice is, much to my astonishment, "Everything Has To Go Somewhere" ~ OR ~ "Eigenvectors and You." This is problematic because when I came up with that glib title I didn't know what Eigenvectors were. Sure, I USED to know, back in 1990, when I took a class in control engineering from UCLA Extension. But in the intervening 16 years the knowledge faded. I was bluffing. I didn't think you all would pick that one. Well, have no fear: I've jumped back into the subject, and with the help of some textbooks and the Internet I have come to understand the true meaning of Eigenvectors better than ever.


"This is some kind of plot, right?" [asks Slothrop.]

"Everything is some kind of plot, man," Bodine laughing.

"And yes but, the arrows are pointing all different ways," Solange illustrating with a dance of hands, red pointing fingervectors. Which is Slothrop's first news, out loud, that the Zone can sustain many other plots besides those polarized upon himself . . . that these are the els and buses of an enormous transit system here in the Racketenstadt [Rocketcity], more tangled even than Boston's -- and that by riding each branch the proper distance, knowing when to transfer, keeping some state of minimum grace though it might often look like he's headed the wrong way, this network of all plots might yet carry him to freedom. He understands that he should not be so paranoid of either Bodine or Solange, but ride instead their underground awhile, see where it takes him. . . .

-- Thomas Pynchon, 1973
"Gravity's Rainbow" (novel) p. 603
( www.amazon.com/exec/obidos/ASIN/0140188592/hip-20 )

First an aside: in researching this 'zine I wanted to use the above quote, but despite having a homemade index of the novel I couldn't locate the passage in the stream-of-consciousness text. All I remembered was Slothrop and the "fingervectors," but not whose fingers they were. I thought it was when Slothrop met Saure Bummer, the German cat burglar, or Der Springer, the film director turned black marketer. No dice. I called a few friends. TS said it was near the beginning when Slothrop met the Argentine anarchists who'd stolen the German U-boat. Nope. TB said it was in the dream sequence near the end, with all the Spy vs. Spy stuff. Couldn't spot it there either. Finally in desperation I asked my speed-reading wife to do a brute-force search, and she found it in the scene where Slothrop is in the baths with the German prostitutes and their clients. But along the way I did some Googling and made some interesting peripheral discoveries.
A Pynchon message board had some nice concise definitions of Eigenvectors:

( www.hyperarts.com/pynchon/v/extra/eigenvalue.html )

A reviewer ( www.eye.net/eye/issue/issue_05.15.97/plus/books.html ) of the novel "Mason & Dixon" (1997) by Thomas Pynchon ( www.amazon.com/exec/obidos/ASIN/0312423209/hip-20 ) mentioned his use of vectors and other mathematical abstractions as literary metaphors:

Pynchon approaches history with the eye of an engineer, on the lookout for vectors, forces and gradients. But it is not a mechanistic view. Like Gravity's Rainbow, Mason & Dixon is rich with metaphors like this one, the response of one of the narrator's cousins to his aside that astronomical measurements are a form of "celestial trigonometry, by which the telescope transports us out into the sky to the object we wish to examine." "A Vector of Desire," murmurs the boy. Pynchon is drawn to metaphors like this, equating human behavior or emotions to mathematical relationships, but in a sense that draws out either their apparent inevitability, or our frail hope that we might understand the human toll of our actions as easily as we look up a telephone number. Bearing in mind that the similarly diversion-prone Herman Melville wrote a very big book that is only somewhat about a big white whale, Mason & Dixon is an account of the lives of two men whose work is a quest for measurement, precision and observation, playing their parts in a larger search for knowledge of the world around and the world within.

And of course a few weeks ago his latest was published, "Against the Day" (2006) ( www.amazon.com/exec/obidos/ASIN/159420120X/hip-20 ) with a length of 1085 pages, and it concerns -- among other things -- high-dimensional vectors and their significance.
( www.complete-review.com/reviews/popus/pynchon.htm ) I have written about these in several other issues, including the two-parter on Wolfram, ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/c3m_0102.txt ) ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/c3m_0201.txt ) the one on the 2002 SIGGRAPH Conference, ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/c3m_0206.txt ) and the one on simulation. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/c3m_0502.txt )


Lobster sauce, though a necessary adjunct to turbot, is entirely unwholesome. I never ask for it without reluctance. I never take a second spoonful without a feeling of apprehension on the subject of possible nightmare. This naturally brings me to the subject of Mathematics,...

-- Lewis Carroll, quoted in
"Introduction to Continuous and Digital Control Systems" (book, 1968)
by Roberto Saucedo and Earl E. Schiring
( www.amazon.com/exec/obidos/ASIN/B000H4H4WG/hip-20 )

I've told the story before of how I wanted to learn systems theory in college and couldn't seem to find it anywhere, but I dropped linear algebra, and later stopped being a math major, because I didn't "get" what the point was of state spaces and ordinary differential equations (ODEs). What nobody told me was THIS WAS THE MATHEMATICAL LANGUAGE OF SYSTEMS THEORY! Duh! What a fiasco. It took me about 20 years to correct this mistake.

To do it the traditional way, one would learn arithmetic and geometry in grade school; algebra and analytical geometry plus some pre-calculus in high school; and then in college introductory calculus, calculus of multiple variables, calculus of complex variables, linear algebra, ODEs, and finally dynamical systems theory, by which time you would need to be an upper division math major to get into the classes. The tragic thing about this structure is that there are a lot of other students who could benefit from systems theory, if it wasn't so hard to get to. For example, pre-med students are only required to take introductory calculus (and usually a "bonehead" version of that!) But as medical students it would benefit them greatly to have their mental toolkits include mathematical models of systems -- in fact, some medical schools even teach these models, not counting on the universities to prepare the students.
But what I'm aiming at now is some practical advice for people who have finished their formal education, and just want to learn systems theory on their own.


If a scribe makes an error in the transcription of a royal edict, he shall be [text unintelligible].

-- "The Code of Hammurabi"
translated by Doug Kenny, National Lampoon

I used to tutor other students in math, and one of the first things I told them was the importance of legibility. You have to be able to read your notes. You have to be able to tell a scalar from a vector from a matrix, and a function from its derivative, especially when there are little dots or arrows or prime marks involved. I encourage people to learn the Greek alphabet, especially the lower-case letters, ( people.msoe.edu/~tritt/greek.html ) and be sure they can tell the lower case sigma, xi and zeta apart. (Note that lower case sigma is sometimes drawn more simply than in the above link, as in the "six sigma" logo.) ( decker.typepad.com/photos/uncategorized/sigma.jpg ) I also recommend learning calligraphy -- maybe take a class or get a book, but definitely get a pen, like the "Calligraphic Marker Style Pen, Medium Tip, Chisel Edge, Waterbase Black Ink" by Faber Castell/Sanford Ink Company. ( www.amazon.com/exec/obidos/ASIN/B0006YZQEA/hip-20 ) Practice drawing the Greek and Latin letters, and other mathematical symbols such as radical, del operator, integral, product sign and approximately equals. ( www-306.ibm.com/software/globalization/gcgid/arithspc.jsp ) When I was first learning calculus with some friends in the 12th grade, we had all read the science fiction novel "A Canticle for Leibowitz" (1959) by Walter M. Miller, ( www.amazon.com/exec/obidos/ASIN/0060892994/hip-20 ) in which, in the far future, after a nuclear holocaust, a Catholic monk named Francis faithfully copies a blueprint of an electronic circuit diagram, thinking it is a mysterious religious document, and then spends fifteen years making an illuminated manuscript of it.
( en.wikipedia.org/wiki/A_Canticle_for_Leibowitz ) This inspired us to -- on occasion -- don monks' robes and attempt to solve integrals using our calligraphic pens, as if it was some medieval religious ritual. Perhaps this is going a bit too far, "beyond the pale" as they say in Dublin, but I do recommend that if you study math you have a degree of reverence for the material.

Something else that has helped me get a handle on math has been writing computer programs. Mathematicians are often ambiguous, in the name of generality. Is N an integer or a real number? Is X a scalar or a vector? Is S real or complex? Is K a constant or a variable, or even a function? But in computer languages the ambiguity must be removed before the program can run. The notations in computer languages are simpler, too. Symbols can't wander over the page like a division line, radical, or giant parentheses. Each character occupies its own little glyph, sort of a Golden Rectangle, in a pre-allocated slot on the line, usually 80 characters wide. Some operators get their own symbols (to use examples from C):

+  addition
-  subtraction
*  multiplication
/  division
&  and
|  or

while others are expressed like functions, such as:

sin()   sine
cos()   cosine
tan()   tangent
sqrt()  square root
exp()   exponential
log()   logarithm

The Greek letters go away, but variables can have names with more than one letter, like lambda, and charactersPerLine, so the symbol-space is actually much larger. Spaces tend to be ignored, except inside names (where they are not allowed) and quoted strings (where they are treated as literals), and unlimited parentheses often save an otherwise difficult formula. You never really understand an algorithm or formula, in my humble opinion, until you program it. That's when you collide with all the hazards, where "the rubber meets the road."
For example, I've read many, many texts and web sites that say you can convert from x,y form to r,theta with:

r = sqrt(x^2 + y^2)
theta = tan^-1(y/x)

where tan^-1 is the "inverse tangent" function. Ignoring the problem when x = 0, I must point out that when you divide y by x, the sign information of both is garbled, because:

 1 /  1 =  1
 1 / -1 = -1
-1 /  1 = -1
-1 / -1 =  1

Whatever you pass to tan^-1(), it won't know what quadrant the number is in, and may be off by 180 degrees (Pi radians). ( classes.yale.edu/fractals/labs/AffTransf/AffTransfBackground/Angles.html ) What's needed is a function I call "untan" that takes both x and y as arguments, so it can do the quadrant calculation itself:

untan(x,y) = 0             if x > 0 and y = 0,
untan(x,y) = tan^-1(y/x)   if x > 0 and y > 0,
untan(x,y) = Pi/2          if x = 0 and y > 0,
etc.


Hamilton was looking for ways of extending complex numbers (which can be viewed as points on a 2-dimensional plane) to higher spatial dimensions. Hamilton could not do so for 3 dimensions: in fact later mathematicians showed that this would be impossible. Eventually Hamilton tried 4 dimensions and created quaternions. According to the story Hamilton told, on October 16 Hamilton was out walking along the Royal Canal in Dublin with his wife when the solution in the form of the equation

i^2 = j^2 = k^2 = ijk = -1

suddenly occurred to him; Hamilton then promptly carved this equation into the side of the nearby Broom Bridge (which Hamilton called Brougham Bridge). Since 1989, the National University of Ireland, Maynooth has organized a pilgrimage, where mathematicians take a walk from Dunsink observatory to the bridge where, unfortunately, no trace of the carving remains, though a stone plaque does commemorate the discovery.

-- Wikipedia entry for William Rowan Hamilton
( en.wikipedia.org/wiki/William_Rowan_Hamilton )

I've mentioned this before, but it bears repeating. The typical K-12 math curriculum tells students lies like: "You can't subtract a larger number from a smaller," and then a year or two later says: "Well, actually, you can." The kids get irate, and not without good reason. Nobody focuses on rules more than kids. We're messing with them when we pull these fast ones. It would be far better to say: "Here is a game we can play," and then next year say: "Here is another game." Nobody gets irate when you teach them how to play Rummy and Crazy Eights with the same deck. One way we get ourselves painted into a corner in our teaching of mathematics is when we insist that these mental constructs are DISCOVERED, not INVENTED. We seem to know better when it comes to games -- nobody claims Crazy Eights was discovered.
But by insisting that mathematics is timeless -- almost like a deity -- we obscure the fact that it is an arbitrary human creation, subject only to some constraints, like a poem which must rhyme. One of the few authors who gets this right is Wolfram (in his discussion of the search for extra-terrestrial intelligence, of all things). He argues that if we send the digits of Pi to the aliens, they won't understand them; the digits are too idiosyncratic to our approach to math. He advocates a search for all possible mathematical symbol systems, and finding the common elements to all of them, then sending THOSE to the aliens. So if you are one of the kids who was lied to in math class and never quite got over it, I'm here to reassure you that this isn't really about TRUTH, it's about CONSEQUENCES OF ASSUMPTIONS. And I invite you to play a new game...


"I-it's about the shape of the tunnels here, Master." [In the Mittelwerk, where V2 rockets were assembled under a mountain, safer from British bombers.] "...I based the design on the double lightning-stroke, the SS emblem..."

"But it's also a double integral sign, did you know that?"

"Ah. Yes, Summa, Summa, as Leibnitz said." ...but [the architect's] genius was to be fatally receptive to imagery associated with the Rocket. In the static space of the architect, he might've used a double integral now and then, early in his career, to find volumes under surfaces whose equations were known, -- masses, moments, centers of gravity. But it's been years since he had to do anything that basic. Most of his calculating these days is with marks and pfennigs, not functions of idealistic r and theta, naive x and y. . . . But in the dynamic space of the living Rocket, the double integral has a different meaning. To integrate here is to operate on a rate of change so that time falls away: change is stilled. . . . "Meters per second" will integrate to meters. The moving vehicle is frozen, in space, to become architecture, timeless. It was never launched. It will never fall. ...the Rocket, on its own side of the flight, sensed acceleration first. Men, tracking it, sensed position or distance first. To get distance from acceleration, the rocket had to integrate twice -- needed a moving coil, transformers, electrolytic cell, bridge of diodes, one tetrode (an extra grid to screen out capacative coupling inside the tube), an elaborate dance of design precautions to get what human eyes saw first of all -- the distance along the flight path.

-- Thomas Pynchon, 1973
"Gravity's Rainbow" (novel)
( www.amazon.com/exec/obidos/ASIN/0140188592/hip-20 )

To add insult to injury, after lying about the rules your teachers mostly did a crummy job of playing the game with you. I used to joke that in public school they inoculated you against knowledge, so you wouldn't catch it later in life.
But an even better understanding of the educational system was provided by James Herndon in "How To Survive In Your Native Land" (1971). ( www.amazon.com/exec/obidos/ASIN/0671230271/hip-20 ) He said the purpose of school was not to teach but to separate the sheep from the goats. It doesn't matter if it's badly taught; the smartest kids will learn it anyway. Higher education is assumed to be in short supply, and only a few will earn their way in. Of course, this system is ill-equipped to educate in the Internet Age, when all levels of education are PLENTIFUL.

What I would propose is getting some visually programmed systems simulation software, like Stella, which is good for learning. Its price ranges from $1,899.00 for a commercial user down to $59.00 for a K-12 student, so it seems affordable, especially if it can be used for years. ( www.iseesystems.com/softwares/Education/StellaSoftware.aspx ) Teach students to model simple systems visually, and then let them play with the resulting simulations, adjusting parameters in real time as the models run. They WILL learn systems theory, and I'll bet you can do it with secondary school kids. ( maps.unomaha.edu/maher/GEOL2300/week11/geol2300Stella/introtoStella.html ) Let them learn the equations later, in high school.


When they went to the movies he would fall asleep. He fell asleep during Nibelungen. He missed Attila the Hun roaring in from the east to wipe out the Burgundians. Franz loved films but this is how he watched them, nodding in and out of sleep. "You're the cause-and-effect man," she cried. How did he connect together the fragments he saw when his eyes were open?

-- Thomas Pynchon, 1973
"Gravity's Rainbow" (novel)
( www.amazon.com/exec/obidos/ASIN/0140188592/hip-20 )

Oh, I forgot to mention, I'm combining this topic with two others: "Calculus Without Proofs In the Digital Age" and "Visualizing Conformal Mapping With Java Applets" which seem to flow together (at least in my mind) with the Eigenvector stuff. The latter topic came about because I decided I needed another dose of rigor in my life, and so began trying to read (for the second time) "Complex Variables and the Laplace Transform for Engineers" (1961) by Wilbur R. LePage. ( www.amazon.com/exec/obidos/ASIN/0486639266/hip-20 ) It was tough sledding. I can tell you that LePage is NOT an engineer himself. It seems written from a mathematician's point of view. I would've preferred fewer proofs and more example problems. Because of the difficulty I was having, I frequently fell asleep while reading. My mind just switched off. I kept at it though, a half page at a time, and managed to finish the book, only partly comprehending what I'd read. One Sunday afternoon I was napping thusly after reading about conformal mapping, and how it is usually pictured, and I dreamed a new way to visualize the mapping algorithms. I dreamed it in detail, down to the C code to create the data. I even had some variable names chosen. I dreamed the same dream several times, in vivid detail. My unconscious wanted me to get this. Late in the afternoon I woke up, had some iced coffee, and wrote the C code. It worked as I had dreamed.
( www.well.com/~abs/Cyb/4.669211660910299067185320382047/conformal_mappings.jpg ) (Originally, because I was in a hurry to see the results, I produced data in a vector format for AVS software, but I later modified it to also produce vector files in a Wavefront format that could be parsed by a Java Applet, so I could share the result. More on this later.) Enjoy the pictures for now, and presently I will explain what conformal mapping is, after some prerequisites.


I liked the stuff on digital video, that made sense, but the whole thing about "the set of all possible systems" went ... [gestures of something flying over head].

-- a 'zine reader

I got feedback that my explanation of systems theory on a checkerboard -- in C3M Vol. 5 Num. 2, Mar. 2006 "Even Better Than the Real Thing" ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/c3m_0502.txt ) -- wasn't as clear as I'd hoped, so I'm going to try again. For those of you with a background in Information Science, what I'm describing is a FINITE STATE MACHINE. ( en.wikipedia.org/wiki/Finite_state_machine ) For the rest of you, what I'm describing is similar to the children's game "Chutes and Ladders." ( www.amazon.com/exec/obidos/ASIN/B00000DMF6/hip-20 ) I think my description last time suffered mostly from a lack of illustrations. So here it is, redone, with pictures. (I tried to hand-draw these at first, but then resorted to computer-generated images because in most cases it was faster. Note the arrows are missing their heads; I'm sure you can figure out where they go.)

A Very Simple Game

Have you ever seen the prank where you hand someone a card that says "how to keep an idiot occupied for hours (see over)" printed on both sides? Well this idiotic game is like that, only it's played on a checkerboard. Each square is numbered 1 to 64. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/checkerboard.html ) On each square is a small card that says something like "go to square 18" or some other number from 1 to 64. In each round of the game, there is a different set of cards on the checkerboard. You play by placing your marker (perhaps a miniature Empire State Building) on one of the squares (called the 'current state of the system'), and then following the instructions on the cards one after another. Imagine in one round every card says "go to square 1" and so clearly you have one square that you always end up on, and then you stay there.
In systems theory this is called an "attractor." ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/cb1.jpg ) Imagine if in another round the left half of the board pointed to square 1, and the right half pointed to the opposite corner, square 64. Now the state space is divided into two "basins" each with its own attractor. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/cb2.jpg ) Or imagine if each square pointed to the one above or to the right or both, until all jumps ended up on the top or right side. The square in the lower left corner with so many jumps leading away from its vicinity is called a "repellor." ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/cb3.jpg ) (The purple lines going off the board up and to the right are a mistake; please disregard.) Or imagine that all squares in the interior point to an edge square, and all the edge squares are joined in a chain that goes around the perimeter clockwise (i.e., on the bottom row each square points to the one to the left, meanwhile on the left edge each square points to the one above it, and so on). Now we have an "orbit" which in this case is also an attractor. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/cb4.jpg ) It is amazing the number of distinctions that can be drawn by studying this idiotic little game. This approach is largely the one in Ross Ashby's classic "An Introduction to Cybernetics" (book, 1956) ( www.amazon.com/exec/obidos/ASIN/0416683002/hip-20 ) which is back in print and also free on-line. ( pcp.lanl.gov/ASHBBOOK.html ) And . . . he continues the analogy while generalizing to the continuum (infinitely many states) thereby deriving the whole of cybernetics. I have reproduced pages 22 & 23 from Ashby.
( www.well.com/~abs/Cyb/4.669211660910299067185320382047/Ashby23.jpg ) Kindly disregard the sentence fragment that starts in the middle with: "an ants' colony we might observe all the changes that follow the placing of a piece of meat nearby" (intriguing though that fragment may be), and begin reading where it says: Suppose, for definiteness, we have the transformation

U:  |  A  B  C  D  E
    V  D  A  E  D  D

(the vertical bar over the V is my ASCII attempt at Ashby's down-arrow). In Ashby's notation, this defines a TRANSFORM named "U" that takes an initial condition of A, B, C, D or E (the STATE of the system) and modifies it as indicated. This differs from my chessboard representation in that the states are not in a 2-dimensional grid (8x8) but instead a 1-dimensional array, like half an egg carton, or one row of the chessboard. But it doesn't matter; all the ideas and conclusions you can derive are the same. The only other thing you need to know to understand the page is the notion that applying a transform repeatedly (ITERATING it) is represented by U^n, where n is the number of iterations. (You don't have to do the problems; you can stop with the paragraph that ends: These matters obviously have some relation to what is meant by "stability", to which we shall come in Chapter 5. It is almost breathtaking how the core ideas of systems theory are presented in these two pages.)


A function consists of two sets, a domain-set and a range-set, and a function-machine that follows these rules:

RULE 1. The function machine can only process things in the domain set and only produce things in the range-set.

RULE 2. The function-machine is unambiguous: the same input always produces the same output.

-- Swann & Johnson, 1975
"Professor E. McSquared's Original, Fantastic and Highly Edifying Calculus Primer"
( www.amazon.com/exec/obidos/ASIN/0913232173/hip-20 )

In my searches for better ways to learn (what Marshall Thurber called "superlearning") I happened upon a comic book called "Professor E. McSquared's Original, Fantastic and Highly Edifying Calculus Primer" -- I think it was in the Whole Earth Review. It teaches standard first year college calculus (the calculus of Newton with notation by Leibnitz) but in a witty and charming way, with some funny characters. It follows the standard sequence of sets, inequalities, limits, derivatives, and integrals, but with few proofs, and it does a particularly good job of making LIMITS easy to understand. Usually they are almost baroque in their definition, but Prof. McSquared compares them to a "guarantee" that you can find a delta for every epsilon. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/EMC0002.jpg ) There is also a clever and funny representation of FUNCTIONS, as two robots wearing sneakers named Grover and Alfred, representing g() and f(), with each of them having a chute for a number to go in, and a trap door where another number comes out. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/EMC0001.jpg ) Later (if memory serves) this concept is extended to a TRANSFORM, which is a bigger robot which takes in a function robot and puts out a different function robot. This comic is one of the best resources available for learning calculus using the symbolic, analytic approach.
But just this year I realized you could put together a much simpler calculus curriculum if you just dealt with the digital case. Without infinitesimals, without classical analysis, without proofs, it becomes almost trivial. For example, our old buddy Galileo, in the process of inventing scientific method, measured falling bodies and concluded that, absent air friction, on the Earth's surface, a body falls 16*t^2 feet in t seconds. (Read the expression as "16 times t squared.") Today we could do this experiment with a laser rangefinder aimed straight down at a falling ball bearing and sampled once per second. Rounding off all fractional parts we would probably get these measurements in feet:

0  16  64  144  256  400

What we have here is a FUNCTION of time. Let's call it g (for gravity, or maybe Grover the robot with sneakers). It's a function of time t, so we could write it as:

g(t) = 16*t^2

IF WE KNEW THE FORMULA. Otherwise all we know is g(0) = 0, g(1) = 16, g(2) = 64, and so on up to g(5) = 400 -- all the data we have measured. In the process of inventing the calculus, Newton and Leibnitz arrived at the analytical conclusion that you can TRANSFORM g(t) into another function, g'(t) (say "g prime of t") called the DERIVATIVE of g, representing the RATE OF CHANGE of g, which is also the SLOPE of g. They formulated rules for taking derivatives, which in this case transforms:

g(t) = 16*t^2

into:

g'(t) = 32*t

If you are representing the POSITION of an object in feet at time t in seconds with g(t), then g'(t) is the VELOCITY of the object at time t in feet per second. ( www.glenbrook.k12.il.us/gbssci/phys/CLass/1DKin/U1L3a.html ) Or, you can skip the analysis, and just go through the list subtracting each number from the next. This yields:

16  48  80  112  144

Unfortunately it's the wrong answer, but the error is always 16. If you add 16 to each number in the list you get:

32  64  96  128  160

which is the right answer.
This can be OK because the problem vanishes in the next derivative (as we shall see) and you can make the error smaller by making the sample time smaller, say tenths of seconds. Next Newton and Leibnitz defined the SECOND DERIVATIVE, in this case g''(t) = 32 In representing objects this is the ACCELERATION, or RATE OF CHANGE OF RATE OF CHANGE. In our approximate digital representation, we just take the above list and again subtract each number from the next, getting: 32 32 32 32 Aha! A free falling body has CONSTANT ACCELERATION. We can also run the process the other way. If we start with the second derivative list (of all 32s) and add each number into a running total as we go, we get the first derivative again. If we do it again we get the original list of positions. This process is called finding the INTEGRAL, and it is the opposite (or "inverse transform") of the derivative. It represents the area under the curve, and can be used to find averages, among many other uses. ( www.wpi.edu/Academics/Depts/Chemistry/Courses/General/figD-2.html ) So, what's wrong with the digital approach? Well, as we saw, there are errors. We have no idea of the derivatives and integrals outside the range in which we have samples. If we need more detailed information (about intervals smaller than the step size) we don't have it. But all of these problems are made less severe if there are more samples of smaller sizes over a larger range. And Moore's Law (computing power doubles every 18 months) keeps making this cheaper and easier. What's nice about the digital approach is it's easy to understand. Think of it as mathematical First Aid. As a Boy Scout I was taught how to apply a band-aid and when to call a real doctor. We should be teaching our students how to do quick-and-dirty digital calculus, and when to call a real mathematician.


A differential equation gives the rule by which the state of the system determines the changes of state of the system, which then determine its future evolution.

-- Alan Garfinkel, 1983
"A Mathematics for Physiology" in "American Journal of Physiology"
( www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=6624944&dopt=Abstract )
( www.as.wm.edu/Faculty/DelNegro/cbm/GarfinkelAJP1983.pdf )

Once you have a grasp of the concept of a derivative, such as g'(t), it becomes possible to describe systems theory as envisioned by Newton to solve the problem of computing planetary orbits. He invented what we now call ordinary differential equations (ODEs), which he called "fluxions." His notation allowed us to write "the rate of change of g is zero" as:

g' = 0

(He used a dot over the variable instead of a "prime" mark, but I don't have that key on my keyboard.) This then is an ODE, and we can solve it by guessing a function that meets the requirement, such as g = k, where k is some constant, like 32. Note that Newton has compressed the g(t) notation by taking out the (t). In many bold moves in math, notation has been made more minimal. (The man was a genius, what can I say?) In general we will be given

g' = {some expression of g and/or t}

and we will SOLVE the ODE by finding (by guessing if necessary) a g that fits for all t. A more complex ODE would be:

g' = g

At first it seems like a mind-bender: the rate of change of g IS g. What function is its own slope? Well, one answer is g = 0. If g is always zero, it never changes, so its rate of change is zero. But Mr. Genius "I'm Isaac Newton and You're Not" wants the complete answer, an EXPONENTIAL function, some base raised to the power t:

g = e^t

It turns out the base is an irrational number, about 2.7, called e or "Euler's Number." ( en.wikipedia.org/wiki/E_%28mathematical_constant%29 ) What? Well, don't worry about that. Once again, let's throw out analysis and take the digital approach.
Instead of looking at

g' = {some expression of g and/or t}

as a guessing game (find the function g that fits for all t) think of it as a set of instructions for deriving the values from any INITIAL CONDITION. For example, if we have:

g' = 0

and know that g at time t = 0 is 13, then the solution is:

g = 13

The value of g is 13 for all t, and its rate of change is zero everywhere. We generate the values of g, called INTEGRATING the ODE, by adding zero to 13 over and over. If we have:

g' = g

and we know g = 1 at time t = 0, then we get g at t = 1 by adding the rate of change at time zero, 1, to the total, 1, to get 2. We are following the instructions, what Garfinkel called the change rules, to integrate the ODE. Next, to get g at time t = 2, we take the value 2 at time t = 1, add the rate of change 2 at time t = 1, and get 4. Continuing we get the series:

1 2 4 8 16 32 ...

an exponential series! The base is 2 instead of e, but this error can be made arbitrarily small by decreasing the step size. This technique can be generalized to multiple equations of multiple variables. You start with a VECTOR which is the initial condition, and use the change rules to change each value at each time step. For example, take the system of equations:

a' = b
b' = -a

Analysis shows that any sinusoidal function for a, such as sine or cosine (with angle measured in radians), will satisfy these ODEs. ( en.wikipedia.org/wiki/Trigonometric_functions#Definitions_via_differential_equations ) Or you can take the digital approach again and just integrate the formulas: at each time step new a = old a + old b, and new b = old b - old a. Starting with initial conditions of 0 and 1, this gives the series of (a, b) pairs:

(0, 1) (1, 1) (2, 0) (2, -2) (0, -4) (-4, -4) (-8, 0) (-8, 8) (0, 16) (16, 16) (32, 0) (32, -32) (0, -64) (-64, -64) (-128, 0) (-128, 128) (0, 256) (256, 256) (512, 0)

Well, things are exploding here, in what engineers call "hunting" or "overcorrection." 
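The change rules above can be followed mechanically by a few lines of code. This is a sketch (what numerical analysts call forward Euler integration), not the author's implementation; with step size dt = 1 it reproduces the exploding series exactly, and a smaller dt hugs the true sine and cosine much more closely.

```python
# Digitally integrating the change rules a' = b, b' = -a.
# With step size dt = 1 this reproduces the exploding series above;
# shrinking dt gives much better approximations of sine waves.
def euler_oscillator(steps, dt=1.0, a=0.0, b=1.0):
    trajectory = [(a, b)]
    for _ in range(steps):
        # new a = old a + old b * dt, new b = old b - old a * dt
        a, b = a + b * dt, b - a * dt
        trajectory.append((a, b))
    return trajectory
```

Calling euler_oscillator(18) generates the nineteen (a, b) pairs listed above, divergence and all.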
( www.well.com/~abs/Cyb/4.669211660910299067185320382047/sin_approx1.jpg ) But, again, if you reduce the step size the problem becomes less severe. With only ten times as many steps the equations begin to take on pretty good approximations of sine waves. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/sin_approx2.jpg ) The textbook example of this type of system from control theory is a MECHANICAL OSCILLATOR, such as a pendulum with a long arm relative to its swing path, or a mass on a frictionless table connected to a wall with a linear spring. Call the FORCE on the mass f. Newton said force = mass times acceleration, or f=ma. Since mass in this problem doesn't change, force is proportional to acceleration. Acceleration is defined as the rate of change of velocity, v, which is the rate of change of position, p. And the force due to the spring is equal to the negative position -p (the farther off center you are the more the force pushes you back) times a spring constant k. Setting m and k to 1 for simplicity, we have:

f = a
p' = v
v' = a
f = -p

Simplifying we have:

p' = v
v' = -p

which is our system of ODEs for harmonic motion. Huzzah! Once again we have seen systems that "do nothing, oscillate or blow up." These are the types of equations modeled by a tool like Stella. ( www.iseesystems.com/softwares/Education/StellaSoftware.aspx ) This is the language of systems theory. If you use Newton's analysis it is very complex, and only a few of the systems of ODEs are solvable. (But it is beautiful, elegant, and powerful when it DOES work.) If you use the digital approach it is simple, easy to compute, works for every set of equations, and can be made to have minimal error with brute force, but it only gives you an answer for a given initial condition and time step. You don't get the closed form solution that defines g for all t. Like I said, know when to call a mathematician.


Consider the following subtraction problem, which I will put up here: 342 minus 173. Now, remember how we used to do that, 3 from 2 is 9 carry the 1 and if you're under 35 or went to a private school you say 7 from 3 is 6 but if you're over 35 and went to a public school you say 8 from 4 is 6, carry the 1, and so you have 169. -- Tom Lehrer, 1965 "New Math" (comedy song) on the album "That Was the Year That Was" ( www.amazon.com/exec/obidos/ASIN/B000002KO7/hip-20 ) The other important mathematical tool for systems theory is what they call COMPLEX ANALYSIS, which is math done with those weird and wacky IMAGINARY NUMBERS. Actually, it's done with good old REAL NUMBERS multiplied by that ghostly supposed quantity, the square root of minus one, and then added to other real numbers, to make numbers called COMPLEX. Mathematicians often write them in the form:

z = a + b*i

But engineers tend to write:

s = sigma + j*omega

They use j instead of i because in their world i already denotes electric current (and j is easier to read if you're writing lots and lots of equations, as engineers sometimes do). They use sigma and omega to draw attention to the fact that when you raise e (that pesky Euler number) to the power of a complex number, the REAL PART, sigma, acts like an exponent, while the IMAGINARY PART, omega, acts like an angle, resulting in sinusoidal variation at some frequency and amplitude. This is because of the Euler Formula, ( en.wikipedia.org/wiki/Euler%27s_formula ) perhaps one of the most Baroque in mathematics, which says:

e^(j*omega) = cos(omega) + j*sin(omega)

And this really reveals the power of complex numbers, and why they are so useful to engineers. They allow a compact method for expressing harmonic systems as well as compounding systems, and hybrids of the two, and do so in a way that is easy to do computations on. And as bizarre as these numbers may seem (and "you ain't seen nothin yet"), all of their behaviors follow logically from the simple assumption that:

j = sqrt(-1)
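You can check the Euler Formula numerically in a couple of lines. This is just an illustrative sketch; Python spells the imaginary unit 1j, much like the engineers' j, and its cmath library handles complex exponentials.

```python
import cmath
import math

# A quick numerical check of the Euler Formula quoted above,
# e^(j*omega) = cos(omega) + j*sin(omega).
omega = 0.75  # an arbitrary angle in radians
lhs = cmath.exp(1j * omega)
rhs = complex(math.cos(omega), math.sin(omega))
difference = abs(lhs - rhs)  # essentially zero, up to rounding
```

Try any omega you like; the two sides agree to within floating-point rounding.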


In cartography, a conformal map projection is a map projection that preserves the angles at all but a finite number of points. The scale depends on location, but not on direction. Examples include the Mercator projection, the stereographic projection and the Lambert conformal conic projection. -- Wikipedia entry on "conformal" ( en.wikipedia.org/wiki/Conformal ) One of the chapters in "Complex Variables and the Laplace Transform for Engineers" (1961) by Wilbur R. LePage ( www.amazon.com/exec/obidos/ASIN/0486639266/hip-20 ) dealt with CONFORMAL MAPPING. This is a family of functions that take complex numbers as input as well as produce them as output. In other words g(s) has real and imaginary parts that are each some function of sigma and omega, the real and imaginary parts of s. One of the conceptual problems with such a COMPLEX FUNCTION is that it is hard to draw a picture of it. A simple function with real inputs and outputs can be drawn with a line graph. For example, the falling body distance function g(t) = t^2 is graphed as a familiar parabola. ( www.sparknotes.com/math/algebra1/quadratics/section1.html ) Even multiple functions of real values can be plotted together, on a line graph with different line styles or colors to identify each plot. ( www.teleologic.com/archives/fed-spending-graph.jpg ) But a complex function has too much information for a line graph. Each possible sigma and omega input has to generate a sigma and omega as output. The most commonly used technique to visualize these critters is to use a uniform grid as input and look at how it is distorted. ( mathews.ecs.fullerton.edu/fofz/conformal/c10.htm ) ( math.fullerton.edu/mathews/c2003/complexfunreciprocal/ComplexFunReciprocalMod/Images/ComplexFunReciprocalMod_gr_166.gif ) Another technique is to take a recognizable image and perform the mapping on it. 
( www.lactamme.polytechnique.fr/Mosaic/images/JFC.61.D/display.html ) This is a popular image processing technique, for creating surreal distortions of photographs ( www.flickr.com/photos/sbprzd/sets/72157594172266668 ) as well as more abstract patterns. ( www.brainjam.ca/fractals.html )


"We didn't know we couldn't do it." -- my father, 12/16/2006 telling about the early days of Pacific Southwest Airlines (PSA) While reading about conformal mapping, I began to think about a simple way to visualize complex functions. I've done a lot of work with interactive 3D graphics, so I find it easy to think in 3D. Even old-fashioned 3D vector graphics, now called wire-frame, can be remarkably rich in subtlety. And they are computationally cheap, making them real-time on the slowest of modern chips. So I came up with what seemed like an obvious idea: visualizing the mapping as connecting one chessboard with another using straight lines. I imagine it as being like sticks of uncooked spaghetti, each connecting a sigma and omega on the input board to another pair on the output board. Simple but potentially intricate. Then, as I mentioned, I fell asleep and dreamed I was writing C code to create the data files. I initially created data in an AVS vector format to look at the data on a local computer, but it was always my intention to create data for a Java applet that users could load from the web. Well, I finished the conversion and posted the result. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/Java/conformal.html ) Assuming the applet works in your browser, you can grab each 3D wire-frame with the left mouse button and rotate it, to view it from any angle. This should give a pretty good idea of the 3D structure of the set of lines in each case. I have provided a small menagerie of mappings. You will see that the IDENTITY function: g(s) = s has each line go straight across, staying horizontal, because each s maps to s. The TRANSLATE function: g(s) = s + c where c is a complex constant, has each line at the same angle and orientation to the chessboards, staying parallel. The SCALE function: g(s) = s*r where r is a real value, has the lines spreading out, diverging as they move away from the source chessboard. 
The ROTATE function:

g(s) = s*j

which uses the fact that multiplying by j rotates a complex point by 90 degrees counterclockwise, seems to create a curving spiral; it's hard to remember that each line is straight. The SQUARE function:

g(s) = s^2

is interesting in that it visually shows the fact that both s and -s, when squared, yield the same number s^2. Each destination point has two lines leading to it, and the shape is symmetric about the j axis. The INVERSE function:

g(s) = 1/s

(where s is not the complex origin, 0 + 0*j, usually just called 0) is very intricate. If my grid wasn't sampled at intervals of 0.5, but instead with a smaller sample size, the lines would diverge even more. I don't have proof yet but I'm convinced this technique can contribute to an understanding of complex functions and mappings.
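The spaghetti data itself is easy to generate. Here is a hypothetical sketch (not the author's C code or Java applet): one straight line per grid point, from s on the input chessboard to g(s) on the output chessboard, sampled at intervals of 0.5 as in the text.

```python
# Generate "spaghetti" endpoints for a conformal mapping g: each grid
# point s = sigma + j*omega maps to g(s), giving one straight stick
# from the input chessboard to the output chessboard.
def spaghetti(g, lo=-2.0, hi=2.0, step=0.5):
    lines = []
    n = int(round((hi - lo) / step)) + 1
    for i in range(n):
        for k in range(n):
            s = complex(lo + i * step, lo + k * step)
            try:
                lines.append((s, g(s)))  # one stick of spaghetti
            except ZeroDivisionError:
                pass  # the INVERSE function's warranty excludes s = 0
    return lines

square = spaghetti(lambda s: s * s)   # both s and -s land on s^2
inverse = spaghetti(lambda s: 1 / s)  # origin skipped: 80 sticks, not 81
```

Feeding each (input, output) pair to a 3D line renderer reproduces the wire-frame idea described above.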


Black holes are where God divides by zero. -- Steve Wright I remember while reading LePage I got frustrated with something that I find in a lot of math books. I had a hard time seeing what the point was. Not of the whole book, but of each individual section. I couldn't tell hypothesis from conclusion, proof from example, and goals from means to a goal (e.g., theorems from lemmas). It seemed like one long narrative of notations and consequences with very little connection to the problem-solving that an engineer would need to do. The author did spend a lot of time on the convergence of transforms such as the Laplace Transform, that is, under what conditions the equations are useful, and when they "blow up" and produce useless infinities. I recognized that this was important. It's like the "warranty" for a mathematical technique: regions of non-convergence "void the warranty." This got me to thinking about how I would like to be taught math. I realized I want a vast, hyperlinked body of knowledge, modeled after the "Encyclopedia Galactica" ( en.wikipedia.org/wiki/Encyclopedia_Galactica ) in Isaac Asimov's "Foundation" trilogy, ( www.amazon.com/exec/obidos/ASIN/0739444050/hip-20 ) only focused on math, an "Encyclopedia Mathematica." Then I remembered what Douglas Adams wrote in "The Hitch Hiker's Guide to the Galaxy" (book, 1979) ( www.amazon.com/exec/obidos/ASIN/0517226952/hip-20 ) ( www.mindgazer.org/dontpanic/thehitch.htm ) about the "Encyclopedia Galactica" and its fate: In many of the more relaxed civilizations on the Outer Eastern Rim of the Galaxy, the Hitch Hiker's Guide has already supplanted the great Encyclopedia Galactica as the standard repository of all knowledge and wisdom, for though it has many omissions and contains much that is apocryphal, or at least wildly inaccurate, it scores over the older, more pedestrian work in two important respects. 
First, it is slightly cheaper; and secondly it has the words DON'T PANIC inscribed in large friendly letters on its cover. So perhaps a more casual, less imposing name would be more inviting, like "The Websurfer's Guide to Mathematics." I see each page describing a tool, like the Quadratic Equation ( en.wikipedia.org/wiki/Quadratic ) which solves:

a*x^2 + b*x + c = 0

as:

x = (-b +/- sqrt(b^2 - 4*a*c))/(2*a)

First and foremost would be the WARRANTY for the tool. This one's warranty would say that a, b and c must be real numbers, and 'a' cannot be zero, because then when we divide by 2*a it's division by zero. Also, each tool should come with PREREQUISITES, and a QUIZ, so you can tell if you are ready to understand it, and know where to go first if you're not ready. I would want each tool to come with a proof, ON A SEPARATE PAGE, so readers could be free to ignore it. But if they want to dig in, each step in the proof should hyperlink to the RULE that justifies it. And each tool needs EXAMPLES WITH NUMBERS. I remember reading texts that cover linear programming, such as "Introduction to Operations Research" (1967-2001) by Hillier & Lieberman, ( www.amazon.com/exec/obidos/ASIN/0071181636/hip-20 ) and encountering typical production problems something like this: We are producing three products, p1, p2, and p3, and they sell for $2, $3, and $8 respectively. Our goal is to maximize our revenues. Each product has a per unit production cost of $1, $2, and $5, respectively, and we have a budget of $300. We have demand for the products that requires that we produce a combination of at least 50 units of p1 and p2. Also we have exactly 400 hours of available production time and each unit requires 2, 4, and 5 hours of production time, respectively. (The numbers p1, p2 and p3 must all be greater than or equal to zero.) What mix of products will yield the highest revenue? 
The texts would then show how to set the problem up in a commercial linear programming solver program (of which there are several, including the Microsoft Excel spreadsheet program!) and let it crank out the solution. But THEY DIDN'T GIVE THE ANSWER! (By the way, in this case it is p1 = 100, p2 = 0, and p3 = 40, which has a value of $520.) For the example of the Quadratic Equation, a simple numerical example would be for a = 1, b = 2 and c = 1:

x^2 + 2*x + 1 = 0

therefore:

x = (-2 +/- sqrt(2^2 - 4*1*1))/(2*1) = (-2 +/- 0)/2 = -2/2 = -1

Plugging -1 back into the equation confirms the answer. Some history of the tool, again on a separate link, would be nice. Lastly I would like to see some type of interactive visualization of each tool. The Java Applets I wrote to visualize conformal mapping would be an example of this. For the Quadratic Equation I did a quick prototype that plots in a 3D space formed by a, b and c, and draws a sphere with radius proportional to x. Green spheres are the first real root, red are the second real root, and cyan and yellow are pairs of complex roots (which always occur in conjugate pairs) where sphere radius is proportional to the complex number's radius, r, or distance from the origin. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/quadratic_roots ) To complete this I would need to make it interactive. The history of the evolution of our big brains, as well as my own experience teaching visualization, indicates that hand-eye correlation is important to learning.
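A "tool page" like the one imagined above is easy to prototype. This is a sketch, not part of any actual encyclopedia: the quadratic formula with its WARRANTY enforced up front, and with complex square roots allowed so the conjugate-pair roots come through too.

```python
import cmath

# The Quadratic Equation as a "tool with a warranty": a, b, c real,
# and a nonzero (otherwise dividing by 2*a is division by zero).
def quadratic_roots(a, b, c):
    if a == 0:
        raise ValueError("warranty void: a = 0 means division by zero")
    disc = cmath.sqrt(b * b - 4 * a * c)  # complex roots welcome
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

# The worked example from the text: x^2 + 2*x + 1 = 0 has the
# double root x = -1.
roots = quadratic_roots(1, 2, 1)
```

For a = 1, b = 0, c = 1 (that is, x^2 + 1 = 0) the same tool returns the conjugate pair j and -j.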


A gangster who loved to bet at the racetrack kidnapped a chemist, a mathematician, and a physicist to force them to find ways for him to make money at the track. He gave them all a month and he threatened to kill all three if they didn't come up with anything useful, then he locked them up in labs. The month expired, and the gangster first went to the chemist and said, "So, what do ya got for me?" The chemist said, "I've created this new variation on amphetamines that there's no test for because it's new. Give this to the horse before the race and it'll make him run faster, and at least for a while it'll be undetectable." The gangster said, "Great, go stand over there and wait." Then the gangster went to the mathematician and said, "So, what do ya got for me?" The mathematician said, "I've found some flaws in the way tracks calculate the betting odds. If you follow these instructions, it'll increase your chances of walking away a winner." The gangster said "Great, go stand over there and wait." At last the gangster went to the physicist and said, "So, what do ya got for me?" And the physicist said, "Consider a spherical horse in simple harmonic motion..." -- "Lets post nerdy jokes" ( www.world4ch.org/read.php/sci/1137494383/l40 ) Having slogged through LePage I picked up another book lying around the house, "Introduction to Continuous and Digital Control Systems" (book, 1968) by Roberto Saucedo and Earl E. Schiring, ( www.amazon.com/exec/obidos/ASIN/B000H4H4WG/hip-20 ) which was given to me years ago by a friend I have since lost track of, last seen working at Electronic Arts in the 1980s. This book seemed like it WAS written by an engineer, but I didn't have all the prerequisites. I dipped into it anyway, and learned a few more things. Control theory can be divided into analysis and design, or in other words, prediction and control. 
On the control side, the distinction is drawn: if you want to force a variable to be constant, that is a control system; if you want it to follow an arbitrary signal, that is a servomechanism. (If what you're after isn't a point attractor, you must be messing with robots?!) In analysis, there are the classical frequency methods (using transfer functions) and the modern state space methods. A textbook example of classical analysis in this text is a DRIVEN MECHANICAL OSCILLATOR WITH SPRING AND DAMPER. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/ICDCS0003.jpg ) ( en.wikipedia.org/wiki/Harmonic_oscillator ) One instance of this would be a mass and spring as above, with a motor as the driving force and a dashpot ( en.wikipedia.org/wiki/Dashpot ) as the damping force. Another instance is an electrical circuit with a resistor (R), an inductor (L), and a capacitor (C), called an RLC circuit. ( en.wikipedia.org/wiki/RLC_circuit ) These types of systems are popular with mathematicians because they are LINEAR, and therefore SOLVABLE. ( en.wikipedia.org/wiki/Linear_system ) As Wikipedia explains, a system H is linear if, given two valid inputs x1(t) and x2(t), as well as their respective outputs y1(t) = H(x1(t)) and y2(t) = H(x2(t)), it satisfies:

alpha*y1(t) + beta*y2(t) = H(alpha*x1(t) + beta*x2(t))

for any scalar values of alpha and beta. A whole toolbox exists to analyze these systems, including the Laplace Transform for continuous systems, ( en.wikipedia.org/wiki/Laplace_transform ) and the Laurent Transform, also called the z-Transform, for analyzing systems producing pulses at discrete times (or systems sampled at discrete times). ( en.wikipedia.org/wiki/Laurent_transform ) The transforms are part of a family of methods based on a TRANSFER FUNCTION relating the output to input. 
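The superposition condition above can be tested numerically. This is a toy sketch of my own, using memoryless functions as stand-in "systems" just to illustrate the test; a squaring system fails it, which is exactly what makes it non-linear.

```python
# Check the linearity (superposition) condition quoted above:
# H(alpha*x1 + beta*x2) must equal alpha*H(x1) + beta*H(x2).
def satisfies_superposition(H, x1, x2, alpha=2.0, beta=-3.0):
    return H(alpha * x1 + beta * x2) == alpha * H(x1) + beta * H(x2)

doubler = lambda x: 2 * x   # a linear "system"
squarer = lambda x: x * x   # a non-linear one
```

With x1 = 5 and x2 = 7 the doubler passes (both sides come to -22) while the squarer fails (121 versus -97).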
From my notes in that control theory class at UCLA:

1) TF = Transfer Function = input/output ("black box") models; SS = State Space = parametric or "grey box" models
2) TF assumes system is relaxed, only relates I/O info; it does not provide any info about system's internal interactions
3) TF may however be easier to obtain
4) TF is for single-input-single-output (SISO) & time invariant systems; SS models can be multi-input-multi-output (MIMO) and time-varying
5) SS more suitable for digital simulation

A related concept is the LINEAR SYSTEM OF DIFFERENTIAL EQUATIONS. ( www.egwald.com/linearalgebra/lineardifferentialequations.php ) The more recent state space tools (mostly 1960s vintage) are based on taking an arbitrary linear system of ordinary differential equations (ODEs), such as:

w' = a1*w + a2*x + a3*y + a4*z
x' = b1*w + b2*x + b3*y + b4*z
y' = c1*w + c2*x + c3*y + c4*z
z' = d1*w + d2*x + d3*y + d4*z

(where w, x, y and z are variables while the values of a1 through d4 are constant) and expressing it as a MATRIX, in this case a 4 by 4 matrix:

| a1 a2 a3 a4 |
| b1 b2 b3 b4 |
| c1 c2 c3 c4 |
| d1 d2 d3 d4 |

and then giving that matrix a name, like M. The important thing is that, in this context, M means the same thing as the 4 ODEs. (Again, the genius is in removing something.) The system is called LINEAR because each variable's rate of change is a linear combination of the system variables. A list of values for all of the variables, in this case specific numbers for the four values w, x, y and z, is called a STATE VECTOR and completely specifies the state of the system. The set of all possible state vectors (in this case a 4-dimensional space) is called the STATE SPACE of the system. As the system transitions from state to state within the state space, it traces a path called a TRAJECTORY of the system. Using matrix algebra, it becomes possible to find solutions to many such linear systems of ODEs.
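To see why M "means the same thing" as the ODEs, note that one digital time step of the whole system is just one matrix-vector multiply. This is a sketch under my own assumptions (a 2x2 example matrix, forward Euler stepping), not from the source text.

```python
# One Euler time step of the linear system state' = M * state.
def mat_vec(M, v):
    # Multiply matrix M (list of rows) by vector v.
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

def euler_step(M, state, dt=1.0):
    rates = mat_vec(M, state)  # all the change rules, applied at once
    return [s + r * dt for s, r in zip(state, rates)]

# The oscillator from earlier, a' = b and b' = -a, as a matrix:
M = [[0, 1],
     [-1, 0]]
```

Stepping repeatedly from an initial state vector traces out a trajectory through the state space, exactly as described above.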


Dan Bricklin has spoken of watching his university professor create a table of calculation results on a blackboard. When the professor found an error, he had to tediously erase and rewrite a number of sequential entries in the table, triggering Bricklin to think that he could replicate the process on a computer, using the blackboard as the model to view results of underlying formulas. His idea became VisiCalc, the first application that turned the personal computer from a hobby for computer enthusiasts into a business tool. VisiCalc went on to become the first "killer app", an application that was so compelling, people would buy a particular computer just to own it. In this case the computer was the Apple II, and VisiCalc was no small part in that machine's success. -- Wikipedia entry for "spreadsheet" ( en.wikipedia.org/wiki/Spreadsheet ) When I first tried to learn matrix algebra I found it baffling. Everything seemed trivially simple right up to the point where it became completely opaque. Again this is probably because of the way it is taught. Imagine if you signed up for a woodworking class and spent months making a chisel, awl, plane, drill, adz, and file without ever using any of them, or even being told what they are for. But you sure knew how to sharpen them! Matrix algebra comes with a similar set of sharp tools, and most classes I've taken and texts I've read that cover the material concentrate on tool-sharpening at the expense of building birdhouses (to stretch the metaphor). One of the first tools you learn about is the inscrutable DETERMINANT. It begins with a simple definition for the 2D case. 
As the Wikipedia entry for determinant ( en.wikipedia.org/wiki/Determinant ) so succinctly says: The 2 x 2 matrix

A = | a b |
    | c d |

has determinant

det(A) = a*d - b*c

The determinant of a 3x3 matrix

B = | a b c |
    | d e f |
    | g h i |

is defined as (take a breath): for ANY ROW, the alternating sum of each element times the determinant of the sub-matrix formed by taking all the elements which it doesn't share a row or column with. In other words, expanding along the top row with signs +, -, +:

a*det(P) - b*det(Q) + c*det(R)

where

P = | e f |
    | h i |

Q = | d f |
    | g i |

and

R = | d e |
    | g h |

And it gets gnarlier from there. Note that I used the top row but you can use ANY ROW, with the appropriate alternating signs. This is part of the magic of linearity. Also note that adding a multiple of one row to another row leaves the determinant unchanged, while multiplying a row by a scalar multiplies the determinant by that scalar. For example, take

A = | 3 2 |
    | 6 4 |

the determinant is 3*4 - 2*6 = 0. If we replace the top row with 5 times itself, we get

A* = | 15 10 |
     |  6  4 |

so det(A*) = 15*4 - 6*10 = 0: five times the original determinant, which was zero, so it stayed zero. In attempting to "get my head around" matrix algebra during that oft-mentioned UCLA course, I found it useful to implement some matrix operations using a spreadsheet program, like Microsoft Excel. I've replicated a few of these and posted them for you to play with. The first computes determinants of both a 2x2 and 3x3 matrix. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/det2D3D.xls ) The next performs a rotation about the Z axis on a 3D vector. The 1x3 position vector is multiplied by a 3x3 rotation matrix to produce a 1x3 position vector. (In this one I churned out a few positions for a rotated point and plotted them as well.) ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/rot.xls ) And lastly I compute a general multiplication of a 1x3 vector and a 3x3 matrix to produce a 1x3 vector. 
( www.well.com/~abs/Cyb/4.669211660910299067185320382047/vA.xls ) If you punch different numbers into the inputs and watch how the outputs change, it will help develop your intuition for matrix operations; even more so if you create your own spreadsheets.
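If spreadsheets aren't your thing, the same determinant tools fit in a few lines of code. This is a sketch of the definitions above, including the alternating +, -, + signs of the cofactor expansion along the top row.

```python
# Determinants of 2x2 and 3x3 matrices, by the definitions above.
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    # Cofactor expansion along the top row; note the alternating signs.
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * det2([[e, f], [h, i]])
            - b * det2([[d, f], [g, i]])
            + c * det2([[d, e], [g, h]]))
```

Punching in different numbers here develops the same intuition as playing with the spreadsheet cells.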


In mathematics, an EIGENVECTOR of a transformation is a vector whose direction is unchanged by that transformation. The factor by which the magnitude is scaled is called the EIGENVALUE of that vector. Often, a transformation is completely described by its eigenvalues and eigenvectors. The EIGENSPACE for a factor is the set of eigenvectors with that factor as eigenvalue. In the specific case of linear algebra, the EIGENVALUE PROBLEM is: given an nxn matrix A, do there exist nonzero vectors x in R^n such that Ax is a scalar multiple of x? If so: the scalar is denoted by the Greek letter lambda and is called an EIGENVALUE of the matrix A, while x is called an EIGENVECTOR of A corresponding to lambda and the following has one or more solutions: Ax = lambda x. These concepts play a major role in several branches of both pure and applied mathematics -- appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear situations. -- Wikipedia entry for "Eigenvalue, eigenvector and eigenspace" ( en.wikipedia.org/wiki/Eigenvector ) So, did that quote from Wikipedia make sense? We are starting with a NxN matrix, which -- in its simplest interpretation -- is a set of linear rules for transforming an N-vector into another N-vector. What we want to know is: does an input vector exist which is ONLY SCALED when it is transformed by the matrix? There are techniques for determining if the eigenvectors exist, and if so finding them. These techniques use determinants from matrix theory and algebraic tools like the quadratic equation. ( tutorial.math.lamar.edu/AllBrowsers/3401/LA_Eigen.asp ) But what does it MEAN? So what if there are vectors that are only scaled by a matrix? These are the places in the state space that have their ANGLE UNCHANGED by the matrix. 
The Wikipedia entry for "Eigenvalue, eigenvector and eigenspace" ( en.wikipedia.org/wiki/Eigenvector ) has an illustration that is useful: the Mona Lisa under a "shear transform" (sometimes called "scissoring" or "racking") showing two eigenvectors which maintain their angles. I thought about the conformal maps I'd visualized, and which ones could be done with a 2x2 real matrix operating on a 2-vector, instead of algebra on a complex value s. (To review 2D matrix algebra: if the input is |x, y| and the matrix is

| a b |
| c d |

then |a*x + b*y, c*x + d*y| is the output.) IDENTITY of course,

g(s) = s

can be done with the matrix:

| 1 0 |
| 0 1 |

In this matrix every vector is an eigenvector with eigenvalue = 1. And the SCALE function,

g(s) = s*r

where r is a real number constant, is performed by the matrix:

| r 0 |
| 0 r |

Also in this matrix every vector is an eigenvector with eigenvalue = r. I puzzled a while over ROTATE:

g(s) = s*j

(where j is the imaginary quantity). Its matrix is:

| 0 -1 |
| 1  0 |

Solving for the eigenvalue I found the quadratic equation had a negative number under the radical (square root sign), so there were no real roots. That makes sense, because only the point at {0 0} is untransformed by the matrix, being the center of rotation, and that eigenvector is specifically disallowed by the warranty. The others, TRANSLATE:

g(s) = s + c

SQUARE:

g(s) = s^2

and INVERSE:

g(s) = 1/s

are all non-linear and cannot be expressed with a 2x2 matrix. As I played with this concept, I asked myself what its deeper meanings were. What did the name mean? German for "self-vector." Talking to friends about it, I realized perhaps a better translation would be "ego vector." Sort of a "Triumph of the Will" concept. Under the slings and arrows of the transformation matrix only the powerful ego vectors maintain their direction! Also, I should note that if you type "eigenvector" into Google Images ( images.google.com ) you get some very interesting pictures.
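For 2x2 matrices the whole eigenvalue hunt is one application of the quadratic formula. This is a sketch, not from the source: the characteristic equation lambda^2 - (a+d)*lambda + (a*d - b*c) = 0, solved with a complex square root so the rotation matrix's "missing" roots show up too.

```python
import cmath

# Eigenvalues of the 2x2 matrix | a b | over | c d |, via its
# characteristic equation lambda^2 - (a+d)*lambda + (a*d - b*c) = 0.
def eigenvalues_2x2(a, b, c, d):
    trace = a + d
    det = a * d - b * c
    disc = cmath.sqrt(trace * trace - 4 * det)  # complex roots allowed
    return ((trace + disc) / 2, (trace - disc) / 2)
```

For the IDENTITY matrix both eigenvalues are 1, for SCALE both are r, and for the ROTATE matrix they come out as the complex pair j and -j, foreshadowing the next section.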


An example of an unstable control system is the automatic temperature control of an electric blanket with dual controls, where the husband and wife each has the wrong temperature control. As the wife selects a higher desired temperature, the extra heat is applied to the husband, who reduces the temperature setting on his controller. However, this action lowers the temperature of the portion of the blanket on his wife, who in turn selects an even higher temperature on her controller. This process continues until the wife's side of the blanket is completely off and the husband's side of the blanket is at maximum temperature. In this example the controlled quantity, temperature, moved in the opposite direction to the desired value, and thus it represents an unstable system. -- Roberto Saucedo and Earl E. Schiring, 1968 "Introduction to Continuous and Digital Control Systems" ( www.amazon.com/exec/obidos/ASIN/B000H4H4WG/hip-20 ) One of the most common things an engineer needs to determine about a mathematical model of a system is its STABILITY. If you are using the old classical approach of transfer functions, you find a RATIONAL TRANSFER FUNCTION of the form:

X(z) = P(z)/Q(z)

and then determine when P(z) = 0 (the ZEROS of the system) and Q(z) = 0 (the POLES of the system) and plot them in the complex plane as a POLE-ZERO PLOT. ( en.wikipedia.org/wiki/Pole-zero_plot ) As Wikipedia explains: The region of convergence for a given transfer function is a disk, punctured disk, or annulus which contains no poles. If the disc includes the unit circle, then the system is BIBO stable. The referenced article on BIBO stability ( en.wikipedia.org/wiki/BIBO_stability ) defines: BIBO Stability is a form of stability for signals and systems. BIBO stands for Bounded Input/Bounded Output. If a system is BIBO stable then the output will be bounded for every input to the system that is bounded. 
(A signal is bounded if there is a finite bound B > 0 such that the signal magnitude never exceeds B.) This is a system that is guaranteed not to blow up if you don't feed it an already-blown-up signal. To tell you the truth this classical stuff seems like so much voodoo to me, operating in a FREQUENCY DOMAIN that I have never understood very well. But luckily, here in the digital age, we don't need to do that very much any more. Instead, using the state space approach, we express our systems as ODEs, then express the ODEs in matrix form, and then -- guess what? -- find the EIGENVECTORS and EIGENVALUES of the system! I got the following from "Nonlinear Ordinary Differential Equations" (book, 1977) by D. W. Jordan and P. Smith. ( www.amazon.com/exec/obidos/ASIN/0199208247/hip-20 ) (Why am I providing a linear solution from a book on non-linear equations? Well, it was near the front of the book, in a section on linearly approximating nonlinear systems.) [Given:]

x' = a*x + b*y, y' = c*x + d*y

It is known that there are nontrivial solutions of [these equations] of the form:

x = r*e^(lambda*t), y = s*e^(lambda*t)

where r and s are related constants. Lambda, of course, is an eigenvalue of the system (usually there are two). It is found as a root of the CHARACTERISTIC EQUATION:

lambda^2 - (a + d)*lambda + (a*d - b*c) = 0

Now, the tricky thing is that sometimes these roots are complex. What? Eigenvalues can be complex? Why, yes, and that's when it gets interesting. ( www.sosmath.com/matrix/eigen3/eigen3.html ) In the last section I puzzled over a rotation matrix:

| 0 -1 |
| 1  0 |

explaining that: Solving for the eigenvalue I found the quadratic equation had a negative number under the radical (square root sign), so there were no real roots. If I had continued anyway, I would've found two complex roots:

lambda = j

and:

lambda = -j

which makes possible a CIRCULATING ATTRACTOR, or an OSCILLATOR. 
It also enables a SPIRAL ATTRACTOR as explained in an on-line essay on "Systems With Complex Eigenvalues." ( ltcconline.net/greenl/courses/204/Systems/complexEigenvalues.htm ) (Scroll down to phase portrait for a picture of the attractor.) As explained in the on-line essay "The PhasePlane for a Linear system" ( www.math.pitt.edu/~bard/classes/xppfast/lin2d.html ) the locations of eigenvalues characterize the behavior of the system: [Looking] at the phaseplane for the two-dimensional linear system of differential equations: x' = a*x + b*y, y' = c*x + d*y We know that the behavior of this system is completely determined by the eigenvalues of the matrix A whose entries are a,b,c,d. These are the normal possibilities: * Saddle point -- one positive and one negative eigenvalue * Unstable node -- two positive real eigenvalues * Stable node -- two negative real eigenvalues * Unstable vortex or spiral -- complex eigenvalues with positive real parts * Stable vortex or spiral -- complex eigenvalues with negative real parts In addition there are a number of degenerate cases: * Center -- a pair of pure imaginary eigenvalues * Degenerate node -- two identical eigenvalues * Line field -- a zero eigenvalue Once you allow this enhancement, you get the new-school definition of stability: A LINEAR SYSTEM IS STABLE IF ITS EIGENVALUES HAVE NEGATIVE REAL PARTS. ( en.wikibooks.org/wiki/Control_Systems/State-Space_Stability ) Solutions are of the form k*e^(lambda*t), and e to a complex power sigma + omega*j is defined: e^(sigma + omega*j) = e^sigma*(cos(omega) + j*sin(omega)) Since we know a cosine is going to stay safely between -1 and 1, clearly sigma must be negative to keep the result from expanding without limit exponentially over time. In the on-line course notes for "MAS375: MODELLING AND SIMULATION" by Dr. 
Mark Lukas at Murdoch University, Perth, Australia, ( www.maths.murdoch.edu.au/units/mas375/unitnotes/unitnotes.pdf ) chapter 3, starting on page 23, has some interesting illustrations of software solutions to system equations. These days most on-line information about using eigenvalues to determine stability is part of course notes -- and especially lab notes -- involving modelling software, such as Matlab or Mathematica, to study systems theory. The on-line essay "Linear System of Differential Equations" from the University of Massachusetts Dartmouth shows solutions using the program TEMATH. ( www2.umassd.edu/temath/TEMATH2/Examples/LinearSystemsOfDiffEqs.html ) This is a great way to learn this stuff, and I recommend it.
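Before leaving the linear case: that whole phase-portrait taxonomy fits in a dozen lines of code. Here is a sketch of my own in Python (the classification logic is mine, covering only the generic cases and lumping the degenerate ones together):

```python
import numpy as np

def classify(a, b, c, d, tol=1e-12):
    """Classify the phase portrait of x' = a*x + b*y, y' = c*x + d*y
    by the eigenvalues of the matrix [[a, b], [c, d]]."""
    lam = np.linalg.eigvals([[a, b], [c, d]])
    re = lam.real
    if abs(lam[0].imag) > tol:                 # complex conjugate pair
        if abs(re[0]) <= tol:
            return "center"
        return "stable spiral" if re[0] < 0 else "unstable spiral"
    if re[0] * re[1] < 0:                      # real, opposite signs
        return "saddle point"
    if re[0] < 0 and re[1] < 0:
        return "stable node"
    if re[0] > 0 and re[1] > 0:
        return "unstable node"
    return "degenerate (zero or repeated eigenvalue)"

print(classify(0, -1, 1, 0))     # the rotation matrix: a center
print(classify(-1, 0, 0, -2))    # two negative real eigenvalues: stable node
```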


It's control. All these things arise from one difficulty: control. For the first time it was inside, do you see. The control is put inside. No more need to suffer passively under 'outside forces' -- to veer into any wind. -- Thomas Pynchon, 1973 "Gravity's Rainbow" (novel) ( www.amazon.com/exec/obidos/ASIN/0140188592/hip-20 ) So far, since the section entitled "THE LINEAR CASE," we have been looking at the ANALYSIS side of control theory. But an engineer must eventually move to the DESIGN side. This means taking a system that isn't behaving as desired and ADDING COMPONENTS (usually involving feedback) to create a NEW SYSTEM with the desired behavior in its stable modes. For example, adding a governor to a steam engine. The on-line publication "Design of Simple Digital Controllers" (August 1996) by Ming T. Tham of the Department of Chemical and Process Engineering at the University of Newcastle upon Tyne, UK, ( lorien.ncl.ac.uk/ming/digicont/control/digital1.htm ) summarizes the process. One of the limiting trade-offs that comes up in control system design is accuracy versus stability. As Saucedo and Schiring ( www.amazon.com/exec/obidos/ASIN/B000H4H4WG/hip-20 ) explain: Accuracy ... In the open-loop system the output ... depends on [the input] completely. Any imperfections or changes in [the input] from the nominal condition result directly in output voltage inaccuracy. However, for the feedback configuration, a change in the output from a desired value is reflected in an error signal ... The error signal causes a change in the output in a direction to bring the output toward the desired voltage. Thus changes in the output are detected and corrected by the feedback[, so the closed-loop system] has greater accuracy. ... Stability Most physical systems are inherently stable open-loop, but the addition of feedback can cause the closed-loop system to be unstable. In fact, the greater the final accuracy desired, the less stable the system becomes. 
For high accuracy, the gain associated with [the input] should be high, so that even the smallest detectable error signal may be amplified and produce a correction to the output. However, for high values of forward path gain, the corrective action produced at the output can be too large, with resulting overshoot or undershoot of the output from its original offset condition. The error then reverses sign, and the corrective action also reverses. If the gain is too large, the output can start oscillating with either sustained or increasing amplitude, obviously an unstable situation. Thus the features of accuracy and stability are opposites in the sense that increasing accuracy decreases stability, and vice versa. In other words, the goals of EXACT REPRODUCTION and NOISE REJECTION are in basic conflict. Saucedo and Schiring also talked about "deadbeat response systems." These are systems in which the specified controller response is at discrete time steps; the system requirements specify values only at time "ticks" in a sample-data stream. ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/ICDCS0002.jpg ) It turns out these systems have an odd failure mode, as in an example of a time-sampled controller for a missile launcher: ...the output sequence agrees with the actual launcher movement (at the sampling instants) and behaves as predicted. In between samples, however, the launcher oscillates severely. It does little good to point the launcher quickly and accurately at a hostile target and settle to zero steady-state error at the sampling instants and yet the remainder of the time, which is in the majority, to wave it haphazardly at the heavens. These oscillations, which exist in the real world and are not predictable by use of z transforms (and indeed are not seen by the digital computer), are called HIDDEN OSCILLATIONS. They hide behind the multitude of [terms]. ... 
This is characteristic of deadbeat response systems: they are highly tuned to a specific input function. ... Moreover, these high-frequency ripples, or hidden oscillations, would surely excite the mechanical resonances stated previously. The deadbeat idea, however, is intriguing and warrants further investigation. But it occurs to me that in the digital age, it might be possible to simulate such a super-sensitive controller while monitoring for instability, and switching to another, more stable technique (probably involving more time samples) when necessary. Indeed, a Google search turned up a commercial offering that uses deadbeat techniques in chemical engineering and process control. ( www.gossenmetrawatt.com/english/seiten/licenceforuseofdead-beatpdpicontrolalgor.htm )
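The hidden-oscillation idea is easy to reproduce. A signal at exactly half the sampling rate reads as zero at every sampling instant while swinging at full amplitude in between -- invisible to the digital computer, just as the quote says. A tiny Python illustration of my own (not the book's missile-launcher example):

```python
import math

# A sinusoid with period 2, sampled at t = 0, 1, 2, ...:
# exactly half the sampling rate.
def signal(t):
    return math.sin(math.pi * t)

samples = [signal(n) for n in range(10)]        # ~0 at every sampling instant
between = [signal(n + 0.5) for n in range(10)]  # alternates +1, -1 in between

# The sampler sees a flat zero signal; the real waveform is
# oscillating at full amplitude between the samples.
```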


Finally, there is a physical problem that is common to many fields, that is very old, and that has not been solved. It is the analysis of circulating or turbulent fluids. The simplest form of the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. You will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not. -- Richard Feynman, 1963 "The Feynman Lectures On Physics" ( www.amazon.com/exec/obidos/ASIN/0201021153/hip-20 ) quoted in "Turbulence in Nature and in the Laboratory" by Z. Warhaft ( www.pubmedcentral.nih.gov/articlerender.fcgi?artid=128565 ) Okay, deep breath. MOST OF THE ABOVE INFORMATION IS, IN THE GENERAL CASE, WRONG. It applies only to the linear cases. Many control engineers have spent careers modelling, analyzing and controlling linear systems, only to have intuitive problems with the nonlinear ones. So how do you attack the nonlinear systems? Well, as it turns out, one fruitful approach has been behind the door marked "unexplained phenomena." In the early eighties, following a review in the CoEvolution Quarterly, ( en.wikipedia.org/wiki/CoEvolution_Quarterly ) I acquired a copy of "Sensitive Chaos: The Creation of Flowing Forms in Water and Air" (book, 1965) by Theodor Schwenk. ( www.amazon.com/exec/obidos/ASIN/1855840553/hip-20 ) This argument for a living universe from an Anthroposophy point of view relies mostly on photographs of living creatures compared with swirling water to argue that ALL WATER HAS CONSCIOUSNESS. 
Well, I don't know about that, but I enjoyed the pictures, and I began thinking of the book as a gallery of unsolved problems in fluid dynamics. ( jonathanmackenzie.net/aeoc/schwenk.htm ) Almost a decade later when I worked for Stellar helping to sell graphics supercomputers to scientists, I accepted every invitation to speak I received, and one of them was from USC's honors engineering students. I prepared a set of overheads that began with a slide from Schwenk's flowing, turbulent water pictures. "Does anyone know what this is?" I asked. Someone guessed, "Turbulence." "That's right," I said. Then I showed a slide of the Cantor Set. ( en.wikipedia.org/wiki/Cantor_set ) "Does anyone know what this is?" I again asked. "Cantor dust," another student said. "Very good." As Wikipedia explains: The Cantor set is created by repeatedly deleting the open middle thirds of a set of line segments. One starts by deleting the open middle third from the interval [0, 1], leaving two line segments: [0, 1/3] and [2/3, 1]. Next, the open middle third of each of these remaining segments is deleted. This process is continued ad infinitum. The Cantor set contains all points in the interval [0, 1] that are not deleted at any step in this infinite process. I then suggested that what these two had in common, the observed turbulent flow and the theoretical "fractal" from 1884, was nonlinearity. At the time, by hanging out with scientists doing Computational Fluid Dynamics (CFD) with supercomputers, I was learning to think a new way about cream poured into a clear glass of iced coffee, or of cigarette smoke rising under a lamp's light in a black and white movie, or of meandering rivers depositing silt at the meanders as they slow to make the turn, and then extending the meander farther out to get around the silt, deepening the channel on the outside bank, or even cuckoo clocks hung on the same wall synchronizing their pendulums.
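Wikipedia's middle-thirds recipe translates directly into code. A short Python sketch of my own that carries out the deletions:

```python
def cantor_intervals(depth):
    """Closed intervals remaining after `depth` rounds of deleting
    the open middle third of every interval, starting from [0, 1]."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        next_level = []
        for lo, hi in intervals:
            third = (hi - lo) / 3.0
            next_level.append((lo, lo + third))    # keep the left third
            next_level.append((hi - third, hi))    # keep the right third
        intervals = next_level
    return intervals

# After 2 rounds: [0,1/9], [2/9,1/3], [2/3,7/9], [8/9,1],
# with total length (2/3)^2 = 4/9.
print(cantor_intervals(2))
```

Each round keeps 2/3 of what was there, so the total length (2/3)^n shrinks to zero, yet uncountably many points survive -- that's the "dust."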


But it was a neat theory, and he was in love with it. The only consolation he drew from the present chaos was that his theory managed to explain it. -- Thomas Pynchon, 1963 "V." (novel) ( www.amazon.com/exec/obidos/ASIN/0060930217/hip-20 ) I had a professor once who said "everything is linear if you break it down." I said, "Yeah, and everything's dead if you kill it." One of the most common methods used to study nonlinear systems is to try and "linearize" them somehow. You do this when you have to, I suppose, but it seems to me you will miss all the non-linear weirdness this way. It's like if you only studied lungfish when they were on land, you would conclude they acted a lot like land animals. You would report that they never swim or breathe water. The reason everybody wants to do this, of course, is because the nonlinear equations are UNSOLVABLE. But they cannot be denied. By hook or by crook we have to do something with them. From the 1960s to the 1980s there was a paradigm shift in mathematical physics that resulted in the emergence of CHAOS THEORY, when researchers finally began to notice and document all the bizarre behaviors that nonlinear systems could exhibit: STRANGE ATTRACTORS such as the Lorenz Butterfly, ( en.wikipedia.org/wiki/Lorenz_attractor ) Rossler Bands, ( hypertextbook.com/chaos/eyecandy/strange-rossler.html ) and others: ( sprott.physics.wisc.edu/fractals/cat00002.gif ) ( sprott.physics.wisc.edu/fractals.htm ) Of course later it was figured out that some of the best tools for dealing with these beasts were developed by Russians in the 1930s, such as the Lyapunov Exponent. ( en.wikipedia.org/wiki/Lyapunov_exponent ) Wikipedia also has this connection with eigenstuff: Whereas the (global) Lyapunov exponent gives a measure for the total predictability of a system, it is sometimes interesting to estimate the local predictability around a point x0 in phase space. This may be done through the eigenvalues of the Jacobian matrix J^0(x0). 
These eigenvalues are also called local Lyapunov exponents. These new (and newly-rediscovered) models began to link up with some of the old unexplained phenomena, such as John Scott Russell's 1834 discovery of the WAVE OF TRANSLATION (later named the SOLITON), first seen as a long-distance traveling wave on a canal, or rogue waves on the oceans long reported by sailors and pooh-poohed by scientists. I have previously written of the numerical experiments by Fermi, Pasta and Ulam in the 1950s ( www.osti.gov/accomplishments/pdf/A80037041/A80037041.pdf ) and Wolfram in the 1990s ( www.wolframscience.com ) investigating nonlinear systems through simulation. One driving force for the new science of COMPLEXITY being proclaimed has been a desperate need for new models in economics. John Reed was CEO of Citibank, with about $300 billion in debt owed by Third World nations on his books, and it was looking pretty shaky. But his economists, using linear "equilibrium" based computer models, told him that default was impossible. You know what happened. In the aftermath Reed helped fund and found the Santa Fe Institute to research nonlinear economic models (among many other things). This is described in "Complexity: The Emerging Science at the Edge of Order and Chaos" (book, 1992) by Mitchell M. Waldrop. ( www.amazon.com/exec/obidos/ASIN/0671872346/hip-20 ) An early approach to controlling chaotic systems was to drive them out of the chaotic region, into a near-linear mode, and then use linear techniques. This didn't always work so well. In his landmark paper "Controlling Cardiac Chaos" (1992) (Science, Vol 257, Issue 5074, 1230-1235), ( www.sciencemag.org/cgi/content/abstract/257/5074/1230 ) Dr. Alan Garfinkel and his colleagues showed how to control a chaotic heartbeat without leaving the chaotic region, merely by understanding it (and doing real-time digital computations to know how to regulate it). Recent work in the U.S. 
and Japan on non-linear N-body celestial mechanics has found energy-saving chaotic trajectories from the earth to the moon, for example.
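To make the Lyapunov exponent concrete, here is a sketch of my own in Python, using the standard recipe of averaging log|f'(x)| along an orbit, applied to the logistic map x -> r*x*(1-x):

```python
import math

def lyapunov_logistic(r, x0=0.3, n=10000, burn=100):
    """Estimate the Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)|
    along an orbit, after discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        # the max(..., 1e-300) guards against a (very unlikely) log(0)
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
        x = r * x * (1 - x)
    return total / n

print(lyapunov_logistic(4.0))   # positive, near ln 2: chaos
print(lyapunov_logistic(2.5))   # negative: a stable fixed point
```

A positive exponent means nearby orbits fly apart exponentially; the "local" version mentioned by Wikipedia comes from the eigenvalues of the Jacobian at a point instead of this long-run average.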


1) Nature is more complex than we know, and probably more complex than we can know. 2) Everything has to go somewhere. 3) There is no such thing as a free lunch. 4) Nature knows best. -- Barry Commoner, 1971 "The Closing Circle" ( www.amazon.com/exec/obidos/ASIN/0553202464/hip-20 ) I suspect some of you may have voted for this topic because you thought it would be about ecology a la Barry Commoner's "Everything has to go somewhere." No such luck. Sorry I didn't mention it sooner. For me the phrase is a sort of shorthand for some lessons I've learned about the geometry of systems theory. (After all, topology was invented to analyze nonlinear systems.) If you have a blob of something and you want to minimize its surface area, what shape must it form? A sphere of course. But what if you want to MAXIMIZE its surface area? There's really no right answer, without additional constraints. Some sort of fractal is required, certainly. The air cooling fins on a motorcycle engine are a step in the right direction. A tree filled with leaves seems to be solving a similar problem, maximizing sun-catching surface per unit of resources. I think of this when I look at the cigarette smoke under the 1940s lampshade in a noir genre movie. If the smoke column were attempting to MINIMIZE its length it would be a straight line. But the smoke is very hot, and so is expanding relative to the cooler air around it. It needs to stretch, to "go somewhere," and so it forms the twists and gnarls that cinematographers love so much. There is one fact which seems to have an overriding influence on system trajectories, and that is that, BY DEFINITION, the STATE of the system (its current state vector in the state space) is ALL YOU NEED TO KNOW in order to predict its future behavior. If not, there is a HIDDEN VARIABLE and you don't have the complete state. 
But if you DO have the complete state specified, then if the system ever returns to a state it has been in before, it is now in a so-called INFINITE LOOP and will surely LOOP FOREVER. So, think about this. A system at any instant will transition either to: a state it has never been in before, or a state it HAS been in before, and now it's in an infinite loop. So suppose you have your trusty lab computer observe some nonlinear systems for a while, and throw away any observations of loops -- oscillations and repetitions. Eventually you look at the data to find a whole zoo of ways to "not loop." The system keeps having to go someplace "new" in order to not loop. In a Darwinian fashion, chaos has been selected for. We have found that in two dimensions chaos is impossible; attractors can only be topological distortions of points, lines fleeing to infinity, circles, and spirals (in and out). But in three dimensions and higher the "strange" attractors keep finding "new places to go." I keep thinking there is further weirdness waiting for us in the higher dimensions.
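That dichotomy -- find somewhere new to go, or revisit a state and loop forever -- can be watched directly in a deterministic system with only finitely many states, where the pigeonhole principle forces a revisit. A small Python sketch of my own illustration:

```python
def run_until_repeat(step, state, limit=100000):
    """Iterate a deterministic map until some state recurs.
    Once a complete state repeats, the future is a loop forever."""
    seen = {}          # state -> time it was first visited
    t = 0
    while state not in seen and t < limit:
        seen[state] = t
        state = step(state)
        t += 1
    if state in seen:
        return seen[state], t   # when the loop was entered, when it closed
    return None                 # never repeated within the limit

# A map on only 255 possible states MUST eventually revisit one:
entry, closed = run_until_repeat(lambda x: (x * x + 1) % 255, 3)
# From time `entry` on, the trajectory cycles with period closed - entry.
```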


Ask the Germans especially. Oh, it is a real sad story, how shoddily their Schwarmerei for Control was used by the folks in power ... "Paranoid Systems of History" ... has even suggested ... that the whole German Inflation was created deliberately, simply to drive young enthusiasts of the Cybernetic Tradition into Control work ... If any of the young engineers saw correspondence between the deep conservatism of Feedback and the kinds of lives they were coming to lead in the very process of embracing it, it got lost, or disguised -- none of them made the connection... -- Thomas Pynchon, 1973 "Gravity's Rainbow" (novel) ( www.amazon.com/exec/obidos/ASIN/0140188592/hip-20 ) An enterprising turkey gathered the flock together and, following instructions and demonstrations, taught them how to fly. All afternoon they enjoyed soaring and flying and the thrill of seeing new vistas. After the meeting, all of the turkeys walked home. -- Merlin R. Lybbert "Ensign" May 1990, p. 82 So, what's the point of these mental exercises? I would argue that it is to improve your intuition, gentle readers. I don't suppose that very many of you are going to become control engineers and try to figure out the harmonic modes of analog electrical circuits, or design mechanical oscillators using springs, dashpots and motors, so the specific knowledge of zero-pole plots and eigenvectors won't be of much use to you. But all of us deal with complex systems, and a better "feel" for how these systems CAN behave would do us all some good. I remember in his book "Cybernetics" (1948) ( www.amazon.com/exec/obidos/ASIN/026273009X/hip-20 ) Norbert Wiener describes how a fire-control system which he designed during WWII keeps an anti-aircraft gun aimed at the point in the sky where an airplane is EXPECTED to be by the time the bullets reach it; all the operator has to do is keep the gun sight aimed at the aircraft's CURRENT position. 
If the operator doesn't move the sight for a while -- or wanders off -- the gun stays pointed at a fixed point. The fire control system "thinks" the plane has no relative sideways motion -- i.e. is approaching or receding -- and so aims straight at it. Wiener points out that if you slap your hand against the gun, it feels solid, like it is mounted in concrete. But that's just the fire control system responding to feedback quickly with an opposite force. If you switch off the power, the gun falls passively to the deck. From this I learned that whatever feels solid in fact has a feedback system behind it. Even if a pole IS mounted in concrete there is a feedback system behind it; a solid object anchored to the Earth has the whole planet to help it push back when pushed. Another thing I've learned from systems theory is the importance of understanding time-lag effects. Attempts to control systems can paradoxically drive them into instability. Recall trying to adjust the temperature of a shower with a long lag time between turning the faucet and feeling the temperature change. I remember when I was a freshman in college I wanted to be a high school math teacher, but at a meeting for education majors I was warned of an impending teacher "glut," and changed my plans. By the time I was a senior there was a huge teacher shortage. The education department didn't understand time lag effects. Still another thing I've learned from systems theory is that what seems paradoxical when analyzed from a physics perspective can make sense when explained with systems theory. For example, various collections of "Murphy's Laws" often include the paradoxical "law" that: Any stone in your boot always migrates against the pressure gradient to exactly the point of most pressure. A systems theory explanation for this effect is that the point of highest pressure gradient pinches the tightest. During hiking, footsteps create an oscillating pressure field, in which stones can rattle around. 
This they do until they are pinched so tight that they can't move. The "point of most pressure" is therefore an ATTRACTOR. In his essay "Cybernetic Explanation" -- reprinted in "Steps To An Ecology of Mind" (book, 1972) -- ( www.amazon.com/exec/obidos/ASIN/0226039056/hip-20 ) Gregory Bateson says: Causal explanation is usually positive. We say that billiard ball B moved in such and such a direction because billiard ball A hit it at such and such an angle. In contrast to this, cybernetic explanation is always negative. We consider what alternate possibilities could conceivably have occurred and then ask why many of the alternatives were not followed, so that the particular event was one of the few which could, in fact occur... In cybernetic language, the course of events is said to be subject to restraints, and it is assumed that, apart from such restraints, the pathways of change would be governed only by equal probability. These are lessons that control engineers (sometimes) learn, and I would like more people to have them available, especially those who make a large difference. We need more systems theory understanding among leaders in government (and their electorates), not just among the engineers who design the guidance systems for the weapons they command.


Remittance men from all over the world will come to Heidelberg before long, to major in guilt. ... Sorry -- not for Achtfaden here, shrugging ... -- he only worked with [the V2 rocket] up to the point where the air was too thin to make a difference. What it did after that was none of his responsibility. Ask ... the re-entry people. Ask the guidance section, they pointed it where it was going. . . . -- Thomas Pynchon, 1973 "Gravity's Rainbow" (novel) ( www.amazon.com/exec/obidos/ASIN/0140188592/hip-20 ) As we saw above, it seems creepily easy to open a control theory text, like Saucedo and Schiring, and encounter -- as if it were nothing special -- a sentence such as: It is desired to design a missile launcher control system to seek and follow a hostile target in a prescribed manner. Often there is an accompanying diagram of the missile launcher blowing stuff up "in a prescribed manner." ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/ICDCS0001.jpg ) So, you want me to help improve the efficiency of a machine for bringing agonizing death to humans? Without a thought as to the moral implications? Oh, we're just going to use it as an example to learn something about Truxal's Method (or whatever), so LATER we can improve actual missile launchers? Sooner or later every cyberneticist runs into the war work issue. My story is far from unique. I remember in the late 1970s the two scariest new phrases I learned were: "acquired immune deficiency syndrome," and "cruise missile." I think the scariness in the concept of AIDS is pretty obvious. Almost as obvious is the danger of an undetectable ("stealth") delivery system for nuclear weapons. (ICBMs, in contrast, are detectable but mostly unstoppable.) There is the danger of the technology "falling into the wrong hands." There is also the danger of power corrupting -- politicians tend to use the weapons they are given. At one point in my career I attempted to refuse to work on weapons projects. 
I took a job in 1979 reluctantly with a "computer output to microfilm" (COM) company called Datagrafix, knowing the parent company, General Dynamics (isn't that a great name?) made Tomahawk Cruise Missiles. ( en.wikipedia.org/wiki/BGM-109_Tomahawk ) I thought I'd be insulated from that side of the business. But during my first week on the job, I was sent to the General Dynamics facility in the Kearny Mesa neighborhood of San Diego for a training class, the same facility where the Atlas ballistic missiles were built in the 1950s and '60s, ( www.globalsecurity.org/space/facility/atlas_f.htm ) and there on the wall of the classroom was a diagram of the Tomahawk. ( www.ausairpower.net/Tomahawk-Subtypes.html ) Arg! I stayed for six months before fleeing. (The company had other dysfunctions besides making robot kamikazes.) A few years later I found myself at a company making a pioneer 3D graphics system, the Poly 2000 by GTI Corp. "The applications were endless" we heard during the R&D phase, but when we had a product ready to sell we found out our only viable market was military simulators. Arg! I ended up fleeing that company as well, for several reasons, and I ended up getting job offers from two customers who'd bought the 3D graphics systems: Hughes Aircraft Company in El Segundo, researching combat helicopter design, and Rockwell International in Downey, chasing the primary NASA contract for building the space station Reagan had announced in 1983. I chose the civilian space project over the weapons project. But then at Rockwell the civilian business got slow and I was assigned to do 3D graphics for Reagan's "Star Wars" program, the Strategic Defense Initiative (SDI), which he also announced in 1983. This crash program was recommended by good old Dr. Edward Teller, "Father of the H-Bomb." (George Gamow teased him with a thought experiment in which he shook hands with "Dr. Anti-Teller" and was annihilated in a burst of gamma rays.) 
( en.wikipedia.org/wiki/Edward_Teller ) The goal of SDI was to stop those Soviet ICBMs -- moving at supersonic speeds and armed with Multiple Independently-targetable Re-entry Vehicles (MIRVs) with H-bomb warheads -- in mid-flight. ( en.wikipedia.org/wiki/MIRV ) My conscience was troubled. I kept working, but was plagued by a series of bicycle accidents culminating in the one that totaled my bike and put me in the hospital. My unconscious mind was trying to tell me something. My life was incongruent. My values didn't match my actions. I did some soul searching, and I realized that if I thought weapons work was wrong, refusing to participate wasn't good enough. They were taking money out of my paycheck every month to build this stuff no matter where I worked. No personal boycott was going to make a whit of difference. I made some decisions: 1) I was being too hard on myself. I have a right to support my family. I needed to forgive myself for war work. 2) If I wanted to make a difference, I needed to WORK WITHIN THE SYSTEM to make a difference. 3) I still didn't LIKE to do war work, and would avoid it when I could. My next career move was to a sequence of companies that sold tools for supercomputing and scientific visualization. They seemed like generic scientific tools, but somehow our best customers were designing stealth aircraft or H-bomb detonators at places like the Lockheed Skunkworks ( en.wikipedia.org/wiki/Lockheed_Skunkworks ) or Sandia National Labs. ( en.wikipedia.org/wiki/Sandia_National_Laboratories ) But I was patient and my opportunity to make a difference came. I helped the company land some large commercial accounts in medical imaging and integrated circuit design, and prepared for the day in 1989 when the Berlin Wall fell and all the aerospace engineers said "Oh s###!, now what do we do?" It was one of the times in my life I felt the centrifugal force from being close to the "hinge of history." 
My company was able to survive in the post-cold war and post-Soviet Union world, partly due to my efforts, and all over America there were similar "sword into plowshare" moves that I believe ended up giving our economy an uplift. But before that I took my wife on vacation in New Mexico, and she asked to visit the National Atomic Museum at Sandia Labs, ( www.atomicmuseum.com ) and we saw a craft that was labeled "America's first cruise missile." I'm pretty sure it was a North American X-10. ( www.fas.org/nuke/guide/usa/icbm/n19980710_981014.html ) As the Boeing web site ( www.boeing.com/phantom/xplanesdt.html ) describes it: As part of the Navaho missile program, the X-10 test drone was essentially the first cruise missile. The excessive weight of nuclear warheads at the time initiated the investigation of this unusual delivery system. From its first flight on October 14, 1953, the X-10 was controlled by either a pilot on the ground or from the backseat of an ET-33 chase plane. The X-10 program fostered a multitude of contributions into follow-on systems. Staring at it, I said, "Of course, the real 'first cruise missile' was the buzz bomb -- the V-1." ( en.wikipedia.org/wiki/V-1_flying_bomb ) Wikipedia summarizes: The Vergeltungswaffe-1, V-1, ... known colloquially in English as the Flying bomb, Buzz bomb or Doodlebug, was the first guided missile used in war and the forerunner of today's cruise missile. Of course it was launched by the Germans against the English. It was really just a little robot airplane with a bomb aboard. I heard that they'd buzz as they flew, and when they ran out of fuel they'd STOP buzzing, and that's when you'd dive for cover, because they'd fall to the ground and explode. I saw that the development of guided weapons was gradual and probably unstoppable. "Cruise Missile" was a new marketing term for something in the works since archery. I realized I'd made my peace with these war machines. 
When the Tomahawk was used in Gulf Wars I, II and III to save American lives -- but only strategically and sparingly, because they are REALLY EXPENSIVE (after the radars are knocked out you send in planes) -- I actually bought a Tomahawk T-shirt at a local air show. I still hate war, but I don't oppose the soldiers or their gadgets, just the political use of war as a proactive tool. I apply my pressure directly through my vote and political contributions and activities, mostly involving the Libertarian Party. ( en.wikipedia.org/wiki/Libertarian_Party_(United_States) ) But others have not taken this trajectory. The founder and coiner of cybernetics, Norbert Wiener, after doing war work for World War II, wrestled with his conscience in the 1950s at MIT and ended up rejecting all government funding, in contrast to most of his colleagues, who were cooking up all sorts of ways to get Army funding. The Lincoln Lab was born in this climate. This story is told in the recent biography "Dark Hero Of The Information Age: In Search of Norbert Wiener The Father of Cybernetics" (2004) by Flo Conway and Jim Siegelman. ( www.amazon.com/exec/obidos/ASIN/0738203688/hip-20 ) Bucky Fuller loved the Navy and the education he got there, but he hated war. He wrote against building weaponry with technology, encouraging "livingry" instead, such as cheap mass-produced shelter. He pointed out that our airplane designs evolved fastest in a real "shooting war" and complained that we weren't enlightened enough to somehow MOTIVATE OURSELVES to be as creative and resourceful as we can be WITHOUT bullets zipping past us. Recently I read a well-researched historical work, "What the Dormouse Said: How the 60s Counterculture Shaped the Personal Computer Industry" (book, 2005) by John Markoff, ( www.amazon.com/exec/obidos/ASIN/0143036769/hip-20 ) and I was reminded that the PC revolution was fought, especially in the early days, by people with very specific POLITICAL motivations. 
Groups like the People's Computer Company, the Community Bulletin Board and the Homebrew Computer Club, all on the San Francisco Peninsula in the 1970s, were coalitions of people who refused to do war work and people who did it eagerly to get the funding for their radical ideas. And they all wanted to FEED YOUR HEAD, man. The goal was intelligence amplification, ability augmentation and individual empowerment. As much as it sounds like a 60s cop-out, I have found that "working to change the system from within" is a fruitful approach. The work I have done post-9/11 for my friend Dr. Dave Warner and Mindtel ( www.mindtel.com ) ( www.well.com/~abs/HIP/Mindtel/VPHT2.html ) was based on a shared vision of repurposing the military for humanitarian and public health work.


We cannot expect to make everyone our friend, but we can try to make no one our enemy. Those who would be our adversaries, we invite to a peaceful competition -- not in conquering territory or extending dominion, but in enriching the life of man. ... Let us build a structure of peace in the world. -- first inaugural address of Richard Nixon ( www.yale.edu/lawweb/avalon/presiden/inaug/nixon1.htm ) quoted in a display at the Nixon Library ( www.well.com/~abs/Cyb/4.669211660910299067185320382047/nixon_inaugural.jpg ) ( en.wikipedia.org/wiki/Nixon_Library ) (I was there to see the Walt Disney backyard train) ( en.wikipedia.org/wiki/Carolwood_Pacific_Railroad )

In 1976 Seymour Melman wrote a book I never did read, "The Permanent War Economy: American Capitalism in Decline," ( www.amazon.com/exec/obidos/ASIN/0671222619/hip-20 ) but the title stayed with me. I realized that is what we are in danger of becoming once again. We went from a Cold War we couldn't fight to a War on Terror we can't win.

And I like to recall President Dwight D. Eisenhower's "cross of iron" speech, delivered in 1953 shortly after he took office; ( www.eisenhower.utexas.edu/chance.htm ) in his 1961 farewell address he would also warn against undue political influence by the military-industrial complex. The ex-general, who'd won Europe for us in WWII, said:

Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities. It is two electric power plants, each serving a town of 60,000 population. It is two fine, fully equipped hospitals. It is some 50 miles of concrete highway. We pay for a single fighter with a half million bushels of wheat.
We pay for a single destroyer with new homes that could have housed more than 8,000 people. This ... is the best way of life to be found on the road the world has been taking. This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron.

We have changed course several times since Eisenhower spoke those words. We have turned the hinge of history. We can do it again.

========================================================================
newsletter archives: www.well.com/~abs/Cyb/4.669211660910299067185320382047
========================================================================
Privacy Promise: Your email address will never be sold or given to others. You will receive only the e-Zine C3M from me, Alan Scrivener, at most once per month. It may contain commercial offers from me. To cancel the e-Zine send the subject line "unsubscribe" to me.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I receive a commission on everything you purchase from Amazon.com after following one of my links, which helps to support my research.
========================================================================
Copyright 2007 by Alan B. Scrivener