=======================================================================
Cybernetics in the 3rd Millennium (C3M) -- Volume 2 Number 3, Mar. 2003
Alan B. Scrivener --- www.well.com/~abs --- mailto:abs@well.com
=======================================================================
Biological Computing: The Next Big Thing?
(Part Two)
(If you missed Part One see the C3M e-Zine archives -- the link is at
the end of this article.)
On December 5, 2002, at about noon, I walked into the building now
called the Skaggs Institute, formerly called the Molecular Biology
building, at the Scripps Research Institute (TSRI). TSRI is
the largest private research institute in the United States, and has
been the source of a number of breakthroughs in the life sciences. I
have been visiting Scripps since 1988 when I worked for Stellar
Computer, a now-defunct maker of "graphics supercomputers," and they
were one of our first customers. The Skaggs building has a large
atrium which showcases a glass-enclosed computer room right in the
middle of the building, flanked by 3D molecule models and renderings.
Every time I've been there some kind of state-of-the-art supercomputer
has been sitting in that enclosure, bathed in track lights, looking
very high-tech and expensive. I'm told that vendors compete to see
whose hardware ends up on display there.
TSRI might be more famous if it weren't for an accident of fate. In
early 1993 some of their researchers were part of a group of 31
scientists in 8 American institutes who found a possible genetic basis
for the disease known as Amyotrophic Lateral Sclerosis (ALS), more
commonly called Lou Gehrig's disease after the famous baseball player
it killed. It usually strikes people in their 40s or 50s, causing
degeneration of neurons in the brain and spinal cord that control
muscles. About 10% of cases are inherited, and these scientists found
that a gene called SOD1 was mutated in a fifth of those cases. This
was a big deal at the time because it was one of the first hard cases
of a disease being tied to a specific DNA sequence. The
group notified the press, and the discovery was due to go into
"heavy rotation" on CNN Headline News, but on the same day a little
girl fell down a well, and that became the story that fit into the
cable channel's human interest & medical news time slot near the end
of their 30-minute format. The Lou Gehrig's disease story was bumped,
and the next day it wasn't "news" any more.
(Reference: Deng, H.-X., Hentati, A., Tainer, J., Iqbal, Z.,
Cayabyab, A., Hung, W.-Y., Getzoff, E. D., Hu, P., Herzfeldt, B.,
Roos, R. P., Warner, C., Deng, G., Soriano, E., Smyth, C.,
Parge, H. E., Ahmed, A., Roses, A. D., Hallewell, R. E.,
Pericak-Vance, M. A., and Siddique, T. (1993) Science 261,
1047-1051, "Amyotrophic lateral sclerosis and structural defects
in CuZn superoxide dismutase.")
This is sort of a cultural parable. The little-girl-down-the-well
story was immediate and emotional, and so it was selected by our
news reporting system, but the DNA story was far more profound
in its ultimate effect on humanity.
Now I was walking into the place where part of that discovery was
made, in offices and labs on both sides of the main atrium. The
reason I was here was that I wanted a reality check. I had been
struggling to integrate biology into my pantheon of "whole systems"
thinking for decades, and most recently had been following the advice
of Art Olson at TSRI, in early 2001, to explore "biological
computing." I wanted to know if I was getting it right. So far I'd
dredged up something "old" and something "new," Artificial Life
(A-Life) and Support Vector Machines (SVM). I went back to reading
Kevin Kelly's 1994 book (which I'd never finished), "Out of Control:
The New Biology of Machines, Social Systems and the Economic World."
( www.amazon.com/exec/obidos/ASIN/0201483408/hip-20 )
It was reminding me how -- in 1994 -- computer scientists were
simulating biological systems and producing remarkable results,
including Darwinian evolution, that biologists seemed uninterested in.
I'd also, in 2001, scripted and produced a video to explain the
technology of a start-up company (now defunct) that was using the new
mathematical method of Support Vector Machines (SVM) to aid in cancer
diagnosis. (You can see the whole screenplay at:
www.well.com/~abs/Biowulf/biowulf_screenplay.html
Copies of the video are available at cost by request.)
Here is an excerpt from the screenplay:
We were fortunate to receive 62 frozen tissue samples from
a hospital that were biopsies of patients with suspected
colon cancer. Along with the samples was a file on each
patient covering 30 years of follow-up testing, so that it
was possible for each sample to know the "right answer" in
advance -- whether the patient got colon cancer or not.
With modern genetic analysis techniques we were able to
subject each sample to 2000 gene probes, to determine for
each of 2000 genes whether that gene was silent, active or
hyperactive in the sample. In each case this was turned
into a number from zero to one.
...the goal is
to produce a formula -- what we call the Decision Function
-- to sort patients correctly into those who will and won't
get the cancer, using only the information on the list.
...
In the paper "Gene Selection for Colon Classification
using Support Vector Machines" by Isabelle Guyon, Jason
Weston, Stephen Barnhill, and Vladimir Vapnik, the story
is told of these 62 biopsies samples...
They report that SVM is better than the next best known
technique with 95.8% confidence. Furthermore, the SVM
used 8 instead of 32 genes to make the classification,
after recursive feature elimination, and still had a
lower error rate.
Remember that this was from frozen tissue samples that
were combined with 30 years of follow-up patient data.
This stuff can save lives!
It was also interesting to find that other methods
falsely associated the presence of skin cell genes
with cancerous cells, just because cancerous tissue
samples tended to have more skin cells while healthy
samples had more muscle cells. The SVM method did not
make this false correlation. Also, the SVM method
did correlate a lack of cancer with the presence
of genes from a tiny parasite, a protozoa named
Trypanosoma, indigenous to Africa and South America.
To quote the paper: "We thought this may be an anomaly
of our gene selection method or an error in the data,
until we discovered that clinical studies report that
patients infected by Trypanosoma -- a colon parasite
-- develop resistance against colon cancer."
So what I was documenting was the use of advanced computational
tools, based on up-to-the-minute math, to re-examine historical
diagnosis data and then perform new diagnoses automatically. It
was being done by mathematicians, not biologists. I wanted to know
if this kind of approach was a trend in bio-tech.
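For the curious, the core of the technique is easy to sketch in
code. Below is a minimal illustration of SVM classification with
recursive feature elimination, written with the scikit-learn library
and synthetic random data -- my own toy, not the start-up's pipeline
or the code from the Guyon et al. paper:
    # Minimal sketch of SVM-based gene selection with recursive feature
    # elimination (RFE).  Illustrative only: synthetic data stands in
    # for the 62 biopsies and 2000 gene probes described above.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_patients, n_genes = 62, 2000
    X = rng.random((n_patients, n_genes))   # expression levels in [0, 1]
    y = rng.integers(0, 2, n_patients)      # 1 = got colon cancer, 0 = did not

    # A linear SVM assigns a weight to each gene; RFE repeatedly refits
    # and drops the genes with the smallest weights until 8 remain.
    svm = SVC(kernel="linear", C=1.0)
    selector = RFE(estimator=svm, n_features_to_select=8, step=0.1)
    selector.fit(X, y)
    kept = np.flatnonzero(selector.support_)
    print("genes kept after recursive feature elimination:", kept)

    # The "Decision Function" then classifies patients from those 8 genes.
    print("cross-validated accuracy on the reduced gene set:",
          cross_val_score(svm, X[:, kept], y, cv=5).mean())
With random data the accuracy will of course hover around chance;
the point is only to show the shape of the computation.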
As a self-proclaimed generalist I tend to be exposed to the writing
of a lot of other self-proclaimed generalists. I find it vital to
get "reality checks" from actual practitioners in the fields I'm
reading about. "Out of Control" author Kevin Kelly and "Artificial
Life" author Steven Levy are just such generalists. So here I was
now sitting down in the office of Arthur J. Olson, Ph.D., Professor
of Molecular Biology and Director of Molecular Graphics Laboratory
at the Scripps Research Institute.
( www.scripps.edu/pub/olson-web/people/olson/home.html )
Art is an actual practitioner of information science in biology. This
was the same office where I'd met him in 1988, with its view of Torrey
Pines beach in La Jolla, near the hang-glider port and the Torrey
Pines golf course.
Art subscribes to this e-Zine and the first thing he wanted to talk
about was Wolfram's book "A New Kind of Science," and what I thought
of it. I gave him an abridged version of my views that ended up in
the e-Zine from Jan. 2003: "Why I Think Wolfram Is Right," which I
was working on at the time.
( www.well.com/~abs/Cyb/4.669211660910299067185320382047/c3m_0201.txt )
He challenged me on some of my conclusions. I was expressing an
unbridled optimism that performing computational experiments like
the cellular automata explorations in Wolfram's book would lead
directly to new understandings of things like, say, how many ways
DNA can encode protein structures that are viable. Art was
skeptical. He thought it would probably lead to nothing a
biologist could use.
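For readers who haven't seen Wolfram's book, a "computational
experiment" of this sort is literally a few lines of code: a row of
cells, each 0 or 1, updated in lockstep from each cell's neighborhood
according to a fixed rule table. Here is a minimal sketch in Python
(the rule number and grid size are arbitrary choices of mine):
    # Elementary cellular automaton: each cell is updated from itself and
    # its two neighbors via an 8-entry rule table encoded in an integer.
    import numpy as np

    def step(row, rule=110):
        left, right = np.roll(row, 1), np.roll(row, -1)
        neighborhood = 4 * left + 2 * row + right   # value 0..7
        return (rule >> neighborhood) & 1           # look up bit of the rule

    row = np.zeros(64, dtype=int)
    row[32] = 1                         # start from a single live cell
    for _ in range(32):
        print("".join(".#"[c] for c in row))
        row = step(row)
Running it prints a growing triangle of intricate structure emerging
from a one-line rule.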
This led to Art making the assertion that, given the basic
assumptions of physics, you cannot derive the chemistry we have
in this universe, and that likewise, given the assumptions of
chemistry, you can't derive the biology we have on this planet.
This caught me by surprise. I'd been led to believe that it all
can be derived from quantum mechanics. The field of "ab initio"
chemistry grew up with supercomputing, and in my Stellar days I saw
some work done by people like Aron Kuppermann at Caltech,
( chemistry.caltech.edu/faculty/kuppermann/index.html )
and the folks at Pacific Northwest Laboratory's Molecular Science
Computing Facility,
( www.emsl.pnl.gov:2080/capabs/mscf/index.html )
and it seemed firmly based on the idea that you started with a
supercomputer and some first principles like Quantum Electro-Dynamics
(QED), and before you knew it you were predicting the shape of the
artificial banana flavor molecule, and later, who knows, maybe the
entire text of "The Alchemist" by Ben Jonson.
( www.amazon.com/exec/obidos/ASIN/0192834460/hip-20 )
Even "The Quantum Mechanical Universe" videos from Caltech with
computer graphics by Jim Blinn showed the derivation of the orbitals
of atoms and, ultimately, the periodic table from quantum principles.
Didn't it?
( www.themechanicaluniverse.com/mu151.htm?13,7 )
But the way Art explained it, the missing piece of the puzzle in each
case was natural history. (Where had I heard that before?) With
physics and the natural history of the universe you might get
chemistry. With chemistry and the natural history of life on earth
you might get biology.
This is not to say that physics is useless to the chemists or that
chemistry is useless to the biologists, Art went on. You can use
physics to build models of how molecules will behave, and use
chemistry to build models of how life will behave, and they work.
You just can't derive each science from the first principles of
the previous one.
On the problem of deriving biology from chemistry, Art's view seemed
right to me. I realized it was sort of like on the "Battle Bots"
television show: robot A could beat robot B, which could beat robot C,
which could beat robot A (non-transitive dominance). Which robot
triumphed depended on the order of their combat, which was the
"natural history" of "Battle Bots." Evolution works the same way.
But what about the impossibility of getting chemistry from physics?
I did some web research and concluded this isn't just Art's opinion;
there is a growing consensus. See "The New Philosophy of Chemistry and
Its Relevance to Chemical Education" by Eric R. Scerri, UCLA Department
of Chemistry and Biochemistry, 19 February 2001:
( www.uoi.gr/cerp/2001_May/11.html )
In the past the reduction of one science to another one was
examined, in philosophy of science, by asking whether the
axiomatized laws of, say chemistry, could be strictly derived
from those of the reducing science, physics (Nagel, 1961).
However nobody has ever succeeded in axiomatizing the laws of
these two sciences, let alone showing that the necessary
derivation between the two formalized sets of laws exists.
It is not even very clear whether chemistry has any laws to
speak of apart from a few such as maybe the Periodic Law
(Scerri, 1997a, 1998a). These failures have not deterred
philosophers from assuming until quite recently that chemistry
does indeed reduce to physics.
But actually the periodic law contrary to popular textbook
presentations cannot be fully reduced to quantum mechanics.
Although quantum mechanics and the Pauli Exclusion Principle
can be used to explain the number of electrons which each shell
around the atomic nucleus can accommodate, it has not yet been
possible to predict the precise order of electron shell filling
from first principles (Scerri, 1997 b). The fact that the 4s
orbital begins to fill before 3d is an empirical fact, observed
in spectroscopic experiments, which can be accommodated into the
theory but not strictly derived (Scerri, 1994; Melrose & Scerri,
1996).
In addition, the move away from positivism in philosophy has
opened the path for a more naturalistic form of philosophy of
science which examines what scientists themselves might mean
by attempts at reduction. To chemists and physicists the
attempt to reduce chemistry is to be found in quantum chemistry,
which began to develop with the work of Heitler and London
immediately following the birth of quantum mechanics. With
the advent of high powered computation there has been an
increasing reliability in the predictions made in this field.
Any up-to-date examination of the reduction or otherwise of
chemistry must therefore examine the methodology, successes
and limitations of ab initio quantum chemistry.
Still on the issue of reduction, the question of whether
biology reduces to chemistry is one which is increasingly
being answered affirmatively due to the success of molecular
biology in accounting for the nature and mechanisms of heredity.
But even within this rather limited part of biology serious
problems remain to be solved if we are to say that reduction
has been successful. It seems as if we now comprehend the
chemical basis for the transcription and translation of DNA.
The fact remains that in carrying out these processes DNA
molecules must use a host of proteins which are themselves
generated by the DNA. Where do the first proteins come from
to allow the cycle to begin? Although some steps have been
taken towards answering this and similar questions, the
reduction of biology to chemistry is clearly in need of
reassessment (Rosenberg, 1994).
I've given this a lot of thought, and I'm still getting over it. I'm
coming to realize that a lot of the conclusions one can get from
quantum physics are built into the premises. Oh well.
Finally we moved on from the discussion inspired by Wolfram,
and I was able to get to the question I'd come there to ask:
Just what had he meant in 2001 when he spoke of "biological computing"
being the next big thing?
Well, first there are activities in biology which require computing.
I thought of Trega, a company where my wife worked as a temp for a
while (which was bought by Lion in 2001). They make products to
replace some phases of clinical drug trials with computations. Product
information on their web site
( www.trega.com/ )
says:
Current tools to evaluate compound pharmacokinetics are too slow,
inefficient, unpredictive, and cost ineffective to estimate
bioavailability at the lead optimization stage.
To help overcome these hurdles, LION bioscience introduces the
Absorption Module for the iDEA(tm) (In Vitro Determination for the
Estimation of ADME) Simulation System.
iDEA is a computational model developed to predict human oral
drug absorption from a compound's solubility and permeability.
Another example of compute-intensive biology would be the
simulation of how two proteins interact, which is the kind
of work done a lot at Scripps.
For example, in the "Supercomputing Institute Research Bulletin"
issue dated Summer 1997
( www.msi.umn.edu/general/Bulletin/Vol.13-No.4/Rational.html )
we read:
On April 7-11, 1997, the University of Minnesota's Institute for
Mathematics and its Applications and the Supercomputing Institute
sponsored a Workshop on Mathematical and Computational Issues in
Rational Drug Design...
Mike Pique of the Scripps Research Institute in La Jolla, Calif.
discussed a convolution algorithm for the rapid computation of
the electrostatic potential energy between two proteins when their
relative orientation and separation must be optimized. Excellent
results were obtained on a 256-node Intel Paragon parallel
computer. He also discussed the future of dataflow visualization
environments in which scientific visualization is merging with
three technical innovations: visual object-oriented programming
(VOOP) with graphical editing tools; the unification of computer
imaging (image-to-image or image-to-data), including texture
mapping, spatial filtering, and computer graphics (data-to-image);
and the Internet. The new features introduced by the Internet, he
said, are an emphasis on communication, distributed resources, and
computer-architecture neutrality. Pique stressed the advances of
Java, which allows pharmaceutical chemists to e-mail 3-D images.
Pique used the metaphor "fog of excitement" to describe the
flourish of activity in this field, but he believes that taking
advantage of advances in digital technology by combining the
exploding resources of the Internet and desktop 3-D graphics and
imaging will provide a constructive advance.
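I can't reproduce Pique's algorithm here, but the general trick
behind this kind of rapid calculation is worth sketching: sample each
protein onto a 3-D grid, and then by the convolution theorem a few
FFTs score every relative translation of the two grids at once. The
grids below are random stand-ins, not real electrostatic potentials:
    # Sketch of FFT-based scoring of two molecules over all relative
    # translations at once.  Random grids stand in for the potentials;
    # this illustrates the general convolution trick, not Pique's code.
    import numpy as np

    n = 32
    receptor = np.random.rand(n, n, n)   # e.g. potential of protein A
    ligand = np.random.rand(n, n, n)     # e.g. charges of protein B

    # Correlation in real space = multiplication in Fourier space,
    # so the cost drops from O(n^6) to O(n^3 log n).
    scores = np.fft.ifftn(np.fft.fftn(receptor) *
                          np.conj(np.fft.fftn(ligand))).real

    best = np.unravel_index(np.argmax(scores), scores.shape)
    print("best relative translation (grid units):", best)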
Next, there are increasingly many situations in biology where there
are unmanageably large amounts of data, for example in genome
sequencing (more on that later) or from the new microarray
instruments which can deliver a "fire hose" of genetic data.
The "DNA microarray (genome chip) web site" has a lot of information
about this family of breakthroughs.
( www.gene-chips.com/ )
This simple... site has been created and maintained by Leming
Shi, Ph.D. You'll find the basics on DNA microarray technology
and a list of academic and industrial links related to this
exciting new technology.
Some operations combine heavy computation with large amounts of data;
for example, in X-ray crystallography compute-intensive inverse
Fast Fourier Transforms (FFTs) must be applied to X-ray diffraction
data to reconstruct the crystalline source of the diffraction, and
Computer Aided Tomography (CAT) scans similarly do large amounts of
computing on large amounts of data to reconstruct images from
multiple X-ray slices.
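As a toy illustration of the crystallography case: the diffraction
data are essentially the Fourier coefficients of the crystal's
electron density, so -- once the notorious "phase problem" is solved,
which I cheat on below by generating the phases myself -- an inverse
FFT gives the density back:
    # Toy X-ray crystallography reconstruction: make a fake electron
    # density, take its Fourier transform (what diffraction measures,
    # plus phases), then recover the density with an inverse FFT.
    import numpy as np

    n = 32
    x, y, z = np.indices((n, n, n))
    density = (np.exp(-((x - 10)**2 + (y - 16)**2 + (z - 16)**2) / 8.0) +
               np.exp(-((x - 22)**2 + (y - 16)**2 + (z - 16)**2) / 8.0))

    structure_factors = np.fft.fftn(density)
    reconstruction = np.fft.ifftn(structure_factors).real
    print("max reconstruction error:",
          np.abs(reconstruction - density).max())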
"I guess what I'm saying," Art went on, "is that biological computing
isn't the next big thing, it's here." He suggested that the best
example of this might be the story of Craig Venter, currently Chief
Scientific Officer of Celera, the company he founded to sequence the
human genome and market the results. He pioneered the
compute-intensive "whole genome shotgun" approach to assembling the
genome and was able to achieve
results much faster in his private company than the combined efforts of
the US and UK governments, who -- as near as I can tell -- only won the
race because they moved the finish line. But then, faced with
competition from the government-funded labs that were giving away their
data, Celera struggled to find a business model. Subscription
services? Partnering with drug companies? Possibly becoming a drug
company itself?
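The computational heart of the shotgun approach is easy to
caricature: the genome is read as millions of short overlapping
fragments in unknown order, and software stitches them back together
by finding overlaps. Here is a toy greedy assembler of my own
devising -- real assemblers must also cope with sequencing errors,
repeats, and hundreds of millions of reads:
    # Toy shotgun assembly: repeatedly merge the two fragments with the
    # largest suffix/prefix overlap until one contig remains.
    import random

    def overlap(a, b):
        """Length of the longest suffix of a that is a prefix of b."""
        for k in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:k]):
                return k
        return 0

    def assemble(frags):
        frags = list(frags)
        while len(frags) > 1:
            k, i, j = max((overlap(a, b), i, j)
                          for i, a in enumerate(frags)
                          for j, b in enumerate(frags) if i != j)
            merged = frags[i] + frags[j][k:]
            frags = [f for n, f in enumerate(frags) if n not in (i, j)]
            frags.append(merged)
        return frags[0]

    genome = "ATGGCGTACGTTAGCCGATACGGATTACAGGCATTAGCAT"
    reads = [genome[i:i+12] for i in range(0, len(genome) - 11, 4)]
    random.shuffle(reads)                 # reads arrive in no particular order
    print(assemble(reads) == genome)      # prints True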
More information about this fascinating story can be found at the
following web sites:
Frequently asked questions about the human genome (excellent!).
( www.genoscope.cns.fr/externe/English/Questions/FAQ.html )
A wonderful timeline on the human genome from the news staff of
"Science" magazine.
( www.sciencemag.org/cgi/content/full/291/5507/1195 )
Science's News staff tells the history of the quest to sequence
the human genome, from Watson and Crick's discovery of the double
helical structure of DNA to today's publication of the draft sequence.
A tutorial from the laboratory of Dr. Michal Zurovec, Department of
Physiology, Institute of Entomology, Academy of Sciences & Faculty
of Biological Sciences, University of South Bohemia, Ceské Budejovice,
Czech Republic.
( www.entu.cas.cz/fyziol/seminars/genome/genome.html )
Notes from the lecture "Whole Genome Shotgun" by Michael F. Kim
and Rich May of Stanford.
( www.stanford.edu/class/cs262/Notes/ln8.pdf )
Whole Genome Shotgun (WGS) is a successful method for sequencing
genomes. This lecture will discuss how WGS works, some of the
challenges involved, and solutions to these challenges...
An article from "Wired" magazine.
( www.wired.com/news/technology/0,1282,35494,00.html )
After Celera announced Thursday morning that it had finished
the sequencing phase of one person's genetic code, its shares
(CRA) skyrocketed, catching a host of other biotech stocks in
its upstream.
Celera's own web site.
( www.celera.com/ )
A UPI news story on the programmer who coded the assembly program
for the public Human Genome Project's draft.
( www.medserv.no/modules.php?name=News&file=article&sid=820 )
SANTA CRUZ, Calif. Aug. 13 (UPI) -- Without the efforts of a
graduate student, the human genome map may still not be assembled.
James Kent, a student at the University of California, Santa Cruz,
wrote a computer program to assemble the massive collection of DNA
pieces that created the first public draft of the human genome, the
final step necessary for the Human Genome Project to declare
completion on June 26, 2000.
"The pieces of the map came from the International Human Genome
Sequencing Program as well as many other labs," Kent told United
Press International. "We used the computer program to assemble
them into a nearly complete whole." Kent and colleague David
Haussler describe the creation of that computer program as a
"surprisingly simple" approach to "the most important
puzzle-solving exercise in recent history" in the August
issue of the journal Genome Research. The computer program,
called GigAssembler, had to trim and assemble the nearly
400,000 pieces of human DNA generated by the Human Genome
Project over a decade.
The UCSC Genome Bioinformatics Site
( genome.ucsc.edu/ )
This site contains working drafts for the human, mouse, and
rat genomes.
(One of the things I kept running across in my research on the Human
Genome Project was mention of the technique known as Hidden Markov
Models, or HMM. I have no idea what this is. I suppose it has
something to do with Markov Machines, which I know all about thanks
to my Information Science education. I also remember the Support Vector
Machine folks contrasting their methods with HMM. Then I ran across
an article by Robert Edgar called "Simultaneous Sequence Alignment
and Tree Construction Using Hidden Markov Models." Could this be
the same Bob Edgar I studied under at UCSC? I emailed him, and it
turns out he's a different Bob Edgar, but he was nice enough to
point me at the book "Biological Sequence Analysis: Probabilistic
Models of Proteins and Nucleic Acids" by Richard Durbin, et al,
as a good introduction to HMMs. More on this as I figure it out.
www.amazon.com/exec/obidos/ASIN/0521629713/hip-20 )
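In the meantime, here is the flavor of the thing as best I can make
out: an HMM posits hidden states (say, "GC-rich island" versus
"background" DNA) that emit the observed letters with state-dependent
probabilities, and the Viterbi algorithm recovers the most probable
sequence of hidden states from the letters alone. A toy sketch, with
all the states, probabilities and sequence invented by me rather than
taken from the Durbin book:
    # Toy Hidden Markov Model: two hidden states emit DNA letters with
    # different probabilities; Viterbi finds the most probable state path.
    import math

    states = ["island", "background"]
    start = {"island": 0.5, "background": 0.5}
    trans = {"island":     {"island": 0.9, "background": 0.1},
             "background": {"island": 0.1, "background": 0.9}}
    emit = {"island":     {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
            "background": {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30}}

    def viterbi(seq):
        v = {s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}
        back = []
        for x in seq[1:]:
            prev, v, ptr = v, {}, {}
            for s in states:
                p = max(states, key=lambda q: prev[q] + math.log(trans[q][s]))
                v[s] = prev[p] + math.log(trans[p][s]) + math.log(emit[s][x])
                ptr[s] = p
            back.append(ptr)
        path = [max(states, key=lambda s: v[s])]
        for ptr in reversed(back):          # trace the best path backwards
            path.append(ptr[path[-1]])
        return list(reversed(path))

    seq = "ATATATGCGCGCGCATATAT"
    print("".join("I" if s == "island" else "." for s in viterbi(seq)))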
The next question I had for Art was: if a young person were starting
their college education today and wanted to get into biological
computing, what course of study would he recommend?
He suggested a curriculum that would include chemistry, biology and
information science. (This sounds a lot like the UCLA Cybernetics
major!)
I suppose this might have been a logical place to conclude our
interview, but I think Art sensed I was disappointed that our
conversation hadn't covered Artificial Life (A-Life), genetic
algorithms and similar topics I'd picked up from Kelly and Levy
in my reading, so he filled me in on a few more things.
I'd had big hopes for A-Life when I first read about it. I thought
it would make great theoretical contributions to evolutionary
theory. Art downplayed the theoretical contributions of A-Life, but
he said biologists are now familiar with it and it has practical
applications. He said it is now not uncommon to use genetic
algorithms to solve biological problems. Not only that, but
computational biologists have played with the parameters; for
example, in some cases they introduce the inheritance of acquired
traits that Lamarck thought occurred in natural evolution. There
is still no evidence of Lamarckian evolution in nature, but it
works in computers.
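To show what "using a genetic algorithm" amounts to, here is a
bare-bones sketch: a population of candidate genomes is scored by a
fitness function, and the fitter ones are recombined and mutated to
breed the next generation. The optional "Lamarckian" step writes an
individual's lifetime improvement back into its genome before
breeding. The fitness function is a trivial stand-in of mine, not a
real biological problem:
    # Bare-bones genetic algorithm with an optional Lamarckian step.
    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 40, 30, 60

    def fitness(g):
        return sum(g)                      # toy objective: maximize 1 bits

    def mutate(g, rate=0.02):
        return [1 - b if random.random() < rate else b for b in g]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def lamarckian_tweak(g):
        # "Learning during life": improve the individual, and (the
        # Lamarckian part) let the improvement be inherited.
        zeros = [i for i, b in enumerate(g) if b == 0]
        if zeros:
            g = list(g)
            g[random.choice(zeros)] = 1
        return g

    def evolve(lamarckian=False):
        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
               for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            if lamarckian:
                pop = [lamarckian_tweak(g) for g in pop]
            pop.sort(key=fitness, reverse=True)
            parents = pop[:POP_SIZE // 2]  # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            pop = parents + children
        return max(fitness(g) for g in pop)

    print("best fitness, Darwinian: ", evolve())
    print("best fitness, Lamarckian:", evolve(lamarckian=True))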
He told me about some other things I hadn't known about.
There have been a number of breakthroughs in the other kind
of biological computing: building computers out of biological
components. In the mid-1990s Dr. Adleman at the University of
Southern California (USC) built a "DNA computer," and quite recently
has refined it. USC news has released a lay person's account.
( www.usc.edu/dept/molecular-science/uscnews.htm )
Engineering: Using 'nature's toolbox,' a DNA computer solves a
complex problem
by Matthew Blakeslee
A DNA-based computer has solved a logic problem that no person
could complete by hand, setting a new milestone for this infant
technology that could someday surpass the electronic digital
computer in certain areas.
The results were published in the online version of the journal
Science on March 14 and will then run in the print edition.
The new experiment was carried out by USC computer science
professor Leonard Adleman, who made headlines in 1994 by
demonstrating that DNA -- the spiraling molecule that holds
life's genetic code -- could be used to carry out computations.
The idea was to use a strand of DNA to represent a math or logic
problem, then generate trillions of other unique DNA strands,
each representing one possible solution. Exploiting the way DNA
strands bind to each other, the computer can weed out invalid
solutions until it is left with only the strand that solves the
problem exactly.
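In silicon the same "generate every candidate strand, then weed out
the bad ones" strategy looks like the sketch below. The
three-variable logic formula is a toy of my own; Adleman's 2002
experiment handled a 20-variable, 24-clause problem of this kind
in DNA:
    # The DNA computer's strategy transplanted to silicon: make one
    # "strand" per possible assignment, then discard strands clause by
    # clause, the way each DNA separation step discards molecules that
    # lack the required subsequence.
    from itertools import product

    # (x1 OR not x2) AND (x2 OR x3) AND (not x1 OR not x3)
    clauses = [[(0, True), (1, False)],
               [(1, True), (2, True)],
               [(0, False), (2, False)]]

    candidates = list(product([False, True], repeat=3))   # all 8 "strands"
    for clause in clauses:
        candidates = [c for c in candidates
                      if any(c[var] == val for var, val in clause)]

    print("surviving assignments (x1, x2, x3):", candidates)
Of course the point of doing it in DNA is massive parallelism: every
candidate strand is tested at once in the test tube.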
In the fall of 2001 Benenson and others published their results
on building a programmable finite automaton out of nucleic acids
and enzymes.
( www.nature.com/cgi-taf/DynaPage.taf?file=/nature/journal/v414/n6862/abs/414430a0_r.html )
Nature 22 Nov 2001 (subscription required)
Nature 414, 430 - 434 (2001); doi:10.1038/35106533
Programmable and autonomous computing machine made of biomolecules
Yaakov Benenson, Tamar Paz-Elizur, Rivka Adar, Ehud Keinan, Zvi
Livneh & Ehud Shapiro
Here we describe a programmable finite automaton comprising DNA
and DNA-manipulating enzymes that solves computational problems
autonomously
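Their device is a two-state, two-symbol finite automaton. In software
the same machine is only a few lines; the particular transition table
below (accept strings containing an even number of b's) is an example
of my own, not necessarily one of the programs from their paper:
    # A two-state, two-symbol finite automaton of the kind Benenson
    # et al. built from DNA and enzymes.
    transitions = {("S0", "a"): "S0", ("S0", "b"): "S1",
                   ("S1", "a"): "S1", ("S1", "b"): "S0"}
    accepting = {"S0"}

    def run(word, state="S0"):
        for symbol in word:
            state = transitions[(state, symbol)]
        return state in accepting

    for word in ["abab", "abb", "bb", "a"]:
        print(word, "->", "accept" if run(word) else "reject")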
Recently Dr. John S. Kauer, Professor of Neuroscience and Anatomy
at the Department of Neuroscience, Tufts University School of
Medicine, has announced the creation of an "artificial nose."
( www.neurosci.tufts.edu/Kauer/ )
Another promising development is what is called "microfluidic
large-scale integration," which amounts to a miniaturized, automatic,
computer-controlled biology lab that can do all kinds of amazing
things. See Science Magazine Oct. 18, 2002. (Subscription required.)
( www.sciencemag.org/cgi/doi/10.1126/science.1076996 )
A lay person's summary is at "Nature" magazine's web site.
( www.nature.com/nsu/020923/020923-10.html )
Researchers at the California Institute of Technology have built
a chip that shunts liquids, just as silicon microchips shunt
current. It is a significant step towards the long-held hope of
a lab-on-a-chip replacing the automated instruments that currently
carry out chemical analysis, such as drug screening or
environmental testing.
The chip, developed by Stephen Quake and co-workers, is the first
integrated microfluidic circuit comparable to an electronic
integrated circuit. It contains more than 3,500 transistor-like
switches. The ultimate complexity and application of this
technology is "limited only by one's imagination", they say.
(It seems like most of the breakthroughs Art was telling me about
were reported in "Science" or "Nature" and I resolved to read both
magazines more often.)
Art mused that one of the biggest problems facing bio-tech is finding
a business model that makes Wall Street happy. Wall Street wants
dramatic results like an AIDS cure, a cancer cure, or another Viagra,
something that will guarantee continued economic growth for a long
time. If you make a breakthrough that can help people, but will
result in a constant (not exponentially growing) stream of income,
Wall Street doesn't want to hear about it. The pressure is to
"grow-grow-grow."
Again I found myself remembering Bateson's assertion that in any
biological system, no variable can grow with an unlimited exponential
curve -- something will "crash" eventually.
The other big problem bio-tech faces, Art mentioned, is long-term
ethical concerns. Oh sure, we've all heard the flap about cloning.
But what about curing cancer? Anything wrong with that? How about
curing heart disease? These sound like laudable goals. But what if
the sum total effect of all these disease cures is a dramatic
increase of average human lifespan, say to 200 years? I mentioned
a science fiction novel I read recently, "Holy Fire" by Bruce Sterling,
which describes a world in which 90% of humans are over 100, and
catalogs a litany of potentially negative social effects from this.
( www.amazon.com/exec/obidos/ASIN/055357549X/hip-20 )
My time was up. We'd planned to go to lunch, but the conversation
got so engrossing in Art's office that we forgot to go. I said
goodbye and walked out through the beautiful atrium past the latest
supercomputers on display. I'd been given a lot to think about.
I was reminded of a quote from Gregory Bateson's 1966 speech, "From
Versailles to Cybernetics" (reprinted in "Steps To an Ecology of
Mind"):
I think that cybernetics is the biggest bite out of the fruit of
the Tree of Knowledge that mankind has taken in the last 2000
years. But most such bites out of the apple have proven to be
rather indigestible -- usually for cybernetic reasons.
( www.amazon.com/exec/obidos/ASIN/0226039056/hip-20 )
It looked like bio-tech was an even bigger bite out of the fruit of
the Tree of Knowledge. I had to think for a while about this, which
is why this column appears in the March 2003 issue of the e-Zine and
not the December 2002 issue.
One of the conclusions I reached was: what a difference a decade
makes. In 1993 biologists were suspicious of computer and information
scientists; now in 2003 they employ them and their techniques
routinely. A-Life is no longer an oddity; it's just another tool in
the chest. And biologists are making strides that may feed back into
computer technology. It may now be the information scientists' turn to
resist a new paradigm and finally cave in after a decade.
I wanted a longer view. I remember reading that esteemed biologist
Stephen Jay Gould passed away last year. I don't know that he
ever touched a computer in his life, but he certainly pumped a lot of
knowledge of natural history through his brain during his lifetime.
I tracked down one of the last collections of essays he published:
"The Lying Stones of Marrakech: Penultimate Reflections in Natural
History" (2002).
( www.amazon.com/exec/obidos/ASIN/0609807552/hip-20 )
In his essay "The Paradox of the Visibly Irrelevant" he writes
about an "urban legend" passed around by non-biologists: that no-one
has ever actually witnessed evolution in action because it's too
slow. He says in fact that biologists have lots of evidence of
evolution in action; the trouble is that it's too fast to explain the
types of changes we see in the fossil record.
This led me to the reflection that journal articles and bio-tech
newsletters are focused too much on the short term to see the big
trends. I realized that, in our civilization, it is the science
fiction writers who have the job of looking at the long view.
In fact there is a large literature of bio-tech inspired sci-fi,
including explorations of a number of "post-human" scenarios.
In a future issue of C3M I will look at some of these in detail.
=======================================================================
newsletter archives:
www.well.com/~abs/Cyb/4.669211660910299067185320382047/
=======================================================================
Privacy Promise: Your email address will never be sold or given to
others. You will receive only the e-Zine C3M unless you opt-in to
receive occasional commercial offers directly from me, Alan Scrivener,
by sending email to abs@well.com with the subject line "opt in" -- you
can always opt out again with the subject line "opt out" -- by default
you are opted out. To cancel the e-Zine entirely send the subject
line "unsubscribe" to me. I receive a commission on everything you
purchase during your session with Amazon.com after following one of my
links, which helps to support my research.
=======================================================================
Copyright 2003 by Alan B. Scrivener