as presented at Medicine Meets Virtual Reality II -- Interactive Technology and Healthcare: Visionary Applications for Simulation, Visualization and Robotics, January 27-30, 1994, at the San Diego Marriott Hotel and Marina, sponsored by UCSD School of Medicine
Alan B. Scrivener
abs@well.com
(c) 1993
The last five years have seen the emergence of a new method for
creating computer programs: visual programming, which augments
and sometimes replaces traditionally laborious programming
methods. Previously all programs were authored by keyboard-
editing source code text in a computer language. Now, by
programming visually -- interactively using a mouse to position
and connect software modules displayed on a graphics screen --
application developers are able to achieve productivity
improvements of up to two orders of magnitude, and end-users are
able to reconfigure and customize programs while they are running
without assistance from the original developers. As of this
writing at least four visual programming systems are commercially
available and widely used on workstation-class machines; the
oldest and most mature of these is the Application Visualization
System (AVS), introduced in 1989. Medical researchers (and others)
using this software tool have reported that it not only speeds
project completion, but in many cases has made possible
additional projects which would never have been attempted in the
first place without the productivity gains visual programming
provides. This paper describes a number of projects in medical
research which have used AVS, and the ways in which the tool has
helped them.
WHAT IS VISUAL PROGRAMMING?
A visual programming tool allows a software engineer to build a
computer application by creating a diagram on-screen, using the
mouse (or other pointing device) to position and "wire together"
basic building blocks, called modules, into an interconnected
data flow diagram. These modules may be little pieces of code
written in a traditional language (such as FORTRAN or C), or may
be themselves visually programmed from other more primitive sub-
modules. The resulting on-screen diagram then comprises the
visual program. Literally, the picture is the program. The
creator can execute this visual program at once, interacting with
it as it runs, and can save it to and restore it from a disk file
in a human-readable form. (The saved programs can also be run
in a more traditional non-interactive "batch" mode if needed.)
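To make the data flow idea concrete, here is a minimal sketch in
C (one of the languages AVS modules can be written in). It is
purely illustrative -- the module names and the trivial executor
are my own invention, not the actual AVS module API -- but it
shows the essential model: small pieces of code connected by
"wires" along which data is pushed.

  /*
   * Hypothetical sketch of the data flow model: "modules" are small
   * C functions, and the network records whose output feeds whose
   * input.  This is NOT the AVS API, just an illustration.
   */
  #include <stdio.h>

  typedef double (*module_fn)(double);     /* one input, one output */

  typedef struct {
      const char *name;
      module_fn   run;
  } Module;

  /* Three example modules a developer might have coded by hand. */
  static double read_scalar(double ignored) { (void)ignored; return 42.0; }
  static double scale_by_ten(double x)      { return x * 10.0; }
  static double print_value(double x)       { printf("result = %g\n", x); return x; }

  int main(void)
  {
      /* The "visual program": an ordered chain of modules, i.e. the
         picture the user would draw by wiring boxes on screen. */
      Module network[] = {
          { "read data", read_scalar  },
          { "scale",     scale_by_ten },
          { "display",   print_value  },
      };
      const int n = sizeof network / sizeof network[0];

      /* The executor pushes each module's output down the wire to
         the next module's input, just as the diagram implies. */
      double value = 0.0;
      for (int i = 0; i < n; i++)
          value = network[i].run(value);

      return 0;
  }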
This new way of programming augments and sometimes replaces the
traditional method of typing characters into a file to produce a
program in some high-level language which is first compiled
(translated into machine language by a compiler program) and then
linked in with other software libraries before it can be executed
(assuming there are no syntax errors which prevent the compiler
from successfully completing the translation).
The obvious appeal of visual programming tools is that they are
fun to use and easy to understand. What is not so obvious is
that this new methodology has additional, sometimes hidden
benefits. Rapid prototyping becomes possible, making it
practical to do early testing and demonstration, and to make
corrections sooner in the development cycle if the software
developer and the end user have mis-communicated in specifying
how the program should behave. The prototype can later be
further refined to produce the final application -- early work is
not wasted. Good coding practice becomes easier to follow:
modularity is rewarded, and code becomes more reusable, easier
to debug, and easier to document. Interactive error checking --
especially for data type mismatches -- catches many mistakes
immediately, or in some cases even prevents them. Maintenance is
easier, as is after-the-
fact user customization, because the visual programs are run-time
reconfigurable. The pace of work is much faster, since changes
made visually are immediately in effect and testable without the
need for a compilation stage. Projects which might have taken
weeks or months of traditional programming can often be completed
in hours or even minutes. (If this sounds like an exaggeration,
you should know that I've actually "fudged" this efficiency gain
factor downward to 100x -- despite my own repeated personal
experiences that it is consistently higher -- because it just
sounds so unbelievable.)
Another benefit of visual programs, originally unanticipated, is
that they are much easier to adapt to run on parallel processing
systems. A visual program preserves information about the
"dependencies" in an algorithm. This is information which the
computer must have available in order to run in parallel, across
multiple processors, without accumulating data errors. A
traditional program lacks this explicit dependency information,
and the compiler must first try to infer and recreate it, a
procedure fraught with peril, before attempting any parallel
execution.
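As a toy illustration of this point -- my own, not drawn from
any shipping visual programming product -- consider two modules
whose wires show that neither depends on the other. A scheduler
is then free to dispatch them on separate threads, and only has
to synchronize where the diagram says a downstream module needs
both results:

  /*
   * Hypothetical sketch: explicit dependencies make the parallelism
   * obvious.  left_module and right_module share no wires, so they
   * can run concurrently; the combining step waits for both.
   */
  #include <pthread.h>
  #include <stdio.h>

  static double left_out, right_out;            /* the two "wires" */

  static void *left_module(void *arg)  { (void)arg; left_out  = 2.0 * 3.0; return NULL; }
  static void *right_module(void *arg) { (void)arg; right_out = 4.0 + 5.0; return NULL; }

  int main(void)
  {
      pthread_t lt, rt;

      /* Independent branches of the diagram run at the same time. */
      pthread_create(&lt, NULL, left_module, NULL);
      pthread_create(&rt, NULL, right_module, NULL);
      pthread_join(lt, NULL);
      pthread_join(rt, NULL);

      /* The downstream module depends on both, so it runs only
         after the joins complete. */
      printf("combined = %g\n", left_out + right_out);
      return 0;
  }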
In fact, the benefits of visual programming are so numerous and
dramatic that one might ask, "Why didn't anybody do this before?"
The answer lies partly in the fact that only in the last decade
or so have we had affordable personal computers and workstations
equipped with the necessary resources: bit-mapped graphics
displays, mice or other pointing devices, and enough speed and
memory to present a complex interactive visual user interface.
But the answer also lies partly in the fact that
useful breakthroughs are still rare in the area of human-machine
interface, and are always obvious only in retrospect. Some
history will help put this in context.
THE EVOLUTION OF HUMAN-COMPUTER INTERFACES
In seeking to understand the somewhat belated development of the
visual programming methodology, it is useful to look at the
evolution of human-computer interfaces. (Of course, strictly
speaking engineering designs do not "evolve" the way that
biological organisms do, but the process of successive refinement
through an engineering cycle of design and test does resemble
nature's evolutionary cycle of genetic shuffling and mutation
followed by natural selection.) Progress in advancing the design
of human-computer interfaces has proceeded for the most part
through periods of very gradual improvement punctuated by sudden
jumps in capability, often triggered by "paradigm shifts" in the
underlying concepts, models and metaphors. This halting progress
would have been vastly accelerated if only we had first developed
(or discovered) what Warren McCulloch called "a calculus of
intention," * and Michael Arbib called a "biological
mathematics." [2] This would be a notation which mirrors the way
we represent algorithms (computational recipes) in our own minds.
But today we still lack such knowledge of our own internal
representations.
* This term was passed on to me by Dr. Gregory Bateson at the
University of California at Santa Cruz around 1975; I have been
unable to find it in McCulloch's collected works. I did locate
an essay on "a logical calculus of the ideas immanent in nervous
activity," in which he asserts: "What these [computing] machines
lack by contrast with the human brain is the ability to operate
in our own natural terms." [1]
We begin instead with the original conceptual model for the
programmable digital computer, described by mathematician Alan
Turing in 1936 [3]. (During World War II the British secretly
built the first implementation of this model, with Turing's help,
and used it in the "Ultra" project for breaking German codes.)
His original paper defined a "Turing machine" as the coupling
of three components: (1) a "finite state machine," which we would
call the Central Processing Unit (CPU) today, (2) a data storage
"tape" of arbitrary length, corresponding roughly to all of a
modern computer's main memory, disk drives, and other digital
storage media, and (3) access to an external "environment" to
provide the machine's inputs and receive its outputs, like
today's Input/Output (I/O) sub-systems.
This "environment" for the machine was generally assumed to be
just another machine, either deterministic or selecting from a
set of random behaviors. Turing apparently gave little thought to
the implications of linking computers and human beings, at least
in his original mathematical formalisms. In the jargon of
today's aerospace engineers who design "man-rated" systems for
aircraft and spacecraft, the theory of the Turing machine does
not deal with the "man in the loop" case. (Though -- to be fair
-- later, in Turing's more philosophical writings he did propose
the "Turing test," which provides an objective measure of
artificial intelligence; when a computer can sufficiently fool a
person as to its machine identity in a session of limited two-way
interaction it is said to have passed this test.)
Many theorists since Turing also tended to think of the computer
as a system which operates linearly in three steps: input,
process and output. A look at the Scientific American special
issue on computers and communication technology for 1966 [4]
finds all of the diagrams having this same structure:
+-------+ +---------+ +--------+
| INPUT | -> | PROCESS | -> | OUTPUT |
+-------+ +---------+ +--------+
Nowhere is the output connected back to the input through any
explicit mechanism at all, let alone through something as
complicated as a human being.
But in practice computer and human were and are often connected
in a loop:
+-------+ +---------+ +--------+
+--> | INPUT | -> | PROCESS | -> | OUTPUT | >--+
| +-------+ +---------+ +--------+ |
| |
| +-------+ |
+----------------< | HUMAN | <-----------------+
+-------+
and this loop of human-machine interaction has brought its own
problems. The first of these was the discovery of the
"debugging" process (called simply "pitfalls" at the time; the
actual name came later when the U.S. Navy's legendary Grace
Hopper found a moth stuck in one of a computer's relays.) Maurice
Wilkes of Cambridge was the original programmer of the Electronic
Delay Storage Automatic Calculator (EDSAC), the world's first
operational stored-program digital computer. In his Memoirs [5]
he recalls:
By June 1949 people had begun to realize that it was not so easy
to get the program right as had at one time appeared. I well
remember when this realization came to me with full force. The
EDSAC was located on the top floor of the building and the tape-
punching and editing equipment one floor below... I was trying
to get working my first non-trivial program... It was on one of
my journeys between the EDSAC room and the punching equipment
that "hesitating at the angle of the stairs" the realization came
over me with full force that a good part of the remainder of my
life was going to be spent finding errors in my own programs.
Computer historian Martin Campbell-Kelly reports that "no evidence
has yet emerged that anyone had conceived of the debugging
problem until programs were tried on real computers." [6] At
Cambridge, EDSAC programs were initially debugged by single-
stepping through the code and examining the bits in registers and
memory, a process called "peeping." As soon as the first
debugger was written for EDSAC by Stanley Gill in 1950, "peeping"
was outlawed at Cambridge because it was so inefficient and
wasteful
of the machine's valuable time.
When the first computers became commercially available in the
late 1950s to mid-1960s, this sequence of events was recapitulated.
Any commercial institution with a new computer soon found that it
was very expensive to operate. Some system managers complained
that the programmers were monopolizing the computer's main
control console to do program debugging. Seduced by the
responsiveness of direct interaction with the machine, the
programmers would sometimes thoughtlessly squander expensive
computer time, and this frequently prompted the managers to ban
programmers entirely from the computer room. They would be
required instead to submit their code in hand-written form to
non-programming clerks, the first "computer operators," who would
then key in the instructions and run the jobs. Presumably these
operators would feel no temptation to tinker with programs which
they weren't trained to comprehend. But in fact, this isolation
strategy often failed when operators started learning programming
on their own. (This whole syndrome is documented in a collection
of surviving corporate memos from 1965 concerning use of a new
IBM 360, donated by The Traveler's Group to Boston's Computer
Museum, and seen by this author on display in 1990.)
To this day we still do not truly understand the dynamics of
human-computer interactions. Nobody can explain, for example,
why home video game manufacturer Nintendo became the most
successful company in Japan in 1991, supplanting even auto giant
Toyota [7].
The problem with trying to come up with a viable "man in the
loop" theory is that the human is the part of the system we know
the least about. On the machine side, a whole discipline has
sprung up -- the theory of computability -- based on Turing's
work. Consequently the concept of an algorithm is now very well
understood, as are some of the ultimate limitations of
algorithmic solutions to problems. (Godel's famous
"Incompleteness Theorem" of 1931 [8], which proved that any
useful number system must always include some theoretically
undecidable propositions, can also be derived from the theory of
computability.) But on the human side we are almost totally in
the dark. The whimsical "Harvard Law" (dealing with lab animals)
states that "under precise conditions of temperature, pressure,
humidity and all other known variables, the organism will do as
it damn well pleases." This pretty much sums up the current
state of our knowledge of our own minds as well. And without a
reliable theory of human cognition we cannot quantify how "good"
our current human/machine interfaces are; especially we have no
way to measure how "efficient" they are, i.e., how much more room
for improvement remains. (By analogy: if you have gotten part
way to a goal but don't have a map showing you the remaining
terrain -- if all you know about is your starting position and
your current position -- then you clearly can't tell what percent
of the way to the goal you are. And to further complicate
things, in many situations, such as computer gaming, we are
hard-pressed to define exactly what the goal is to begin with.)
My personal guess is that there is a great deal of "room for
improvement" in the human-machine interface. Consider that most
types of computer input devices predate the computer itself, and
many date from the 19th century. The once-popular punch card
was developed in 1805 for the control of Jacquard looms. The
keyboard goes back to the manual typewriter, invented in 1829.
The joystick comes to us from the airplane, an 1890 innovation.
The buttons and paddles of today's video games are simply
electrical switches and potentiometers, understood since around
the 1820s. The audio speaker goes back to the 1876 telephone.
The cathode ray tube dates from 1855, and its use in television
is 1930s vintage. (In fact, so is the NTSC television broadcast
standard which we are still saddled with while government and
industry endlessly debate High-Definition TV). Even the high-
tech-sounding "asynchronous serial communications line" (as in
the still-popular RS-232 standard) and the near-universal ASCII
code (used for storing and sending alphanumeric text) both
originally come to us from an electro-mechanical device: the
teletypewriter. First invented in 1914, the old yellow-paper
"teletype" is still fondly remembered as the workhorse of
newspaper wire services, as well as for its more recent, somewhat
anachronistic use (well into the 1970s) as a cheap and reliable
hard-copy terminal for minicomputers. [9]
In fact, I can only think of a handful of human interface
peripherals which were designed expressly for computers: the
light pen (almost never used any more), the track ball and its
upside-down counterpart, the mouse, the touch-sensitive screen,
the magnetic or mechanical 3D space tracker (such as those used
in virtual reality "data gloves" and "eyephones"), and a rather
obscure knob-like 3D positioning device called "the spaceball."
(Even futuristic eye-trackers, direct brainwave interfaces, and
electric field muscle motion detectors came first from the analog
electronics world.) It is easy to imagine that there are many
revolutionary human-machine interface devices waiting to be
invented.
The few true computer peripherals we have now are still used most
often by the "computer operators" of today (now called simply
"the users"). Programmers in the main still stick to the so-
called QWERTY keyboard interface, named for the first six letters
on the top row, and designed in 1873 to deliberately slow down
typists who were jamming the then-fragile mechanisms. (Common
letter pairs are on opposite hands, vowels are towards the edges,
etc.) Most programming today is done through the bottleneck of
the antique QWERTY keyboard.
And what exactly is being typed? It is usually instructions in
some programming language. The evolution of these programming
languages has been painfully slow at times. Most of the barriers
have been conceptual. Amazingly, it took many years in the 1950s
before the (in retrospect) obvious innovation of assembly
language programming was widely adopted. This was just a move
away from using pure binary machine language, "raw bits" of ones
and zeros in base two, to the slight improvement of more readable
letter abbreviations for the machine instructions. Of course
this shift was predicated on using some kind of keyboard instead
of a bank of toggle switches to enter programs. *
The initial proposals for a "high level language" were met with
skepticism. Programmers asked: who would want to trust a compiler
to write the assembly language code for them? These skeptics
were often caught up in the mental challenge of optimizing
programs for the scarce resources of their machines (which by
today's standards were incredibly slow and had tiny memories).
Assembly language gave them what seemed like total power, tedious
though it was to exercise, and they were reluctant to give that
power away.
Finally, in 1957, the first FORTRAN (short for FORmula
TRANslator) compiler was unveiled at Westinghouse-Bettis
Laboratory. According to computer legend, on 20 April of that
year the first user-written FORTRAN program was given to the
compiler, and on the first try it returned an error message
complaining of a missing right parenthesis. The author of the
offending code asked the question: "If the compiler knew what was
wrong, why didn't it fix the problem?" [10] That profound
question remains unanswered to this day.
* Surprisingly, the original EDSAC was programmed in an assembly
language, even though other, later computers were not. But, as
Campbell-Kelly explains: "It was only when a machine sprang into
life that the business of programming was seriously considered at
all. Consequently, users of many early machines were obligated
to program in pure hexadecimal [base 16] or some variant. (It is
interesting that this mistake was repeated in some microprocessor
development systems in the 1970s.)" [6]
In those days, the accepted method for writing a program was
first to manually draw a flow chart, in other words a diagram of
the flow of control, and then to translate that diagram (again
manually) into lines of FORTRAN (or other language) code. Some
of you may remember when college bookstores sold flow chart
drawing templates in the computer science section. This method
was very useful for beginners, but advanced programmers tended to
skip the step of drawing the flow chart on paper; in effect, they
kept it in their heads. But if you were trying to make sense out
of someone else's code, for instance if you had to maintain or
modify it after they were gone, a copy of their original flow
chart was very useful. Therefore software managers began
requiring that their programmers submit hand-drawn flow charts
along with completed program listings. Many programmers
resisted, viewing the drafting of the charts as "busy work."
Then someone realized that, given a finished program, the
computer could draw the corresponding flow chart automatically,
and this seemed to satisfy everyone.
Though it solved short-term procedural and morale problems, the
development of automatic flow-charting obscured some basic
issues: it involved running the natural development sequence
backwards. What was really needed was a way for the human to
draw the flow chart in a machine-readable way, and then for the
computer to generate the corresponding program code
automatically. I would classify automatic flow-charting as a
near-miss in the evolution towards visual programming.
THE EMERGENCE OF VISUAL PROGRAMMING TOOLS
Since the 1950s there have certainly been innovations in computer
languages. Relocatable object code, virtual memory referencing,
interpreted languages, overlays, multitasking, run-time loading,
and object-oriented programming leap to mind. But all of this
has occurred firmly within the original QWERTY paradigm. The
intervening decades have brought vast improvements in the
technology for presenting data to the human, especially 2D and 3D
graphics displays. But as Ivan Sutherland pointed out in his
landmark 1968 paper, Facilitating the Man-Machine Interface [11]:
The man-machine interface, of course, has to be a two-way
interface. It turns out that if you look at the rate of
information flow, the rate of flow from the computer to the man
is very high, whereas the rate at which information goes from the
man to the machine is generally much slower...
This fact, largely a consequence of the ubiquity of the QWERTY
keyboard, has held back the art of programming. Even as the
microcomputer revolution spawned the computer networking
revolution, which enabled the "windows" revolution, programmers
have still created these wonders using the old paradigm. Despite
a lot of talk about visual programming throughout the 1980s, few
examples emerged. I remember two games around 1982 for the Apple
II personal computer, Pinball Construction Set from Electronic
Arts and Sundog from FTL Games, which had something like visual
programming components. In that same time frame a product was
being aggressively marketed for the IBM PC called The Last One
which purported to generate BASIC code from a visually-defined
spec. (It was supposed to be a software tool so phenomenal that
it would be "The Last One" you ever had to buy.) It turned out
to be almost total vaporware -- all the demos were faked, and the
company soon vanished. At Rockwell International's Space Division
in 1987 I saw an internal demo of a primitive point and click
menu-builder (developed on a VAX minicomputer connected to an
archaic Tektronix graphics terminal); alas, it was never
completed or distributed. I still remember that when Jaron
Lanier first founded his ill-fated company, he rather impulsively
gave it the name "VPL," for "Visual Programming Language," but
then failed to produce any such products. By the late eighties
Haeberli [12] had pioneered the concept of a data flow network,
and demonstrated a "proof of concept" in the program ConMan,
which had limited capabilities and availability.
The first commercially available visual programming system (to
this author's knowledge) was the Application Visualization System
(AVS) version 2, in 1990. Initially AVS version 1 was released in
1989, by Stellar Computer Inc., as a point and click 3D geometric
rendering program for its high-performance workstations [13]. It
lacked any visual programming capability. A team of mostly the
same developers then produced VEX, the "Volume
Exploratorium," with a true visual programming interface, and
they demonstrated it at the Chapel Hill Volume Visualization
Workshop in 1989 [14]. They then grafted VEX onto the original AVS to
produce a visually-programmable 3D rendering and volume
visualization system. This was released by Stellar as AVS2. In
the intervening three years there have been four additional
software releases, bringing us to the current version, AVS 5.01,
which shipped in December of 1993. Also during this time the AVS
program has been ported to every major brand of workstation,
becoming the most successful product Stellar ever produced, and
as of the beginning of 1992 a separate software company, Advanced
Visual Systems (AVS) Inc., was "spun off," with a charter to
continue to develop and evolve visual programming tools.
Since the original release of AVS2 several competitors have
appeared on the market, including Khoros from the University of
New Mexico, IRIS Explorer from Silicon Graphics Inc. (SGI), and
Data Explorer (DX) from IBM Corp. Principal architects of most of
these programs shared the stage at a SIGGRAPH92 panel on visual
programming. Some analysts have predicted that IBM's entry into
this business will "legitimize the market." Perhaps a more
credible "legitimization" occurred when the U.S. Environmental
Protection Agency (EPA), the United States Geological Survey
(USGS), and Sandia National Labs of the U.S. Department of Energy
(DOE), all adopted AVS as their standard scientific data analysis
and visualization tool.
USES OF AVS IN MEDICAL RESEARCH
But the focus of this conference, and this talk, is on medical
research. I'd like to take you on a quick tour of a few
institutions where AVS is being used to speed the development of
the software components of projects exploring various fronts on
the leading edge of medical research. (I am familiar with these
particular examples because of my involvement with the AVS
product for the last five years as a field technical expert. Of
course, none of this research is my own; this represents a survey
of what others are doing, based on what they've published, and
the things they've told me personally that I have been permitted
to share.) Most of these groups have two things in common: they
don't have a lot of people available (especially programmers),
and they're in a big hurry. (After all, human suffering continues
to be a powerful motivator.) All of them have embraced AVS for
its ability to empower the researchers to achieve greater
software development results with fewer resources.
REHEARSING RADIATION TREATMENTS AT THE UNIVERSITY OF MICHIGAN
Researchers at the University of Michigan at Ann Arbor have been
developing treatment planning systems for radiation oncology beam
therapy for over five years now. Their starting point was an
existing 3D treatment planning system, U-Mplan, which used CT and
MR data to visualize a tumor in its geometric relationship to
planned radiation beam positions; though it had some of the
functionality they wanted (including robust dosage calculation
algorithms), it lacked the kind of 3D displays they wanted, and
it was hardware-dependent and had a monolithic, inextensible
software architecture. They wanted to enhance the capabilities
of this system to include better interactive controls, including
manual outlining of tumor boundaries on 2D slices, and the
ability to manipulate a 3D view of the resulting treatment plan
-- from either an omniscient or a beam's-eye view. They also
needed to
combine displays of 2D grey-scale images and full 3D surface and
volume display methods, as well as to generate traditional line
graphs. Lastly, they wanted the resulting system to run on a
wide variety of hardware platforms, to avoid the hardware
obsolescence problems that have plagued medical software systems
in the past. By basing their development efforts on AVS, they
were able to achieve these goals, quickly prototype and test the
system, and concentrate their efforts mainly on the medical
research side, instead of working on extensive software
infrastructure. The resulting system offers an improved ability
to concentrate radiation mostly at the tumor site, while
generally sparing surrounding tissue, and also allows treatment
plans to be produced and verified much more quickly. [15]
FINDING THE SHAPE OF A DEADLY VIRUS AT THE WISTAR INSTITUTE
Researchers at the Wistar Institute in Philadelphia, in
collaboration with others at European Molecular Biology
Laboratory (EMBL) in Heidelberg, Germany, were seeking to
understand the structure of the adenovirus, which is a primary
cause of death in third world children. Most children who die of
"starvation" actually perish from the dehydration caused by
diarrhea from adenovirus infection. Due to its size and complexity,
the structure of the virus could not be determined from
crystallography techniques which had worked on simpler viruses.
Using a novel approach, the researchers took electron micrographs
of the viruses in sections, and then used AVS to combine them
into a 3D volume for visualization. This procedure yielded the
first pictures of the virus as a solid object. Knowing the
molecular structure of some of its proteins, the researchers have
begun to reconstruct its complete molecular structure. This
research may someday lead to a life-saving vaccine. The team
also believes the techniques they have developed may apply to
cancer and AIDS viruses. The use of AVS in the project allowed
them to concentrate on the problem, instead of the intermediate
software development requirements which might have overwhelmed
them if they'd used more traditional non-visual programming
methods. [16, 17]
HOLOGRAMS OF THE INSIDE OF THE BODY AT VOXEL CORP.
Voxel Corporation has developed the first true laser holographic
display of medical volume datasets (CT and MRI). These
"voxgrams" show the entire volume at once, instead of 2D slices
which show only a tiny portion of a volume, or 3D volume
renderings which may obscure features not in front. AVS was used
to quickly develop the interactive user interface needed in the
preparation of the data for output to the holographic camera.
[18]
PICTURING PROTEINS IN STEREO AT THE SCRIPPS RESEARCH INSTITUTE
Researchers at the Scripps Research Institute in La Jolla,
California (the largest private research institute in the U.S.)
have been studying a variety of large biological molecules at the
institute's Molecular Biology Laboratory, under the direction of
founder Arthur Olson. Part of their mission has been to expand
the use of computer graphics in molecular structure research.
[19] They have been using AVS in this mission since 1988, and it
has aided in understanding the immune system and the processes
of antibody docking, and in improving procedures for drug
design. [20]
One of their pet projects is the use of 3D stereo displays for
showing molecular structure. They have also pioneered the use of
a graphics technique called "texture mapping" for labeling
protein residues in molecules.
The rapidity with which they can customize AVS for their uses was
dramatized recently when they were asked by Scientific American
to produce a cover illustration of a radical interceptor
antioxidant molecule to accompany a survey article on the
chemical basis of aging, and they were able to respond in short
order. [21]
DAY-GLO CELLS IN A 3D MICROSCOPE, RECOGNIZING MONKEY BRAINS, AND
MAPPING THE CHAOS IN A HEARTBEAT AT THE UNIVERSITY OF CALIFORNIA,
LOS ANGELES
Many separate workgroups at UCLA are using AVS; some of them
since it was first released. Researchers there can buy it
through a specially-priced campus program, and today the UCLA
campus represents the largest installed base of AVS users
outside of U.S. government labs. Of the eighteen (or more)
software licenses on campus, at least three copies in three
different departments are devoted exclusively to medical
research.
A team of researchers led by Andy Jacobson is exploring confocal
microscopy with a patented technique using fluorescent dyes to
highlight features of cell metabolism. The results of this
research are not yet published, but follow in the footsteps of
some prior work cited in the notes. [22] Meanwhile,
Jacobson has been experimenting with the volume rendering of
confocal data, and sharing the resulting images with other
researchers at UCLA.
Kelby Chan of the Department of Radiology has been experimenting
with the feasibility of fully digital 2D radiographs for some
time using AVS. More recently, he has developed a new method for
semi-automatic tissue segmentation of volume data, using neural
net methods. [23] Segmentation is the process of identifying
which portion of a volume of data corresponds to what anatomical
features, and has proved to be a very difficult problem to
automate. This new method involves a clinician identifying a
point which is within a certain structure, and the software then
guessing which other points in a volume are part of the same
structure. Early tests of the method have been done on 3D MRI
scans of monkey brains, segmenting out the brain tissue from
skull, skin, and other soft tissues. AVS has permitted the
research to be focused on the development of new algorithms,
while more mundane software integration issues -- including
migrating to a new hardware platform part way through the
project -- were kept to a minimum.
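To give a flavor of what "pick a seed point and let the software
guess the rest" can mean, here is a generic region-growing
sketch in C. It is only an illustration of the general idea of
semi-automatic segmentation; the UCLA work cited above uses
neural net methods, which are not reproduced here, and the
volume size, threshold and seed point below are invented values.

  /*
   * Generic seed-based region growing on a small 3D volume -- a
   * deliberately simple stand-in for "pick one point, let the
   * software find connected, similar points."  Not the UCLA method.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  #define NX 64
  #define NY 64
  #define NZ 64
  #define IDX(x,y,z) ((z)*NY*NX + (y)*NX + (x))

  /* Grow a label outward from (sx,sy,sz), accepting 6-connected
     neighbors whose intensity is within 'tol' of the seed value. */
  static void grow(const float *vol, unsigned char *label,
                   int sx, int sy, int sz, float tol)
  {
      int *stack = malloc(sizeof(int) * NX * NY * NZ);
      int top = 0;
      float seed_val = vol[IDX(sx, sy, sz)];
      static const int d[6][3] = { {1,0,0},{-1,0,0},{0,1,0},
                                   {0,-1,0},{0,0,1},{0,0,-1} };

      stack[top++] = IDX(sx, sy, sz);
      label[IDX(sx, sy, sz)] = 1;

      while (top > 0) {
          int idx = stack[--top];
          int z = idx / (NY * NX), y = (idx / NX) % NY, x = idx % NX;
          for (int k = 0; k < 6; k++) {
              int nx = x + d[k][0], ny = y + d[k][1], nz = z + d[k][2];
              if (nx < 0 || nx >= NX || ny < 0 || ny >= NY ||
                  nz < 0 || nz >= NZ)
                  continue;
              int nidx = IDX(nx, ny, nz);
              if (!label[nidx] && fabsf(vol[nidx] - seed_val) <= tol) {
                  label[nidx] = 1;
                  stack[top++] = nidx;
              }
          }
      }
      free(stack);
  }

  int main(void)
  {
      float *vol = malloc(sizeof(float) * NX * NY * NZ);
      unsigned char *label = calloc(NX * NY * NZ, 1);
      long count = 0;

      /* Synthetic data: a bright blob on a dark background. */
      for (int z = 0; z < NZ; z++)
          for (int y = 0; y < NY; y++)
              for (int x = 0; x < NX; x++) {
                  int dx = x - 32, dy = y - 32, dz = z - 32;
                  vol[IDX(x,y,z)] =
                      (dx*dx + dy*dy + dz*dz < 15*15) ? 200.0f : 10.0f;
              }

      grow(vol, label, 32, 32, 32, 50.0f);  /* clinician's seed point */

      for (long i = 0; i < (long)NX * NY * NZ; i++) count += label[i];
      printf("voxels labeled as part of the structure: %ld\n", count);

      free(vol);
      free(label);
      return 0;
  }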
Dr. Alan Garfinkel studies applications of mathematical tools,
especially chaos theory, to biological systems. He first
acquired the AVS software for use in 3D volume rendering of CT
and MRI scans as well as confocal microscope data. But once he
saw the power of the tool, he began to use it for other projects
as well. He has been hampered by the fact that for the last
several years, every time he has gotten an undergraduate trained
in the programming and operation of his computer systems, that
undergraduate has been accepted into a medical school in another
city, and Dr. Garfinkel has had to start over with another
programmer. (This cycle has continued yearly as long as I have
known him.) He says that the comparative simplicity and
modularity of visual programming in AVS has made it possible for
him to bring each new programmer up to speed in time to make much
new forward progress before the next iteration occurs.
One of the projects he applied AVS to was a kinesiology study,
comparing the gaits (patterns of walking) in normal and impaired
persons. Using AVS to visualize joint angles in a phase-space
representation, he found that an impaired walker's gait has a
lower dimension (in a fractal sense). This adds evidence to the
emerging generalization that healthy biological systems are more
chaotic than sick ones. (Unfortunately, this research has not
yet been published.)
More recently, Dr. Garfinkel has collaborated with others in the
first demonstration of real-time control of cardiac tissue
fibrillation, using the mathematics of chaos theory. [24] AVS
was used to make phase portraits of the data from these
experiments, using techniques similar to those developed for
displaying the gait data. One of these phase portraits was
featured on the cover of Science News in association with a
survey article on controlling biological chaos. [25] The
research may eventually make possible the design of much more
effective and lower power-consuming cardiac pacemakers.
CAPTURING CHAOTIC TREMORS WITH A VR DATAGLOVE, SEEING THE SHAPES
OF BRAINWAVES IN SPACE-TIME, AND THE THERAPEUTIC USE OF DIGITAL
BUTT-PRINTS AT LOMA LINDA UNIVERSITY MEDICAL CENTER
With a grant to study therapies for patients with Parkinson's
disease, researchers at the Department of Neurology (since moved
to Rehabilitation Therapy) at Loma Linda University Medical
Center first obtained AVS for analyzing electroencephalograph
(EEG) data ("brainwaves") taken from these patients. Initially
they used AVS to develop and refine algorithms for measuring the
auto-correlation of these recorded signals, phase-plotting a data
stream against a time-lagged copy of itself to give a rough
approximation of the "fractal dimension" of the data. Confirming
the results of Chinese investigators [26], the Loma Linda team
found that data recorded from a Parkinson's patient during a
tremor has a lower fractal dimension than data taken while the
patient is in a more normal neurological state.
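The phase-plotting step itself is simple enough to sketch. The
toy C program below -- not the Loma Linda code, and with an
invented test signal and lag -- writes out the delay-embedded
points (x[t], x[t-lag]), ready to be scattered as a 2D phase
portrait:

  /*
   * Minimal sketch of phase-plotting a signal against a time-lagged
   * copy of itself: each output line is a point (x[t], x[t-lag]).
   */
  #include <stdio.h>
  #include <math.h>

  #define N    2000
  #define LAG  25      /* lag in samples; a tunable parameter */

  int main(void)
  {
      double x[N];

      /* Stand-in for a recorded EEG or tremor channel: a noisy
         superposition of two oscillations. */
      for (int t = 0; t < N; t++)
          x[t] = sin(0.07 * t) + 0.3 * sin(0.23 * t + 1.0);

      /* Emit the delay-embedded points for plotting. */
      for (int t = LAG; t < N; t++)
          printf("%f %f\n", x[t], x[t - LAG]);

      return 0;
  }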
Meanwhile, through aggressive networking and self-promotion,
these researchers convinced VPL Corp. to donate one of their
second-generation DataGlove devices to the lab. This novel input
peripheral, developed for use in Virtual Reality (VR)
applications, provides three dimensions of hand position (X, Y
and Z), and hand orientation (roll, pitch and yaw), as well as
the flexion and extension angles of the ten major finger joints,
all delivered to the computer as an incessant stream of numerical
data up to 60 times per second. Hooking the device up to their
Parkinson's patients, the team became the first to use the
DataGlove as a true medical instrument which records raw
physiometric data, instead of for its primary design function as
a purely volitional mechanism for interacting with a VR
application through deliberate pointing and gesturing.
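One plausible layout for a single DataGlove sample record is
sketched below in C. The field names and types are purely
illustrative guesses on my part, not VPL's actual data format:

  /*
   * Hypothetical layout for one glove sample as described above --
   * position, orientation, and ten joint angles -- arriving as a
   * stream at up to 60 samples per second.
   */
  #include <stdio.h>

  typedef struct {
      double x, y, z;              /* hand position */
      double roll, pitch, yaw;     /* hand orientation */
      double joint_angle[10];      /* flexion/extension of ten finger joints */
      double timestamp;            /* seconds since start of recording */
  } GloveSample;

  int main(void)
  {
      GloveSample s = {0};
      s.timestamp = 1.0 / 60.0;    /* one sample period at 60 Hz */
      printf("one sample occupies %zu bytes\n", sizeof s);
      return 0;
  }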
Once the hand motion data was acquired from a patient, it was
easy for the investigators to use their existing visual programs
in AVS to apply the same phase-lag self-similarity tests to the
new data. In addition, their staff mathematician easily
programmed new AVS modules to look for correlations between the
hand motion data and EEG data taken simultaneously, and to
compute the Lyapunov exponents of the data streams (a measure of
the degree of mathematical "chaos" in a system). Another more
prosaic application of this technology was simply to record the
hand motions for later playback, to aid clinicians in evaluating
the efficacy of treatments and rehabilitation efforts spread over
many months [27, 28].
In addition to the above work, some of which was part of their
original research plans, this team also was able to do some
exploratory programming, looking for new ways to visualize their
data. Because of the ease of modifying AVS applications, a
number of "visual experiments" were tried, one of which resulted
in a useful new method for displaying and interpreting EEG data.
An AVS module took the channels of data coming from the
electrodes on the scalp and interpolated them -- based on the
conventional electrode positions -- into a two-dimensional grid
of values. The time steps over which the data was collected
formed a third dimension, yielding a 3D volume of space-time data
from each EEG recording session. Several standard 3D
visualization techniques were applied to these data volumes,
including isosurfaces (surfaces of constant value), scattered
spherical "bubbles" of varying size, and 3D gradient vectors, all
brightly color-coded. The resulting displays made features such
as alpha-waves much more identifiable and detailed, and provided
new extensive visual cues as to how the waves spread over the
brain surface, in comparison to the traditional method of
plotting parallel 2D strip charts in black and white, which made
little use of the spatial information implicit in the data. (A
complete treatment of this innovation has not yet been published,
though the researchers have mentioned it in passing at
conferences where other research was presented. [29])
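The essence of this space-time construction can be sketched in a
few lines of C. The electrode positions, grid size, weighting
scheme and test signal below are illustrative stand-ins, not the
researchers' actual parameters:

  /*
   * Rough sketch (not the published pipeline) of turning multichannel
   * EEG into a space-time volume: at each time step the channel
   * values are spread onto a 2D grid by inverse-distance weighting
   * from the electrode positions, and the grids are stacked in time.
   */
  #include <stdio.h>
  #include <math.h>

  #define NCHAN   8      /* number of electrodes (illustrative) */
  #define GRID    32     /* 2D grid resolution */
  #define NSTEPS  128    /* time steps in the recording */

  /* Hypothetical electrode positions in grid coordinates. */
  static const double ex[NCHAN] = {  4, 10, 22, 28,  4, 10, 22, 28 };
  static const double ey[NCHAN] = {  6,  6,  6,  6, 26, 26, 26, 26 };

  static float volume[NSTEPS][GRID][GRID];  /* the space-time volume */

  int main(void)
  {
      for (int t = 0; t < NSTEPS; t++) {
          /* Stand-in for one time step of recorded EEG data. */
          double chan[NCHAN];
          for (int c = 0; c < NCHAN; c++)
              chan[c] = sin(0.1 * t + c);

          /* Inverse-distance-weighted interpolation onto the grid. */
          for (int y = 0; y < GRID; y++)
              for (int x = 0; x < GRID; x++) {
                  double num = 0.0, den = 0.0;
                  for (int c = 0; c < NCHAN; c++) {
                      double dx = x - ex[c], dy = y - ey[c];
                      double w = 1.0 / (dx * dx + dy * dy + 1e-6);
                      num += w * chan[c];
                      den += w;
                  }
                  volume[t][y][x] = (float)(num / den);
              }
      }

      printf("built a %d x %d x %d space-time volume\n",
             GRID, GRID, NSTEPS);
      return 0;
  }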
Another fortuitous avenue of research opened up when the team
obtained digital pressure sensor sheets, another novel computer
input device. With funding from a supplier of wheelchair
cushions, they embarked on a study which yielded the first
quantitative physiometric analysis of the physical pressures
which cause "chairsores" in wheelchair-bound patients. Time
sequences of data collected with the pressure sensors
(affectionately nicknamed "butt-print" data) were imported into
AVS for visualization as space-time volumes in the same way as
the EEG data, except that no interpolation was needed since the
data was already in a grid as it came from the sensor arrays.
Some of the previous work done to explore the EEG volumes was
applied to the new pressure sensor volumes with good results.
The isosurface technique proved especially useful, because
intervals of unchanging pressure (which can cause chairsores)
appeared dramatically as straight-sided shafts of color in the
space-time volumes [30, 31].
Over time, an increasing fraction of the work done at Loma Linda
has been opportunistic, as new technologies have become
available. The flexibility of AVS's visual programming interface
has allowed the researchers to adapt to these opportunities
quickly, without imposing a crushing software development burden
as they race to keep up with advances in the hardware of
physiometric data gathering.
REFERENCES
1. Warren S. McCulloch and Walter H. Pitts (Massachusetts
Institute of Technology) A Logical Calculus of the Ideas
Immanent in Nervous Activity Chapter 2 of Embodiments
of Mind MIT Press 1965
2. Michael A. Arbib (University of New South Wales) Brains,
Machines, and Mathematics McGraw Hill Book Company 1964
3. Alan M. Turing On Computable Numbers with an Application
to the Entscheidungsproblem Proceedings of the London
Mathematical Society, series 2 42 1936 pp. 230-265
(corrections in 43 1936-37 pp. 544-546)
4. The Editors of Scientific American Information Scientific
Amer. Press 1966
5. Maurice V. Wilkes Memoirs of a Computer Pioneer MIT Press
1985
6. Martin Campbell-Kelly The Airy Tape: An Early Chapter in the
History of Debugging IEEE Annals of the History of
Computing 14(4) 1992
7. David Sheff Game Over: How Nintendo Zapped an American
Industry, Captured Your Dollars, and Enslaved Your Children
Random House, New York 1993
8. Kurt Godel Uber formal unentscheidbare Satze der Principia
Mathematica und verwandter Systeme Monats. Math. Phys. 38
1931 pp. 173-198
9. Alexander Hellemans and Bryan Bunch The Timetables of
Science: A Chronology of the Most Important People and
Events In the History of Science Simon and Schuster 1988
10. Deborah Sojka and Philip H. Dorn Magical Moments In Software
Datamation ____ 1981 pp. 7-16
11. Ivan Sutherland (Harvard University) Facilitating the Man-
Machine Interface Chapter III Section 1 of PURPOSIVE SYSTEMS
Proceedings of the First Annual Symposium of the American
Society for Cybernetics von Foerster, White, Peterson,
Russell, eds. (University of Illinois) Spartan Books 1968
12. P. Haeberli ConMan: A Visual Programming Language for
Interactive Computer Graphics Computer Graphics 22(4)
August 1988
13. Craig Upson, Thomas Faulhaber, Jr., David Kamins, David
Laidlaw, David Schlegel, Jeffrey Vroom, Robert Gurwitz,
Andries van Dam (Stellar Computer Inc.) The Application
Visualization System: A Computational Environment for
Scientific Visualization IEEE Computer Graphics and
Applications July 1989
14. Larry Gelberg, David Kamins, Jeff Vroom (Stellar Computer
Inc.) VEX: A VOLUME EXPLORATORIUM An Integrated Toolkit
For Interactive Volume Visualization Chapel Hill Volume
Visualization Workshop 1989
15. Marc L. Kessler, Leo R. Catallo, Dan L. McShan (University
of Michigan, Ann Arbor) Design and Simulation of Conformal
Radiation Therapy Using AVS International AVS User Group
Conference (proceedings) 2(1) May 1993
16. R. M. Burnett, P. L. Stewart, S. D. Fuller (Wistar
Institute) and C. J. Mathias (Stellar Computer Inc.)
Image Reconstruction of Adenovirus: A relatively new
technique may be better suited than traditional methods to
study the structure of this complex virus Pixel: The
Magazine of Scientific Visualization 2(2) July/August 1991
17. Craig J. Mathias (Stellar Computer Inc.) [describing work of
Phoebe L. Stewart and Roger M. Burnett of the Wistar
Institute, Philadelphia, PA, and Stephen D. Fuller of the
European Molecular Biological Laboratory, Heidelberg,
Germany] Visualization Techniques Augment Research into
Structure of Adenovirus Scientific Computing and Automation
7(6) April 1991 pp. 51-56
18. M. Dalton, S. Hart, A. Wolfe, M.D. (VOXEL Corp.)
Holographic Display of Medical Image Data International
AVS User Group Conference (proceedings) 2(1) May 1993
19. Arthur J. Olson and David S. Goodsell (Scripps Research
Institute) Visualizing Biological Molecules: Computer-
generated images are aiding research in molecular structure
and helping to elucidate the complex chemistry of life
Scientific American 267(5) November 1992 pp. 76-81
20. Bruce S. Duncan, Michael Pique, Arthur J. Olson (Scripps
Research Institute) AVS For Molecular Modeling
International AVS User Group Conference (proceedings) 2(1)
May 1993
21. Gloria E. O. Borgstahl, Hans E. Parge, Michael J. Hickey,
Robert A. Hallewell, John A. Tainer (Scripps Research
Institute) Radical interceptor: this antioxidant may be one
of the body's defenses against aging (cover illustration)
Scientific American 267(6) December 1992 front cover and
p. 8 (for article on pp. 130-141)
22. Olaf H. Weiner, R. Albrecht, B. Muhlbauer, H. Schwarz, M.
Schleicher, A. A. Noegel (Max-Planck-Institut fur
Biochemistry) The Actin-binding protein comitin is a
ubiquitous Golgi component: Analysis of its distribution by
confocal microscopy Journal of Cell Science 1993
23. K. K. Chan, A. S. Hayrapetian, C. C. Lau, R. B. Lufkin
(University of California, Los Angeles, Department of
Radiology) Neural Network based Segmentation System
Proceedings of the Society of Photo-optical Instrumentation
Engineers (SPIE) 1898(1) 1993 pp. 604-608
24. Alan Garfinkel (University of California, Los Angeles),
Mark L. Spano, William D. Ditto, James N. Weiss
Controlling Cardiac Chaos Science 257(5074)
28 August 1992 pp. 1230-1235
25. Ivars Peterson and Carol Ezzell [survey of the work of a
number of groups] Crazy Rhythms: Confronting the complexity
of chaos in biological systems Science News 142(10)
pp. 156-159 5 September 1992
26. Xu Nan and Xu Jinghua (Academia Sinica, Shanghai, China)
The Fractal Dimension of EEG As a Physical Measure of
Conscious Human Brain Activities Bulletin of Mathematical
Biology 50(3) 1988 pp. 559-565
27. David Warner A Helping Hand: Computer Graphics Tools Lend
a Hand to Victims of Parkinson's Disease Computer Graphics
World October 1991
28. David Warner [describing the research of Dr. Doug Will,
Dave Warner, Stephen Price, Jeff Sale, Jodie Reed and Jan
Henderson at Loma Linda University Medical Center] Medical
Rehabilitation, Cyber-Style: Loma Linda's Advanced
Technology Center puts virtual reality spin-offs to work in
the clinic Virtual Reality Special Report July 1992
29. E. Jeffrey Sale, A. Douglas Will, M.D., Jeffrey M. Tosk,
Stephen H. Price (Loma Linda University Medical Center)
Modifications of a Model of Chaotic Dopamine Neurodynamics
SIAM Conference on Applications of Dynamical Systems
October 15-19, 1992 (no proceedings published!)
30. S. H. Price, D. J. Warner, E. J. Sale, A. D. Will, M.D.
(Loma Linda University Medical Center) Visualizing
Physiological Data -or- How To Teach Doctors New Tricks
With AVS International AVS User Group Conference
(proceedings) 2(1) May 1993
31. Janet L. Henderson, MD, OTR, Stephen H. Price, BS, Murray E.
Brandstater, MBBS, PhD, Benjamin R. Mandac, MD (Loma Linda
University Department of Physical Medicine and
Rehabilitation) Efficacy of Three Measures to Relieve
Pressure in Seated Persons With Spinal Cord Injury
Archives of Physical Medicine and Rehabilitation (preprint;
accepted for publication in early 1994)