inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #0 of 26: Ted Newcomb (tcn) Mon 15 Jul 13 08:13
    
We are delighted to have Jon Peddie with us to discuss his new book
The History of Visual Magic in Computers. Dr. Jon Peddie is one of the
pioneers of the graphics industry, and formed Jon Peddie Research (JPR)
to provide customer-intimate consulting and market forecasting
services. Peddie lectures at numerous conferences on topics pertaining
to graphics technology and the emerging trends in digital media
technology. Recently named one of the most influential analysts, he is
frequently quoted in trade and business publications, and he is also the
author of several books including Graphics User Interfaces and
Graphics Standards, High Resolution Graphics Display Systems,
Multimedia and Graphics Controllers, a contributor to Advances in
Modeling, Animation, and Rendering, a contributing editor of The
Handbook of Visual Display Technology, and a contributor to State of
the Art in Computer Graphics, Visualization and Visual Analytics. His
most recent book is The History of Visual Magic in Computers.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #1 of 26: Ted Newcomb (tcn) Tue 16 Jul 13 16:41
    
Our moderator, David Duberman, has been involved with creative
computing and multimedia as a user, journalist, and writer since the
early '80s. He has edited various publications including Antic, Video
Toaster User, and Morph's Outpost on the Digital Frontier. He currently
works documenting 3D software for Autodesk.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #2 of 26: Ted Newcomb (tcn) Tue 16 Jul 13 16:43
    
This should be great, guys; thanks for doing this. Before you both get
started, I would like to ask the "dummy" question and get it out of the
way before you get technical.

Jon, exactly how do 1's and 0's become colors? 
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #3 of 26: jon peddie (jonpeddie) Tue 16 Jul 13 18:22
    
All digital computers (which include smartphones, PCs, tablets, and
game consoles) use the binary system to calculate, communicate, read
sensors (including camera sensors), and display the results. The value
or weight of each bit (which is set at either a one or a zero)
increases by powers of two, so a series of bits such as 111001 would
represent the decimal value 57 (32+16+8+0+0+1). The screen (of your TV,
phone, PC, or tablet) has three primary colors: red, green, and blue
(RGB). Each one of those colors can be set at an intensity level or a
brightness. So red 11111111 would be very bright, full on red, and
00000000 red would be totally off, or black. 
The three primary colors can be mixed, added together, at various
levels of brightness and create all the colors in the rainbow. The
screens in our devices are LCDs (which stands for liquid-crystal
display) and they are like a valve that can be wide open, shut, or at a
zillion levels in between. The number of levels is their bit-size, and
LCDs have 8-bit primary color capabilities. That means there are 2 to
the 8th power levels of brightness for each primary color—or 256 levels of
brightness. And here’s where it gets interesting. When you combine
those 256 levels, you can generate 16 million combinations of colors
and brightness. 
So you have 8-bits, per primary, and three primaries. And if you set
each primary at max (all 8-bits on – 11111111) you generate white, and
all off would be black, and if the  red and green were off and blue on,
the screen would be blue, and you could vary the blue to get 256
shades of it. 
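To make that arithmetic concrete, here is a small illustrative Python
sketch (not from the book; the helper name pack_rgb is just for
illustration) that packs three 8-bit primaries into one 24-bit pixel
value, the way the text above describes:

    # Illustrative sketch: 8-bit R, G, B primaries packed into a 24-bit pixel.
    def pack_rgb(red, green, blue):
        """Combine three 0-255 intensities into one 24-bit RGB value."""
        assert all(0 <= c <= 255 for c in (red, green, blue))
        return (red << 16) | (green << 8) | blue

    print(0b111001)                    # 57 = 32 + 16 + 8 + 1 (binary weighting)
    white = pack_rgb(255, 255, 255)    # all three primaries full on
    black = pack_rgb(0, 0, 0)          # everything off
    blue  = pack_rgb(0, 0, 255)        # red and green off, blue full on
    print(f"{white:06x} {black:06x} {blue:06x}")   # ffffff 000000 0000ff
    print(2 ** 8)                      # 256 brightness levels per primary
    print(256 ** 3)                    # 16,777,216 colors (~16 million)
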
The next thing to know about is pixels. An HD TV has about 2
million (1920 x 1080) of them. Each pixel, so small you can’t easily
see an individual one, has an RGB value—24 bits, 8 for red, 8 for green,
and 8 for blue. So each pixel can be set to 16 million different
colors and intensities. That means there are 2 million times 16 million
possible color and intensity levels on your HDTV screen—that’s a
staggering 34.8 trillion possible combinations. Not all unique, but
some combinations of 16 million. 
So the three dimensions of your screen are the x-pixel, times the
y-pixels, times the intensity, or color depth.
But wait—it gets even more mind-boggling: our displays actually have four
dimensions—they change with time, in very short increments of time. The TV
screen changes (in the US) 60 times a second. So now you have nearly 35
trillion pixel combinations 60 times a second.
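And to put rough numbers on those dimensions, here is a short
illustrative Python sketch (again, just a sketch of the arithmetic
described above, not from the book):

    # Illustrative sketch: the rough HD-screen arithmetic described above.
    pixels = 1920 * 1080                 # about 2 million pixels
    colors_per_pixel = 256 ** 3          # about 16 million (24-bit RGB)
    combos = pixels * colors_per_pixel   # the product cited in the text
    print(f"{pixels:,} pixels")                      # 2,073,600
    print(f"{colors_per_pixel:,} colors per pixel")  # 16,777,216
    print(f"{combos:,} pixel/color combinations")    # 34,789,235,097,600 (~34.8 trillion)
    print(f"{combos * 60:,} per second at 60 Hz")
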
And all that astonishing activity is what makes the images that you
and I look at on our PCs, tablets, and TVs every day, and just take for
granted. I think it is amazing, simply amazing—it’s magic. 
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #4 of 26: Ted Newcomb (tcn) Wed 17 Jul 13 08:48
    
Wow, I don't remember which film director said it, but it was
something to the effect of "now that we have CGI we can do anything".
Easy to see why with your explanation above.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #5 of 26: David Duberman (duberman) Wed 17 Jul 13 09:34
    
Hi Jon,

Thanks for taking part in this conversation. I've been aware of your
work and valuable contributions to the field for quite some time, and
am quite impressed with your credentials.

I'm going to start by asking a bunch of random questions; feel free to
answer whichever tickle your fancy and ignore the rest.

What is your favorite thing about CGI, and conversely, what's your
least favorite aspect of the field? What do you feel most needs
changing, if anything?

Which are your favorite CGI applications, and why?

What will it take to overcome the uncanny valley, in terms of
software, hardware, and creative ability?

What are the ethical ramifications of being able to simulate reality
absolutely convincingly? What could go wrong? What could go right?

What's your take on stereoscopy? It seems as though the push by TV
manufacturers to popularize stereo TVs has failed miserably. Why did
that happen, and what will it take to make it a major trend? Is it
inevitable?

It's in the interest of those of us who are excited about the creative
potential of 3D to get the tools into more hands. What in your view
might be some more-effective ways to get more of today's youth into
producing with 3D?

For young people who are already interested in working in the CGI
field, what is your advice for getting started?

Are you familiar with the Leap Motion Controller, and if so, how do
you think something like that (highly accurate and responsive motion
tracker) can affect the future of 3d content creation? 

Thanks!
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #6 of 26: jon peddie (jonpeddie) Wed 17 Jul 13 13:57
    
Hello David
Thank you for the kind words, and the challenging questions. I’ll
answer them in-line and repeat the question for clarity.

What is your favorite thing about CGI, and conversely, what's your
least favorite aspect of the field? What do you feel most needs
changing, if anything?

JON – my favorite thing about CG is what I like to call Peddie’s first
law: In Computer Graphics too much is not enough.

What that means is that in CG we are trying to create a world that is
enchanting and believable. It's a double-edged sword. On one side we want
to create an absolutely perfect image that can't be distinguished from
a photograph, and we’ve done that using what is called physically
accurate ray-tracing. Today 99% of the images you see of an automobile
are done in the computer using ray-tracing, and they are, literally,
perfect. For biological entities, like a human face, we use multiple
cameras (and lights) and create a light-field map or image. These
calculations even take into consideration the properties of light as it
penetrates your skin, scatters in your subdermal layers, and re-emerges
in a colored diffuse manner. It’s too bad I can’t insert a picture
here; this is such a visual topic it's a bit constraining to have to do
it all in words. In any case the goal is to get the viewer to suspend
disbelief, to allow themselves to become immersed in the scene and the
story. 

The flip side of photorealism is animation. Here you don’t want
photorealism (think about the latest Pixar/Disney  or Dreamworks movie
you’ve seen). But the imagery has to be pleasing. It can’t be too
disproportional or disturbing. In animation you not only have to create
interesting characters, scenery, and story, you also have to create
physical rules—do things bounce, break, reflect, echo, etc.  

So the answer to the question (what is my favorite thing about CG) is
that we are never done; there is no end in sight. A small anecdote—Jim
Blinn, one of the fathers of CG, has “Blinn’s Law,” which states that as
technology advances, rendering time remains constant because rather
than using improvements in hardware to save time, artists tend to
employ it to render more complex graphics. So therefore, we’re never
going to be done, we’ll just get better and better images.

Which are your favorite CGI applications, and why?

JON – Games, which are entertaining examples of a simulation.
Simulations like flight simulators, animations and cartoons  (when done
correctly), and ray-traced images of houses and buildings, cars,
boats, and bridges. I also admire what are known as “paint”
programs—programs that allow an artist to create images just as if she
or he were using a pencil or brush or touch-up device.  Why? Because I
think what attracts people to CG is the art, the pretty pictures, and
that leads to a curiosity about how it was done, and then you’re hooked.


What will it take to overcome the uncanny valley, in terms of
software, hardware, and creative ability?

JON – More processing power.  But first we should explain what the
uncanny valley is. As you make an image of a person more realistic, you
leave the area of total falsification (such as a cartoon) and approach
photorealism; there is a point in between where the character is neither
cute and cartoonish nor believable. And when that point is reached the
character looks creepy, scary, or disgusting. There was a dancing baby
animation and the baby didn’t move realistically and his face was too
old for a baby’s body—it was totally creepy. So the way to avoid it is
to have a perfect model (the 3D wireframe construction under the skin
that gives the object/person their form) and the best lighting (which
includes shadows and secondary shadows) and physical proportions (we
like eyes a certain distance apart, and above the nose). 


What are the ethical ramifications of being able to simulate reality
absolutely convincingly? What could go wrong? What could go right?
What's your take on stereoscopy? It seems as though the push by TV
manufacturers to popularize stereo TVs has failed miserably. Why did
that happen, and what will it take to make it a major trend? Is it
inevitable?

JON – That’s a lot of questions; I’ll parse them. The downside of
simulating reality is that you can’t trust a photograph any more. When
you see a photograph of, say, a two-headed chicken, you can’t be sure it’s
real or a CG image (commonly called “Photoshopped” after Adobe's award-
winning Photoshop software). 

Stereo is a challenge and a technology I am really attracted to—if
only it worked. Here again it’s an issue of computer power. We know
what the issues are in Stereo (which I like to abbreviate as “S3D”).
The subject and technology has been studied for over a century. The
problem is, to do it right is very expensive. The costs begin on the
content-creation side. Avatar, the most successful S3D movie ever made,
actually had very light S3D in it. Cameron used S3D as one of his
tools, not the end result. The problem is too many, maybe most, content
creators can’t get the novelty idea out of their heads, and lack the
creativity to use S3D to enhance, instead of shock. That is the biggest
barrier we face—lack of creativity in the use of S3D. The second part
is the mechanics of manipulating the eyes so as to trick the brain into
seeing – AND FEELING depth. The mathematics of S3D is not a simple
geometric solution. Just as lighting is very complex when trying to
create a photorealistic image, depth is just as complex and has on top
of it all the lighting issues. The clues we get subconsciously when
viewing a scene are so subtle, and so insidious they are almost
impossible to replicate. For example, how is it possible for a person
with just one eye to perceive depth? And yet they do. I overly
enthusiastically predicted that by 2015 all smartphones would be S3D
capable. I may have to eat those words. I think S3D is the next thing
after color with regard to viewing a screen. That’s probably true, but
the path to that world is not clearly understood yet.


It's in the interest of those of us who are excited about the creative
potential of 3D to get the tools into more hands. What in your view
might be some more-effective ways to get more of today's youth into
producing with 3D?

JON – Ask them, or yourself, to explain how and why you see depth.
Look at something(s) that is no more than three feet away. Understand
its depth. Pretend you’re describing it to a blind person. What makes
the depth visible? The tool is you, not the computer, not the screen;
they are just mechanical aids. You have to understand it before you can
replicate it.


For young people who are already interested in working in the CGI
field, what is your advice for getting started?

JON – Computer graphics is multifaceted. You can enter the field via
engineering and design computers, and/or software. You can build
displays, simulators, and CAVEs. You can be an artist. All of those
skills and talents are used in CG, and more. There’s a home for almost
any creative person in CG, and there are opportunities for all. But you
can’t just do what everyone else has done or is doing. You have to
(try and) bring something new, fresh, and exciting to the field. 

Are you familiar with the Leap Motion Controller, and if so, how do
you think something like that (highly accurate and responsive motion
tracker) can affect the future of 3d content creation? 

JON – Imagine you were modeling with clay. Now imagine a program that
will create a trail of where you move your fingers and hands. Think of
software as clay, and you can move your hands to mold the image, the
shape you want, and easily erase or change it with no consequence other
than your lost time. Leap is a tool to connect your imagination with
your hands. (Maybe I should try and sell that to them as a marketing
slogan.)
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #7 of 26: Ted Newcomb (tcn) Thu 18 Jul 13 08:58
    
Where are we evolving with all this technology - WRT augmented
realities, virtual realities, and what is being termed fractal or hybrid
realities? 
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #8 of 26: Ted Newcomb (tcn) Thu 18 Jul 13 10:20
    
Short external link to this conversation is
http://tinyurl.com/mvol365

If you're not a member of the WELL, but you'd like to participate,
send comments or questions to inkwell@well.com and we'll post them
here.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #9 of 26: David Duberman (duberman) Fri 19 Jul 13 10:46
    
Great answers, Jon; thanks!

Couple more questions:

Can you take us back to the early days of CG and tell us how it all
got started?

And at the other end of the time spectrum, coming up next week is
SIGGRAPH, the biggest annual computer-graphics conference in the world.
I understand you'll be attending: What sorts of things are you most
looking forward to seeing at the show?
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #10 of 26: jon peddie (jonpeddie) Fri 19 Jul 13 11:07
    
The "realities" - one of the driving forces behind CG is to create
different realities. We do it in the movies and take you away from the
real world for two to three hours. We do it in games and you can spend
hundreds of hours exploring strange and magical worlds.
The environment we do it in has different names which are used to
describe the hardware. In a virtual reality environment you are
immersed and can only see, and hear, and sometimes feel the virtual
world. Your vision is blocked and forced into either a headset (like
the new Oculus) or a room like a CAVE. Virtual reality is interactive,
you can move things, poke at them, get a reaction from them. A good
flight simulator puts you in a cockpit mockup and you literally fly the
airplane—and if you don’t do it right you crash. The magic is that if you
have suspended disbelief (which is the definition of “a good
simulator”), when you crash that airplane you will have a major
emotional experience. I’ve had those experiences and they are
terrifying at the time and exhilarating afterwards. Going to the movies
is a form of virtual reality: you are immersed, you suspend disbelief
and get emotionally involved with the story, but you do not have any
interactivity; you are a voyeur. However, you may scream and root for
the hero, or weep for the victim.
In between the passive VR of the movies and the totally
immersive VR of a CAVE or encompassing glasses, you have augmented
reality. This is where you wear a set of glasses which allow you to
see, touch, feel, smell, and hear the real world, and there is a
computer-generated image or text superimposed on the glasses—an
overlay. I like to say “AR lets you see things that aren’t there.” For
example, wearing AR glasses you can look at a building and see the
rooms inside it, where the doors are, and the equipment. You can look
at an engine and see the parts inside, or at a person and see the parts
inside. You can also have real-time translation, allowing you to look
at a sign in a foreign country and see on your glasses the sign
translated into your native language. 
Computer graphics is all about altering your reality. 
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #11 of 26: jon peddie (jonpeddie) Fri 19 Jul 13 11:37
    
Can you take us back to the early days of CG and tell us how it all
got started?
And at the other end of the time spectrum, coming up next week is
SIGGRAPH, the biggest annual computer-graphics conference in the world.
I understand you'll be attending: What sorts of things are you most
looking forward to seeing at the show?
=======================
JON – Here again I wish I could use pictures. The book BTW is loaded
with pictures, as a book on CG should be… 
I trace the development of CG to the 1950s and early 1960s, and say
that we got started in CG thanks to the Soviets scaring the hell out of
the US government. 
War and the military’s need for advanced weapons and protection has
been one of the major sources of new technological development
throughout history, and the cold war was no exception. Had it not been
for the cold war the development of computer graphics might not be
where it is today.
The US and its allies were wary of the Soviets since the end of the
European war in 1945. Then when the Soviet Union detonated a nuclear
bomb they called “First Lightning” in August 1949 it heightened the
fear (some say panic) in the US and its allies.
The Soviet Union launched its first intercontinental bomber in 1949.
The Tu-85 was a scaled-up version of the Tu-4, itself a copy of the US B-29
Superfortress used over Japan. Even though it was deemed to be
inadequate against the new generation of American all-weather
interceptors, it frightened the US military and administration. The
Soviets would go on to build bigger, longer-range strategic bombers,
such as the Tu-95, and the US deployed the long-range (and long-lived)
B-52.
The thinking of the time was that the Soviets would fly due north over the
North Pole and down across Canada to bomb the US – that would have been
the shortest, most direct route, requiring minimal inflight refueling
and fighter escort. Therefore, the US military reasoned, if the US
deployed an early warning system at the Arctic Circle they could detect
the Soviet bombers in time to intercept them and shoot them down.
Furthermore, although bombers flying at very low altitudes could escape
normal RADAR detection, they could not escape RADAR at the Arctic
Circle that was looking at the edge of the earth.
A few years before the outbreak of the cold war, MIT had developed the
Whirlwind computer, and it was the first digital computer capable of
displaying real-time text and graphics on a video terminal, which at
this time was a 5-inch oscilloscope screen.
A military-grade version of the MIT Whirlwind computer, called the
AN/FSQ-7, was commissioned and developed for the US Air Force
SAGE project. To manage that project, MIT’s Lincoln Laboratory created
the spin-out MITRE Corporation in 1958. 
The AN/FSQ-7 was a huge machine, probably the largest computer ever
built, and it is likely to hold that record for a very long time. It
used 55,000 vacuum tubes, about ½ acre (2,000 m²) of floor space,
weighed 275 tons,  and used up to three megawatts of power.
SAGE became operational in 1958 and by the early 1960s it had 22 advanced
command and control nodes installed. However, the question being asked
at the time was whether the US could build its early warning air defense
system faster than the USSR could build its long-range bomber fleet – the
fear was the USSR would build so many bombers they would simply
overwhelm US air defenses - a theme that is still discussed today about
intercontinental missiles.
The SAGE center computers collected the tracking data and sent it to
the Situation Display (SD). The SD was a large (19-inch) round
cathode-ray tube that used randomly positioned beams (now called
“vector graphics”) to draw icons and images on the screen. SD console
operators at the center could select any of the “targets” on the
display with a light gun, and then display additional information about
the target. Each center supported up to 150 operators. The SD operator
console was even equipped with an integral cigarette lighter and
ashtray so the operators wouldn’t have to leave their station to take a
smoke break, which seems quaint in today’s no-smoking world.
A similar system was built at MIT and used for research. And it was
there Ivan Sutherland developed his famous Sketchpad project in 1963,
but over a decade of work in computer graphics had gone on before
Sutherland ever got to the MIT campus, thanks to the Soviets scaring
us.
============================
SIGGRAPH is the place of wonder for us CG geeks; it’s Oz. I suggest
you go to the web site http://www.youtube.com/user/ACMSIGGRAPH and then
look at the 2013 videos (and everything else). You really have to SEE
this stuff.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #12 of 26: jon peddie (jonpeddie) Fri 19 Jul 13 11:38
    
One last comment on SIGGRAPH - nothing you see at the web site
http://www.youtube.com/user/ACMSIGGRAPH is real. NONE OF IT IS REAL!!!
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #13 of 26: Rob Myers (robmyers) Fri 19 Jul 13 14:23
    
Do you remember the first time you saw a computer image and thought it
was real?

Do you remember the first time you sat in a physical space and it
set off your "this is badly modelled|textured" sense?
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #14 of 26: Ed Ward (captward) Sat 20 Jul 13 12:21
    
Incidentally, Jon, you actually can use pictures here, as long as they
exist on a website. One merely goes <img src="URL">, where URL is the
URL of the graphic. 

This ancient system *can* be made to perform new tricks!
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #15 of 26: Craig Maudlin (clm) Mon 22 Jul 13 10:45
    
I'm enjoying this book -- following the various paths of development
from their earliest beginnings all the way to the present day gives
a strong sense of the trend lines that may be dominating the near
(and possibly more distant) future.

This makes me want to ask about what you see happening in the market
now. You have a great figure on page 233 (Fig. 6.18) that depicts
the evolution of graphics controllers along with the expansion and
then contraction of the number of suppliers.

Do you see trends active today that might give rise to another
approach to graphics controllers that could replace 'GPU Unified'?
Or do you see controller technology evolution as 'settling-down'
with current architectures?

What about the impact of the mobile market? Are the needs there
sufficiently different that they might take us in a new direction?
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #16 of 26: Stoney Tangawizi (evan) Mon 22 Jul 13 14:23
    

Fascinating discussion, Dr. Jon Peddie.  I seem to recall that the
first time the Air Force turned on that system, they detected a
massive Soviet attack - until someone realized they were scanning the
Moon rising.  True, or urban legend?
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #17 of 26: Ted Newcomb (tcn) Tue 23 Jul 13 10:46
    
Jon and David are both at Siggraph the next few days and will catch up
with your questions and give us an idea of what's in the future as
soon as they get back :)
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #18 of 26: a plaid pajama ninja (cynsa) Sat 27 Jul 13 12:58
    
God, that's wonderful if true (the Moon rising).
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #19 of 26: jon peddie (jonpeddie) Tue 30 Jul 13 14:00
    
Craig Maudlin - sorry to be so late answering; I was at Siggraph, and
have just returned to reality....

Mobile devices are definitely one of the futures. I gave a couple of
talks at Siggraph and made the point that this year, with the
introduction of (real) Win8 on tablets and OpenGL ES 3.0, we are at the
tipping point where tablets will move from content consumption devices
to content creation devices. They won't replace real computers, but
will be very useful for story-boarding and brainstorming, and even real
work.

Simultaneously we are moving to multi-megapixel displays - i.e., 4K.
You can get a 50-inch 4K display now for $1,200, and a really good IGZO
32-inch 4K monitor for $3,600. These prices will come down as they
always do; but the important good news is we will be seeing more, and
more better'er, images.

We discussed the "unified" vs. specialized GPU/co-processor concpets
at a panel I conducted at Siggraph and the general conclusion was
unified GPU will be with us for the foreseeable future. However,
specialized functions, like the Image Signal Procesor (ISP) that
handles the output of the mobile devices' image sensors will logically
be a specific and standalone processor for a while - especially when
you have mega-pixel monsters like Sony's new 41MPiux 1020 (my next
phone). Also on the back end, specialized processor/accelerators for
things like raytracing will be employed.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #20 of 26: jon peddie (jonpeddie) Tue 30 Jul 13 14:03
    
Stoney Tangawizi - same lame apology for being late - overwhelmed at
Siggraph. 

I've heard that story about the moon, and think it's folklore. The
travel time to/from the moon is long, too long to be a viable threat,
and the signal would be incredibly weak due to beam spread. Also, the
moon doesn't rise over the North Pole. But they did have several scary
incidents. One that has been confirmed was a flock of Canadian geese.
It's not known if any were shot down.  :-) 
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #21 of 26: Craig Maudlin (clm) Tue 30 Jul 13 16:55
    
Thanks for the update, Jon.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #22 of 26: Ted Newcomb (tcn) Wed 31 Jul 13 09:44
    
What's your biggest frustration with where the tech is at the moment?
Is development necessarily a straight line or could there be surprises
ahead?
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #23 of 26: jon peddie (jonpeddie) Wed 31 Jul 13 11:09
    
Ted, and all - here comes a bit of a rant - fair warning.

The software developers lag the HW developers by two to three years.
They are lazy about taking advantage of the new HW acceleration
features in CPUs and GPUs, and use as an excuse that they want to reach
the lowest common denominator so they can get maximum ROI for their
development costs. The net result is the power user gets penalized and
doesn't get his or her money's worth from the HW they bought. In the
gaming industry the ISVs used to have what is called smooth degradation -
the game would sense the available HW and adjust the game's features
and parameters accordingly. The higher quality games (AAA) still do
that. One size doesn't fit all. And yes, it is a zoo out there. If
that's too much of a burden for an ISV, then they should get into another
line of work - it's the cost of doing business. HW updates at least
once a year, sometimes more often. Software updates about every three or
four years. In between they send out updates that are bug fixes. The
software developers are holding back the growth of the market, and when
you confront them with that they whine about piracy. 

My second rant is the monitor suppliers. We are just now starting to
get high-res monitors. HD IS NOT hi-res. HD is 15 years old. The add-in
board (AIB) vendors have for three or more years been selling AIBs that
can each drive multiple monitors at 2560 x 1600.
Tablets have "retina" displays and sell for $600. A retina-like PC
monitor is $1,200. The cost difference is negligible. The
monitor suppliers complain there isn't enough demand. The users
complain the prices are too high. Hmmmm.

EOR    - end of rant.   
  
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #24 of 26: John Payne (satyr) Thu 1 Aug 13 18:31
    
I'd like to relate a couple of anecdotes.

Circa 1986 or '87, Tom Mandel (<mandel> here), who was host of the
Whole Earth discussion group on CompuServe, asked why computer
monitors were so much worse than televisions.

The answer, of course, was that it wasn't the monitors, but the
signals, and that computers were at a disadvantage as compared
with televisions in that they had to generate those signals
rather than simply reconstructing them from a recording.

At roughly the same time I attended a presentation by a Colorado
State University professor (who'd come there from Utah), during
which he showed a ray-traced animation, which had been presented
at SIGGRAPH.  Each frame of that animation required minutes or
hours of CPU time.  I remember commenting at the time that it
would be a while before we were doing that sort of thing in real
time.
  
inkwell.vue.469 : Jon Peddie - The History of Visual Magic in Computers
permalink #25 of 26: John Payne (satyr) Thu 1 Aug 13 18:44
    
> rather than simply reconstructing them from a recording

From a transmission, of course.  VCRs reconstruct video from recordings.
  
