Jim Blinn, a Pioneer on the Evolution of Computer Graphics
CG legend Jim Blinn joins the Building the Open Metaverse podcast. He recounts his pioneering work in computer graphics, from bump mapping to the Voyager flybys, and reflects on the past and future of CG, VR, AI, and research.
Today on Building the Open Metaverse.
The heavens opened up, and my purpose in life became clear. I must remake Charles Kohlhase's movie in color using shaded images and also include a model of the spacecraft.
That movie was shown on the evening news all across the country. Just by luck, being in the right place, I was able to introduce the world at large to computer graphics.
Welcome to Building the Open Metaverse, where technology experts discuss how the community is building the open metaverse together, hosted by Patrick Cozzi and Marc Petit.
Hello everybody, and welcome metaverse builders, dreamers, and pioneers. I'm Marc Petit, and this is my co-host, Patrick Cozzi.
Hey Marc. It's an honor to be here today and every day.
You are listening to Building the Open Metaverse season five, and this podcast, as you know, is your portal into open virtual worlds and spatial computing, bringing you the people and the projects that are on the front lines of building the immersive internet of the future, the open and interoperable metaverse for all.
And today, we have a special guest joining us on that mission.
As you know on this podcast, we like to look back to better understand how we can look forward, and back in the '70s, Jim Blinn devised new methods to represent how objects and light interact in three-dimensional virtual worlds, like environment mapping and bump mapping. He is a true pioneer in computer graphics.
Please welcome Jim Blinn to the show.
Welcome, Jim. It's really a special episode for us because so many of the things we know, we actually learned from your work, and I think it's worth taking the time to hear about your journey. So, please walk us through your journey and your motivations.
I grew up in a very small town in the middle of Michigan. Basically, there are three things that motivated me throughout my career.
One is science, specifically astronomy. Another is animation. The third is teaching. In the small town I was in, I was able to learn science and astronomy from books in the library, but also from some TV programs, like the Bell Telephone science series that was broadcast around that time, and educational videos on various things. As for animation, of course I watched a lot of it on Saturday mornings. A big fan of Disney and also Walter Lantz.
They both had programs that actually described how the animation process worked, and I learned how it worked from watching them. Finally, there was teaching. For example, the Man in Space program that Disney produced showed how we were going to explore the solar system. I don't remember exactly what they were teaching at the time, but I was fascinated by the idea that you could use animation to teach this stuff. The whole idea of using animation to teach was germinated by watching these programs. The Bell Telephone series also had a bunch of science programs with animation in them.
A friend of mine and I actually started doing animation using paper cutouts and an eight millimeter movie camera when I was in high school, and that sort of worked. It turned out that we actually were doing educational animation at the time.
A lot of animations, if you look at them, are engineering exercises where the protagonist is trying to solve some problem. He tries one thing, and it doesn't work; he tries something else, and it doesn't work. Wile E. Coyote is trying to catch the Road Runner, or Donald Duck is trying to defeat the chipmunks, or something like that. We got the idea from that.
Ultimately, I went to college at the University of Michigan, Ann Arbor, and I like to say that I got into computers because I took French in high school: I placed out of the foreign language requirement they had at the university, so in the second term of my freshman year I had an empty spot in my schedule. Flipping through the course catalog, I came to something called computer science. I wonder what that is? Computer programming. I'll sign up for that and see what it is. And I was immediately hooked.
I finished that up freshman year, and in the first and second terms of sophomore year I took the advanced programming courses and got into 360 assembly language programming. That fall, there was an announcement that a research project on campus, in the engineering school, was looking for a programmer who could do 360 assembly language programming.
I applied to that, and they hired me, and they were doing a project of analyzing circuit diagrams. My job at the time was basically to take the descriptions of what the program should do and translate that into assembly language, which I did. But the neat thing that they had was they had a PDP-9/339 with a computer graphics display on it. That wasn't immediately my job, but I kept peering at that and poking at the guy who was in charge of that. His name was Jim Jackson.
Finally, by the end of that summer, I got to be his assistant in writing an interactive program that would draw the circuitry on the screen, ship the data off to the main computer to analyze, and then display the results. That got me into computer graphics, and Jim Jackson and I spent a lot of time experimenting with other sorts of graphics things.
We did some early drawings of 3D rotations and so forth in space. In fact, I did a lot of computer simulations of simple physics, like intermolecular forces between atoms: circles on the screen bouncing around as a 2D atom simulator, or a program that simulated proton spin in a magnetic field with a 3D vector rotating around.
But the neat thing about that was, by the time I graduated in 1970, all the graduate students had finished their PhDs and left, and the PDP-9 still stayed around. That basically became my personal toy for the next four years. I had a personal computer that could do computer graphics, and so I worked that thing to death, doing as many interesting things as I could.
I also did some teaching of computer graphics courses at Michigan. But ultimately, I found out about the University of Utah project. One of my office mates had a document published by Utah showing 3D shaded images, which I thought was so cool. Ultimately I applied to Utah for graduate school and got in. And so, in 1974, I started at Utah.
At Utah, when I got there, Jim Kajiya was just finishing building the first E&S frame buffer. They also had a Picture System that did 3D line drawings in hardware; I had done that sort of thing in software, and this was hardware, much faster. I was able to get that going, as kind of the first user of the system.
What was interesting about that setup was that you got immediate feedback on what the shaded images were going to look like. The people at Utah had been doing 3D shaded images for some time, but they could only see the results by time exposure onto film: they had to get the film developed to get a picture of what a single frame looked like. They were able to do some movies the same way, with repeated exposures on film. But with the frame buffer, you could see the results of your image right away, and you could debug it right away.
It's sort of like the next step in interactivity. With batch-mode processing and punch cards, you had to wait overnight to see the results of a program. Then we had interactive terminals, and you could see the results in a few minutes. Now the same thing happened with pictures: you could see them right away. That made it so that I could build on a lot of the work that had been done there before.
Catmull, Warnock, and Martin Newell, who became my thesis advisor, had done various things using the film-based setup. I started out implementing Catmull's rendering algorithm and Bui Tuong Phong's lighting reflection model, which I actually didn't quite understand, so I designed my own. It worked slightly differently, but it still basically put highlights on the objects. Then I started to say, "Well, what else can I do with this? How can I improve on this picture?"
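The modified highlight model he mentions is now widely known as the Blinn-Phong model: instead of comparing the mirror-reflection vector against the view direction, it compares the surface normal against the "half-vector" between the light and view directions. A minimal sketch in Python (the function names and unit-vector convention are my own illustration, not from the episode):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_specular(normal, light_dir, view_dir, shininess):
    """Highlight intensity from the half-vector between light and view.

    All directions are unit vectors pointing away from the surface point;
    `shininess` controls how tight the highlight is.
    """
    # Half-vector: the direction a mirror would need to face to bounce
    # the light straight at the viewer.
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    n_dot_h = max(0.0, sum(n * h for n, h in zip(normal, half)))
    return n_dot_h ** shininess
```

When the normal lines up exactly with the half-vector the highlight is at full strength; it falls off as the two diverge, faster for larger `shininess`.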
Texture mapping was the thing that, again, Catmull did. I got that going, and I had written a very simple paint program so we could paint the textures and apply them to the teapots. Martin Newell had designed this teapot, so I used that as my sample test object, put textures on it, and so forth.
I wanted to again do math and physics simulations. I was going to draw a picture of a water molecule, but I wanted to make it a fuzzy, blob-shaped thing rather than a hard surface with shininess on it. I was looking at it and saying, "How can I make this thing look fuzzy and wrinkly?" And I realized that the wrinkles on something like that come not so much from the displacement of the object, but from the fact that the surface normal changes slightly from place to place. I was able to work out an algorithm to do that, which I called normal vector perturbation. A new form of texture mapping.
Now, the trick with that was to make sure that it worked properly, because it was a cheat. It would modify the normal vectors slightly from place to place so that you could show small-scale wrinkles. In order to verify that this was actually good, you had to see if it worked when the objects moved, to see whether it animates, as they say.
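The cheat he describes can be sketched in a few lines: leave the geometry alone and tilt the shading normal by the slope of a bump function. This is a simplified illustration (it assumes a normal expressed in a local tangent frame, and the helper names are mine, not Blinn's original formulation):

```python
import math

def bump_perturbed_normal(height, u, v, normal, scale=1.0, eps=1e-3):
    """Tilt `normal` by the finite-difference slope of a bump function.

    `height` maps surface coordinates (u, v) to a small displacement.
    Only its slope is used; the surface itself is never moved, which is
    what makes bump mapping so much cheaper than true displacement.
    """
    # Finite-difference slopes of the bump function at (u, v).
    du = (height(u + eps, v) - height(u - eps, v)) / (2 * eps)
    dv = (height(u, v + eps) - height(u, v - eps)) / (2 * eps)
    nx, ny, nz = normal
    # Tilt the normal against the slope, then renormalize.
    px, py, pz = nx - scale * du, ny - scale * dv, nz
    n = math.sqrt(px * px + py * py + pz * pz)
    return (px / n, py / n, pz / n)
```

A flat `height` function leaves the normal untouched; any slope tips it, and that tipping alone is enough to make the lighting read as wrinkles.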
Now again, doing that on film to record an animation would've been a nuisance. But the frame buffer had a mode in which you could do very simple animations: there was a little microprocessor scanning out the memory and synthesizing a video signal, and by changing numbers in the microprocessor, you could have it zoom in on part of the image, showing just a quarter or an eighth of the screen, and then move that around.
You could have what amounted to a 16-frame flip book at lower resolution. I did that and made a little 16-frame animation of the bump mapping and tried it out, and it didn't look right. So, I played around with it a little, went back to the math, and realized, "Oh yes, I made an odd number of sign mistakes in my calculations." I fixed that, put it in there, and it looked really cool. A 16-frame, low-resolution movie. Then that became bump mapping.
Did you have the feeling that it was groundbreaking, or was it just cool? When did you understand how important those technologies were?
Well, what was interesting at the time was that there were not a lot of people doing this. It was like a private joke among everybody, in a sense. We all knew we were doing fun things, but, and I'll get to this later, the equipment was expensive. Not a lot of it was around. I never really thought of it as something that a lot of people would be seeing at the time. Mostly I was just working on my own projects and visualizing what I could, milking the thing as best I could.
Another thing was that Martin, my thesis advisor, came in one morning and said, "Hey, I had this idea. What if, instead of following the light from the light source to the object, you ran it backward, bounced it off the object, and then you could see where in space the object was reflecting from?" I said, "Ah, great idea." Since I had the setup already, I immediately implemented that. I had to figure out the math of it, but I got it going, and that became reflection mapping, which ultimately Turner Whitted saw the results of, and he was inspired to go off and invent ray tracing.
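Newell's backward-bounce idea reduces to one reflection formula plus a lookup: mirror the view direction about the normal and ask the environment what lies in that direction. A rough sketch (the `env` callable stands in for a real latitude/longitude or cube-map image, which the episode doesn't specify):

```python
def reflect(view_dir, normal):
    """Mirror an incoming direction about a unit surface normal."""
    d = sum(v * n for v, n in zip(view_dir, normal))
    return tuple(v - 2 * d * n for v, n in zip(view_dir, normal))

def environment_lookup(env, view_dir, normal):
    """Color 'seen' in the mirror direction, fetched from an environment map.

    `env` is any function mapping a direction to a color; a real renderer
    would index a panoramic or cube-map texture instead.
    """
    return env(reflect(view_dir, normal))
```

The point of the trick is that the surface never needs to know what surrounds it; the environment map answers "what would a mirror here see?" in a single lookup, no ray tracing required.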
One of my great accomplishments was inspiring other people to take what I did and carry it further. Again, while I was at Utah, I was reading an article in TV Guide about how Carl Sagan was thinking of doing a TV series on astronomy. I remember thinking at the time, "Boy, it'd be really cool to get involved with that somehow, but I have no idea how that could happen." But anyway, back to Utah and making images.
Turns out that Martin decided to leave Utah and go to work at Xerox PARC while I was finishing up my thesis, so I had to finish it really quickly, quicker than I had expected. I spent one 72-hour stretch madly typing the contents of my thesis into the computer to print it out and get it all done. In the thesis I had texture mapping, bump mapping, and reflection mapping, and I had also worked out an algorithm for drawing curved surfaces from scratch, which was somewhat different from how it's done nowadays, but it was an interesting academic exercise.
Another thing I did started from my modification to Phong's model: I thought maybe some people in physics had actually done studies on how light reflects, and maybe I could get some ideas from them. So, I spent several weeks in the library going from one journal to the next. This was research before Google: picking something off the shelf and looking at it. Nothing interesting there.
I stumbled across something called the Journal of the Illuminating Engineering Society. "Oh, that sounds promising." I pulled that off the shelf and paged through it, and I finally found, in the 1910 edition, that somebody had actually done physics experiments where they measured light reflection, and they had all these diagrams. I said, "Wow, this is experimental data. Cool. Maybe I could just digitize this."
But I kept looking around, and finally I stumbled across a paper from somebody at the University of Minnesota, in Minneapolis: "Theory for Off-Specular Reflection from Roughened Surfaces." I thought, "Ah." Took a look, and it turned out they had used a very similar model to mine for specular light reflection, but had improved on it and done other calculations to make it work in more situations. I spent some time simplifying the math so it would fit on a computer. That became another chapter of my thesis, and it turned out later that the people at Cornell, Rob Cook among them, were also building on what I had done. Having finished my thesis really fast, I went, "What do I have to do next?"
Well, I'd actually always been interested in JPL, but I wasn't really sure what interest they would have in me because my degree wasn't in astronomy; it was in computer graphics. So, I called up Ivan Sutherland, who had just started as a department chair at Caltech. I had been aware of him for a long time. In fact, he had been at Utah; just before I got there, he had left to go to California.
I called Caltech when I finished at Utah and said, "Is there a place for me in your department?" And he said, "Well, sure." But I said, "To be honest with you, I'm really interested in JPL." JPL was a part of Caltech. "And so, probably what I'll do is come to the department, but go and hang out at JPL to see if there's a project I can get myself involved with somehow." And he said, "Well, interesting thing about that. I've been talking to this guy at JPL, who has just purchased an exact copy of the hardware that you have at Utah, and he's looking for somebody to do something interesting with it." "Oh, well."
So, I went there. I became a postdoc and helped Ivan teach his graphics class, but I also hung out at JPL with Bob Holzman, who was the guy who had gotten the equipment. This was just about the time the Voyager spacecraft was launched. I found out that Charlie Kohlhase, the head of mission planning, had made a simple line-drawing animation of what the spacecraft was expected to see as it went by Jupiter and Saturn, a black-and-white line drawing.
Here I had just arrived with the same equipment I had at Utah and all this experience doing texture mapping. The heavens opened up, and my purpose in life became clear: I must remake Charles Kohlhase's movie in color using shaded images and also include a model of the spacecraft. That became the first JPL flyby movie, which was not actually officially requested by anybody at JPL, but we made it, and once it was there, they decided to include it in the press packet for the encounter.
That movie was shown on the evening news all across the country, which was more exposure than most computer graphics had gotten at the time. Just by luck, being in the right place where this happened, I was able to introduce the world at large to computer graphics.
Now, what was interesting was that all along, I had lucked into things: I got onto the computer because I was in the right place at the right time; I got into computer graphics in the right place at the right time; and I got to JPL 18 months before the encounter. I got there just in time to gather the data, write all the software, which I still had to do, using equipment I was familiar with, and get the first movie out just in time for the flyby, because this is one of those deadlines that isn't going to go away.
A spacecraft's not going to wait for you. If the movie had been done a month later, it would not have been interesting anymore. Again, all these timing events happened to line up to make this work. It was quite amazing.
But you forced your luck here. I mean, you proposed it to them; nobody was asking for it. Is that a piece of advice you would give students, to really be forthcoming in proposing new things?
Be lucky and be in the right place at the right time. Sure, that worked for me. That's the problem with any sort of advice anybody gives: whatever worked for them is what they'll tell you to do. There's some advice I could give to students that may or may not be relevant; I'll get to that later on, something more general.
But anyway, one of the other interesting aspects of being at JPL was that it was right next to Hollywood, and it was a high-tech location. A lot of Hollywood people came by JPL to get demonstrations of what the new technologies were. We heard that some guy from Paramount was interested in coming by and seeing what we had been doing. Okay, fine. He walks in and says, "Hi, my name's Gene Roddenberry, and I'm making this movie Star Trek, and I'm looking for any interesting videos I can use for console displays."
Unfortunately, we didn't have anything he could use, but I got to meet Gene Roddenberry. George Pal also came by, who had done a lot of early animation. And it turned out that my boss Bob Holzman's next-door neighbor said, "I hear you're doing some interesting animation things. My father was an animator at Disney, and he might like to come by and see what you're doing." That was Ward Kimball. So, I gave him a demonstration, and he got really excited about it.
I knew of Ward Kimball at the time because he was one of the main Disney animators, and I knew he was interested in model railroading: one of the Disney TV programs had talked about model railroading and how Walt was interested in it.
After I gave him the demonstration, he invited me to his house to show off his model trains and his full-scale railroad. I got to go and hang out with the stuff I'd actually seen on TV as a kid. We had lunch, and he said, "Yeah, when I directed Man in Space..." I hadn't realized this; I was thinking to myself, "Oh my God, he directed Man in Space. That's the program that actually got me into doing animation for science visualization." I didn't know that beforehand; you couldn't look people up on Google at the time. "Yes, yes, Man in Space, that was great. That got me interested in doing animation." It was another lucky break.
It turns out that Charlie Kohlhase was good buddies with another guy at JPL who happened to be Carl Sagan's business partner in his venture to do TV on the local PBS station.
They hooked us up, and so we got a contract to do a lot of computer graphics for Cosmos; that's how we connected with that. Most of it was black-and-white line drawings, just diagrammatic things, but there were a few color shaded-image pieces, which were a lot harder to make. The big one was the DNA replication scene, which was one of the hardest things we did at the time, but I pulled it off just in time to show in Cosmos.
While I was doing this, there was another college in Pasadena called the Art Center College of Design. This was before computers were used in design, but I gave a talk there about what computer graphics could do, and I was invited to come back and see if I could create a course for them. They didn't have any equipment at the time, but we were able to get the Atari Corporation to donate three Atari 800 game computers.
A buddy of mine and I stayed up nights beforehand, hacking together some software in BASIC that we could have the students use. It was incredibly crude, but it gave people the flavor of it and got them started. We ultimately graduated to some PC clones that could do graphics better. We had several years' worth of courses at the Art Center College of Design, and meanwhile I was picking up little tidbits of how to do graphic design.
One of the funny things was that the students realized they were doing something really new, though not at as high a resolution as they might like; one of them compared it to trying to do scrimshaw with a chainsaw. But anyway, ultimately we got that going. Then the next big project was called The Mechanical Universe.
At the time, this was all done on a PDP-11, a 16-bit machine: everything in basically 64K of memory, with an eight-bit frame buffer and limited image quality. I didn't expect we would be able to improve on it, and I planned on leaving JPL to join the people at Pixar because they had better equipment, but the head of JPL said, "Wow, we don't want to lose you. What do you need?" And I said, "Well, I need a better computer and a higher-resolution frame buffer." He bought me a VAX, a full-color frame buffer, and a video recording facility.
The Mechanical Universe became practical. It was funded by the Annenberg Project; Walter Annenberg had made a lot of money off of TV Guide. He had been the ambassador to Britain, at the time of the Nixon administration, I guess, and he saw something there called The Open University and thought we should do something like that back in the United States.
He funded a lot of projects to do telecourses, Caltech's being physics. So I worked with Professor David Goodstein at Caltech to do The Mechanical Universe, and that became 52 half-hour programs, largely live-action lectures and demonstrations.
Once I had the machinery going, I was able to do an incredible amount of animation, using all my undergraduate physics background and computer graphics. There were something like 500 different animation scenes throughout the series, a total of eight hours of animation, plus simple diagrams. I did a lot of something I called algebraic ballet: whenever we had algebra to do, I had the equations dancing around, with terms jumping over equal signs and canceling out, and so forth. That has become a standard.
You look at YouTube videos in mathematics nowadays, I'm not sure if they saw my stuff or if it's just the obvious way of doing it, but also then a lot of little similar simulations of thermodynamics, molecules bouncing around and objects spinning to show angular momentum.
We had a really nice series on special and general relativity, showing space-time diagrams and so forth. That is one of my favorite things that I did: using my knowledge of physics and math to do that.
I saw The Mechanical Universe project; it's on YouTube, and it's actually fascinating to watch. I showed it to my 12-year-old kid as well. I mean, for the time, it was groundbreaking.
How do you explain that, with today's technology and much-improved computer graphics, there is not a set of reference videos to explain the basics of physics or thermodynamics?
You look on YouTube, and there are dozens of fabulous videos on mathematics and physics. I don't have time to watch them all, but I'm fascinated by them. There's lots of stuff, and there is something I've seen advertised, which I haven't looked into, of interactive online coursework, something called Brilliant, which looks pretty cool. Interactive math and physics videos and so forth.
Whether I inspired that personally or not, I don't know. I can hope I did. But when the tools are available, people will pop up to use them, and mathematicians all over the world are generating these things; that's what I spend a lot of my free time looking at, math videos on YouTube.
It's interesting because there's quite a wide range of technology. A lot of them have very nice, sophisticated animation, but a lot of them are people basically aiming a camera at a piece of paper and writing on it with a pen. Some sound like they're from India or somewhere; they might not have the fancy technology, but they're still, to me anyway, fascinating lessons to look at. YouTube makes it much easier to scroll back and forth, make it slow or fast, and see things at the right time. So, YouTube is a great way of seeing that.
That's my experience and my career. I've done a lot of other writing in math and so forth and am still doing some mathematical tinkering, but that's a precis of what I did. You're interested, in this podcast, in the metaverse, and my experience with that has been indirect.
I've seen a lot of demos at SIGGRAPH: you strap on the headset and look around, and it's cool, which is interesting. It's one of those things where, if you look at the development of technology, you start out with black-and-white movies, and they were cool. Then along comes this new technology called color; wow, everybody jumped on the bandwagon. Color is cool. So, color movies went nuts, and everybody's in color now except a few specialties.
Then 3D movies came out; 3D was cool, but then it wasn't cool anymore. Every 20 or 30 years, 3D has a comeback, and then it fades away. I love 3D movies myself; I go to them as often as I can, but for some reason putting on the glasses doesn't seem to work for a lot of people. It works for me.
You wonder if the metaverse is going to be something like that. People have been doing the metaverse for a while. If any of you went to the SIGGRAPH conference this last year, the 50th anniversary of SIGGRAPH, it was just a fabulous event, and the incredible Bonnie Mitchell put together this fabulous historical retrospective of hardware and software and whatnot.
But if you looked, there was an entire room full of 3D display goggles, dozens of them; people have been doing this since Ivan Sutherland did it in 1968 at Harvard. The metaverse has this kind of barrier to jump over, and I'm not sure what that will take, but various companies are making these goggles cheaper and faster and with more resolution. The question is: does this give you an experience enough better than just looking at a flat screen or YouTube to make it worth the effort? I hope it does, and I'm sure there'll be a bunch of people who take it up. There are a lot of TV shows about this now: a program called Upload, I don't know if you've seen that, and Ready Player One, and so forth. It's become a staple of what the future of computer visuals is going to be like.
Well, VR is certainly a part of the metaverse. It's the whole internet and embracing real-time 3D and interactivity, which is more the commonly accepted definition of the metaverse; immersivity is one aspect of it, but I would say interactivity and the fact that everything can become social and multi-user.
It's interesting watching the development of that, and what catches on and what doesn't. There have been things that are just looking at a computer screen with group chat and group activities that have been coming and going too. That's something I don't have a lot of experience with, but it'll be interesting to see what becomes of it.
One of the things that I realized is that I'm absolutely terrible at making predictions about the future. I learned when I was at Michigan… I had this computer that I inherited through everybody else graduating and leaving, but I never expected it to be upgraded.
My mindset was: let's milk this with all it's worth. Let's expect that we're not going to get a new graphics display. I'm just going to play with this as best I can.
Likewise, when I was at JPL for four or five years, we used the same equipment, and I never really thought about getting a better thing until finally, in the end, Bruce Murray, the head of JPL, bought me a VAX. And so my viewpoint on this thing is more, what are the best images we can make with the hardware we've got?
I know that's different from Catmull's and Pixar's approach, where they said, "Here's what we want to make, and we're going to wait for the technology to mature to the point where we can get it." My viewpoint was different: "Here's what we've got now. Let's see what we can do with it."
I want to disagree with you about your ability to predict the future, because I went back to your 1998 SIGGRAPH keynote for the 25th anniversary. We can review that. Let's review your scorecard live.
You predicted that CG would replace Hollywood backlots. Well, that's done; I mean, it's visual effects. The timing, we'll come to that in a minute. You said theme parks would become virtual. Well, it's happening; it's not fully there. And this was 1998.
I mean, almost nobody had cell phones in 1998. It's easy to look back; it was incredibly prescient to make those predictions. You also said that SIGGRAPH 2003 would be held in virtual reality. That one you missed.
No, I was off by 20 years; I think SIGGRAPH 2020 or 2021 was held in virtual reality. There's this phenomenon with predictions where you're basically over-optimistic in the short term and pessimistic in the long term.
When I was at Michigan, I remember seeing something Alan Kay did. He predicted, back in 1968, that someday you'd be able to carry around a computer the size of a notebook with all the power of a PDP-10. We were thinking, "This guy is nuts."
In 1998, the research people knew about the internet, but the public barely did. You said, "Anything not on the web will be forgotten." This was a crazy prediction, and it turned out to be true. You got three out of four, and one you got off by 20 years, which is ...
But yeah, I must admit that some of those predictions were intended as jokes.
I think it was amazing, and I encourage people to go back to your 1998 keynote. I think it was a lot of fun, and it helps measure what has happened in 25 years. But you also did the 50th-anniversary keynote.
The transcript of it was online on the SIGGRAPH site; it's disappeared, but it's on archive.org or somewhere. I've got a link to it if anybody's interested. It was a good talk; I hope it comes back. I've actually got a video of it, but the cassette's damaged, and I'm not sure if I'll be able to digitize it or not.
But you can find the text on the slides. It's super easy.
Now that we've established that, let's talk about AI. What do you think? How do you approach the craziness around generative AI and AI for creators?
Well, I could either say we're completely doomed and we're going to be answering to our robot overlords any day now, or not. It's hard to say. It sounds impressive at first, but then you dig into it, and a lot of what the AIs tell you is junk. But people who are doing images with AI can basically tell it what they want a picture of. My brother has been doing a lot of that. He's been working on some books on languages and illustrating them with AI-generated images, and they're all stunning.
But with something like that, if the picture's not what you want, you could say, "Well, make it a little greener, a little bit different," and so forth, until you get it.
A lot of the AI textual things are, in the first place, remarkable. I saw something that a friend of mine did, Andrew Glassner, who's done some research in AI and quantum computing.
He said, "Write me a description of what a Hadamard gate does in the form of a Shakespearean sonnet." It was just astonishing. It was a Shakespearean sonnet that described superposition and so forth. It's like, that's all you told it? And he said, "Yeah, that's all I told it, and it came up with a Shakespearean sonnet."
But then there are also things where people ask it, "Write me some Python code to do this and that." So, it writes the Python code, but then the code tries to import a whole bunch of libraries that don't exist.
What's surprising to me about AI is the research is mostly driven by corporations.
You were at Microsoft Research for many, many years, with a huge range of graphics luminaries, but everything you did in the early days was at universities.
Do you think this is a trend, that universities are not the place where research is happening anymore?
I am much more of an academic at heart than an industrialist. I hope universities continue to make their mark.
I think if you look at a lot of the SIGGRAPH papers, a lot of it is still from universities, but there are some corporations like Pixar that intentionally publish everything they do to improve the field, whereas other industries maybe don't so much, which I think is shortsighted. But industries are interested in patenting things.
Jim, you mentioned that a lot of your early work was things you just thought were cool, that you wanted to work on, and then your research resulted in direct impact and techniques that we all use in the field today, but it also went on to inspire so many other people who then created these very influential techniques.
I mean, would you have advice for researchers today in order to try to create impact?
Researchers today have a different situation than mine. Mine was, there were very few people doing it. Anything you did was new. Nowadays, it's hard to find something that hasn't been done already.
Researchers have to spend as much time looking through the literature to see if their great idea has already been done. A lot of it's going to be incremental. There are some things that I wish I had done differently, which may or may not apply to people right now.
I wish I had documented my stuff more. There are a lot of interesting programs I wrote that I wish I had taken pictures of or videoed, and on the documents I wrote, I wish I'd put dates on things. I don't remember exactly when I did this or don't remember exactly when I did that.
Nowadays, that advice might not be as relevant because anything you do, you save a picture to a file, and there's a creation date on it. Documenting something like that is just a screen capture. Back then, there was no way to document a real-time thing you did without getting a film camera running at the screen.
There was a really neat film of me demonstrating the 3D circuit drawing program that I worked on at Michigan. After all the graduate students left, they still wanted to have a demonstration of it to show at conferences, so they actually brought in a film camera and filmed me demonstrating how the thing worked, drawing a diagram. Nobody knows where that film is now. It's disappeared, which is kind of a shame. But nowadays, you document something, you save it on your 60-terabyte drive, and you can keep copies of it all. It's just a matter of being aware of documenting your stuff and keeping track of it.
But generally, the thing that I did, both at Utah and later, is I would make a picture, look at the screen, and say, "What don't I like about this picture?" Well, things are too smooth; I'd like them to look more wrinkly. Let's see if I can figure out how to do that. Or, from a design point of view, when I was doing some animations for The Mechanical Universe, I'd put something up on the screen and think, "What's important about this that I want people to see?"
One of the things about animation is directing people's attention to the part of the screen you want them to pay attention to. I'd do things like put a picture up on the screen and look away, then look at the screen, close my eyes, and say, "What's the first thing I saw?" Is that the important thing or not?
For example, one time I had some equations on the screen, and behind them a nice multicolored background that looked really pretty, like a little sunset. I looked at that and said, "What did I see? I saw a sunset. The equation? I didn't see the equation." So, I put a more boring background on the screen, and the equation became the more obvious thing to see. The basic principle is making the foreground more interesting than the background, in a sense.
Those are things that I did to try to improve the images that I made; a lot of what I did was incremental improvement. The first Flyby pictures that I made were incredibly terrible looking, but I just got something on the screen and said, "What do I hate about this?" Well, the spacecraft has to have enough detail. Let's put more detail into the spacecraft. Now, what don't I like about this? Well, the map on Jupiter doesn't look right. Incremental improvements: what's the next incremental improvement I can make to this image, until either the deadline comes or I can't think of anything else I want to do.
How many SIGGRAPHs did you attend?
Probably in the neighborhood of 40 of them.
I went to all of them up till about 2009 and then a little sporadically since, because I have other demands on my time these days. One of the things about SIGGRAPH that's fun is the playful nature of it and the whole thing about ribbons.
For people who haven't seen the joke, basically, you get a ribbon showing whether you're a speaker or a member of the committee and so forth. People started making fake ribbons with jokes on them and tacking them on and handing them out, which I bought into, and I still have them all.
I would donate those to Bonnie Mitchell to put in the SIGGRAPH archives.
SIGGRAPH has been so influential to this field and to so many of us.
Moving forward, do you have ideas or ways that you think SIGGRAPH can continue to contribute in a big way?
Well, SIGGRAPH has branched out a lot, so there are lots of different things that it does, which is a good thing and necessary for it to do, I think.
There are multiple sessions going on: art shows versus technical things, film shows, equipment exhibits, and so forth. In that regard, I think it's going in a good direction, although it becomes overwhelming in terms of deciding what you're going to see. A lot of it's recorded, so you can play it back later.
I watched a couple of the sessions that I missed, and I have recordings, but I'm glad there are so many people with the energy to do this sort of thing.
It's not something I'd be good at myself, but I'm forever grateful to all the people on the SIGGRAPH committee for putting on this incredible show and fun experience for all of us that I can participate in.
Well, that wraps up our fascinating talk with computer graphics legend Jim Blinn. We covered highlights of your career, from the early days of SIGGRAPH through the future of CG.
Jim, you have an incredible ability to make complex topics understandable and entertaining. We love your humor and storytelling talent. Even though you downplayed it, you played a key role in making CG what it is today. As CG becomes real-time and more lifelike, and as it invades the internet to form the metaverse, we should not forget pioneers like you who turned an academic curiosity into an industry and an art form; your creativity and persistence paved the way.
A huge thank you to you, Jim Blinn, for being with us today.
No, thank you very much for all those kind comments.
Thank you as well to our ever-growing audience.
You can find us on our LinkedIn page, of course, on major podcast platforms and on YouTube, as well as on our own buildingtheopenmetaverse.org website.
Thanks to you all. Thank you, Patrick, again, and thank you, Jim. We'll be back soon for another episode of Building the Open Metaverse.