What a great eclipse! We made makeshift safety goggles out of the Mylar I’d bought, some cardboard, and whatever was kicking around the house. The goggles worked like a charm. For shadow projections, a simple kitchen colander turned out to be the perfect device.
Reminder for next eclipse: must shoot some timelapse of those crazy shadows.
Getting ready for the solar eclipse… I bought some 0.2mm silver Mylar, which is practically opaque, but lets a tiny bit of sunlight through (like, about 1/160,000th by my seat-of-the-pants estimate). One layer of it makes the sun seem about as bright as the moon. Two layers, and you can’t even see the sun at all.
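For the curious, the arithmetic roughly checks out. Using textbook apparent magnitudes for the sun and full moon (my numbers, not anything measured through the Mylar itself), a quick sketch in Python:

```python
import math

SUN_MAG, MOON_MAG = -26.74, -12.74   # apparent magnitudes (standard textbook values)
T_ONE_LAYER = 1 / 160_000            # my seat-of-the-pants transmission estimate

# Brightness ratio of sun to full moon: each magnitude step is a factor of 100**(1/5).
sun_over_moon = 100 ** ((MOON_MAG - SUN_MAG) / 5)   # roughly 400,000

# Filtered sun compared to the unfiltered moon, for one and two layers.
one_layer = sun_over_moon * T_ONE_LAYER       # ~2.5x the moon: "about as bright"
two_layers = sun_over_moon * T_ONE_LAYER**2   # a tiny fraction of the moon: invisible

# Photographers would call one layer about 17 stops of attenuation.
stops = math.log2(1 / T_ONE_LAYER)
```

So one layer leaves the sun in the same ballpark as the moon, and two layers bury it completely, which matches what I saw. (This says nothing about UV, of course.)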
THIS MAY NOT ACTUALLY BE SAFE AT ALL. I don’t honestly know, because I have no way of knowing how much UV light gets through this stuff. But I did use Mylar to look at the last solar eclipse several years ago, and I haven’t gone blind yet, so there’s anecdotal evidence at least.
Here’s a good article explaining when and where you can see the eclipse. If you’re in the San Francisco Bay Area, it should start on Sunday afternoon (May 20th) around 5:30pm, and reach maximum occlusion (about 90%) by 6:30pm.
Drew’s Fourth Dimension app has such a brilliant logo, I just had to have it on a T-shirt. Now you can have one too! We’re picky about our T-shirts, so we made this one exactly what we’d want to wear ourselves. It’s printed in a four-color process with a soft white ink underlayer, which gives a good balance of image quality and detail, while still looking and feeling good on a black shirt. There’s no text or other clutter, just the 4D glasses image, which speaks for itself. Enjoy!
Certified math nut Sondra Eklund has made an awesome sweater. Before you read her explanation, take a look at it and see if you can figure out how it works.
Of course, the synesthete in me says her color choices are all wrong. But she’s coloring a different animal here (primes, not numerals) so there’s no real way to be “right”. Regardless, I want this as a T-shirt!
Sometimes, when you’ve worked on one shot for too long, you can go a bit blind. It’s a very specific kind of blindness, one that prevents you from seeing mistakes you’ve made and opportunities you may have missed. It seems to happen to every animator at some point, and it is deadly to the creative process.
There are tricks you can use to get around it. You can hold a mirror up to the screen to see the shot from a fresh point of view. You can step away for a few minutes, or a few days, or longer. (One time I was able to step away from a shot for a full year: boy did I see it with fresh eyes at that point!) And of course you can show other people. But once the effect has set in, your own perceptive powers are severely diminished.
I have a hunch that the main cause of this is the simple act of watching your shot over and over again. In psychology there’s a concept called habituation: any stimulus repeated long enough reduces your sensitivity to that stimulus. It’s the reason why you notice the sound of the dishwasher when you first walk into the kitchen, but eventually it fades into the background.
In the early days of animation, watching a shot in progress play back at full speed was a luxury animators didn’t have. To do that required shooting key drawings onto film, developing the film, threading it into a projector, and so on: an expensive and time-consuming process. And yet, great animation still got made: animators planned very carefully and learned to do most of the work in their heads. In the early days of computer animation, it was much the same, though for different reasons: we’d have to wait for an overnight render to see our work at speed.
Nowadays, animators have digital tools that allow for instant, real-time feedback, which for the most part is a tremendous aid. But it also makes it very easy to hypnotize yourself with all those looping stimuli.
If you want to stay sharp, it’s critical that you delay the onset of animation hypnosis as long as you possibly can. So what I try to do is avoid watching my shot while I’m working on it. I’ll watch it once, twice, maybe three times, and then jot down my thoughts about what needs doing. Then I go to work, keeping narrowly focused on each detail as I go. If I have to play some part of the shot at speed to judge some nuance of weight or gesture, I’ll hide everything but the body part I’m working on, so as not to get distracted. Once I’ve addressed all of my notes, only then will I watch the shot as a whole again. It takes a kind of discipline that I can’t always muster. But when I succeed, it feels great. And as a side benefit, I find I get more real work done in less time: after all, time spent looking at your shot is not time spent working on it.
I’d love to have some way of counting how many times I’ve looped my shot, to see if there’s a certain magic number where hypnosis sets in. What would that number be? 500? 5,000? You could make a game of it, à la “Name That Tune”: challenge yourself to finish a shot with the absolute minimum of viewings. How low could you go? Ten loops? Three loops? Zero?
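If I ever built that counter, it could be as simple as a little wrapper around the playback call. A toy sketch (entirely hypothetical, and the `play_fn` hook is my invention, not any real animation-package API):

```python
class LoopCounter:
    """Toy viewing counter: wrap your playback call and tally full-shot loops."""

    def __init__(self, budget=10):
        self.budget = budget   # your self-imposed "Name That Tune" limit
        self.loops = 0

    def play(self, play_fn=None):
        """Count one viewing; refuse to play once the budget is spent."""
        if self.loops >= self.budget:
            raise RuntimeError("Budget spent: go work on the shot, not watch it.")
        self.loops += 1
        if play_fn is not None:
            play_fn()          # hand off to whatever actually plays the shot
        return self.budget - self.loops   # viewings remaining

counter = LoopCounter(budget=3)
counter.play()   # two viewings left
counter.play()   # one viewing left
```

The interesting part wouldn’t be the code, it would be the data: where does your own hypnosis threshold sit?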
My friends at Stamen keep doing wonderful new things to maps. Today’s innovation: a globe’s worth of map tiles rendered automatically in the style of a watercolor painting.
This is the kind of project I could only dream of, back when I was doing this kind of research. So much has happened since then! Stamen’s tiles combine scanned watercolor textures with vector map data from Open Street Maps, and various image-based filters for effects like edge darkening. The results are organic, seamless and beautiful. Even though I know the textures will repeat themselves eventually, I can’t help scrolling out into the ocean, just to see what’s there.
Update: Zach has posted a really nice step-by-step breakdown of how they did this. (Gaussian blurs and Perlin noise are your friends here.) I particularly love the way they use small multiples and animated gifs to explore the parameter space:
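To get a feel for the ingredients Zach describes, here’s a toy sketch, emphatically not Stamen’s code: a hard-edged region mask (think rasterized OpenStreetMap polygon) gets perturbed by noise, blurred, re-thresholded, and then edge-darkened. Cheap value noise stands in for Perlin noise, and every name and parameter here is my own invention:

```python
import numpy as np

rng = np.random.default_rng(0)

def blur(img, passes=4):
    """Cheap separable blur: repeated 3-tap box filters approximate a Gaussian."""
    k = np.array([0.25, 0.5, 0.25])
    for _ in range(passes):
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return img

def value_noise(shape, scale=8):
    """Stand-in for Perlin noise: a coarse random grid, bilinearly upsampled."""
    coarse = rng.random((shape[0] // scale + 2, shape[1] // scale + 2))
    ys = np.linspace(0, coarse.shape[0] - 2, shape[0])
    xs = np.linspace(0, coarse.shape[1] - 2, shape[1])
    y0, x0 = ys.astype(int), xs.astype(int)
    fy, fx = ys - y0, xs - x0
    top = coarse[y0][:, x0] * (1 - fx) + coarse[y0][:, x0 + 1] * fx
    bot = coarse[y0 + 1][:, x0] * (1 - fx) + coarse[y0 + 1][:, x0 + 1] * fx
    return top * (1 - fy)[:, None] + bot * fy[:, None]

# A hard-edged "land" mask: a filled circle in a 64x64 tile.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(float)

# Perturb with noise, blur, re-threshold: the crisp vector edge turns wobbly.
wobbly = (blur(mask + 0.6 * (value_noise((h, w)) - 0.5)) > 0.5).astype(float)

# Edge darkening: subtract a blurred copy to find the rim, then deepen the
# pigment there, the way watercolor pools at a region's boundary.
interior = blur(wobbly, passes=6)
edge = np.clip(wobbly - interior, 0, 1)
tone = np.clip(wobbly * 0.55 + edge * 0.45, 0, 1)   # darker rim, lighter wash
```

The few knobs here (noise amplitude, blur passes, edge weight) are exactly the kind of parameter space those small multiples explore.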
Evan Shinners is a classical pianist who plays Bach with a style best described as “balls out”. He’s also a synesthete. In this video, he’s visualized his colored hearing for the rest of us, and the results are stunning! (via Boing Boing.)
If you look closely at my friend Drew Olbrich, you might notice something strange about him. He doesn’t like to talk about it, but: the part of Drew you can see is just an infinitesimal slice of a much higher-dimensional being.
To give the rest of us a sense of what it’s like to be him, Drew’s written a nifty little app (for your iPhone or iPad) called The Fourth Dimension. Give it a spin, and see if it doesn’t thicken your mind up just a little bit.
Following up on yesterday’s post about higher frame rates in movies, there’s another question looming. If high frame rates catch on industry-wide, what will it mean for animators?
We won’t really know for sure until the movies start coming out. But we can guess. There are televisions on the market that will play movies at 120fps regardless of how the movie was shot. They do this by creating the inbetween frames automatically, in real time. (How exactly it’s done, I’m not sure, but it’s probably some sort of optical flow technique, like Twixtor.) When you see this done to a live action movie shot at 24fps, the effect is impressive: movement really does feel incredibly smooth, and the strobing/juddering problem is minimal. But if you watch an animated movie on one of these TVs, the results are not good. Timing that felt snappy at 24fps feels mushy at 120. Eyes look bizarre during blinks. And don’t get me started on smear frames.
Of course, this is just a machine trying its best to interpolate frames according to some fixed set of rules. Animators will be able to make more intelligent choices; that is, after all, our job. But that’s where it gets interesting. How many frames should a blink take at 120fps? What’s the upper limit on how snappy a move can be, if you can potentially get from one pose to another in a mere 8 milliseconds? It could open up new creative possibilities too. Take staggers, for example: at 24fps, if you want something to vibrate really quickly, your only option is to do it “on ones” (that is, alternating single frames). But at 120fps, you could potentially have staggers vibrating on anything from ones to fives. How will those different speeds feel to the audience?
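The stagger arithmetic is easy to sketch (my own back-of-the-envelope, in Python): a stagger “on Ns” alternates poses every N frames, so one full back-and-forth cycle takes 2N frames.

```python
def stagger_hz(fps, on):
    """Vibration rate of a stagger: poses alternate every `on` frames,
    so a full cycle spans 2 * on frames."""
    return fps / (2 * on)

# At 24fps, "on ones" is the fastest possible vibration: 12 cycles per second.
assert stagger_hz(24, 1) == 12.0

# At 120fps the same trick spans a whole range of speeds:
rates = {on: stagger_hz(120, on) for on in range(1, 6)}
# ones=60Hz, twos=30Hz, threes=20Hz, fours=15Hz, fives=12Hz.
# So "on fives" at 120fps matches "on ones" at 24fps exactly,
# and everything above it is new territory.
```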
One thing seems pretty certain: animating at 120fps would be a lot more work. For animators who agonize over every frame, it will mean five times the agony. It will certainly mean more reliance on computer assistance: more spline interpolation, fewer hand-crafted inbetweens, and forget about hand-drawing every frame! I look forward to hearing animators’ stories from the trenches on The Hobbit. Will they find 48fps twice as hard, or more, or less? What tricks will they have to invent to make their job manageable?
There’s been some discussion brewing among certain filmmakers about the impact of making movies that play faster than the current standard of 24 frames per second. Peter Jackson is shooting The Hobbit at 48fps, and others are reportedly experimenting with rates like 60 or even 120.
Mixed into the discussion are some really deep misconceptions about how vision and perception actually work. People are saying things like “the human eye perceives at 60fps”. This is simply not true. You can’t quantify the “frame rate” of the human eye, because perception doesn’t work that way. It’s not discrete, it’s continuous. There is, literally, no frame rate that is fast enough to match our experience of reality. All we can do, in frame-by-frame media, is to try to get closer.
The problem is that our eyes, unlike cameras, don’t stay put. They’re active, not passive. They move around in direct response to what they are seeing. If you watch an object moving past you, your eyes will rotate smoothly to match the speed of the thing you’re looking at. If what you’re looking at is real, you will perceive a sharp, clear image of that thing. But if it’s a movie made of a series of discrete frames, you will perceive a stuttering, ghosted mess. This is because, while your eyes move smoothly, each frame of what you’re watching is standing still, leaving a blurry streak across your retina as your eyes move past it, which is then replaced by another blurry streak in a slightly different spot, and so on. This vibrating effect is known as “strobing” or “judder”.
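You can put rough numbers on this with a toy model (mine, not from any vision-science text): while the eye tracks smoothly at some speed, each displayed frame sits still for its entire hold time, so the smear across the retina per frame is just eye speed divided by frame rate.

```python
def retinal_smear_deg(eye_speed_deg_per_s, fps):
    """Degrees the image slides across the retina during one static frame,
    while the eye tracks smoothly at eye_speed_deg_per_s."""
    return eye_speed_deg_per_s / fps

# A moderate tracking move of 30 degrees per second:
for fps in (24, 48, 60, 120):
    smear = retinal_smear_deg(30.0, fps)
    # 24fps smears 1.25 deg per frame; 120fps only 0.25 deg.
    # Faster frame rates shrink the smear but never eliminate it.
```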
Applying camera-based effects like motion blur only makes the mess look worse. Now, your stuttering, ghosted multiple image becomes a stuttering, ghosted, blurry multiple image. (The emphasis on motion blur is particularly bad in VFX-heavy action movies, which is why I try to sit near the back.)
Filmmakers tend to work around this problem by using the camera itself as a surrogate for our wandering eye: tracking what’s important so that it effectively stays put (and therefore sharp) in screen space. But you can’t track everything at once, and a movie where nothing ever moves would be very dull indeed.
I am pretty sure there is no frame rate fast enough to completely solve this problem. However, the faster the frame rate, the less blurring and strobing you’ll experience as your eyes track moving objects across the screen. So I am extremely curious to see what Jackson’s Hobbit looks like at 48fps.
There’s a second problem here, which is cultural. My entire generation was raised on high-quality feature films at 24 frames per second, and poorer-quality television (soap operas, local news) at 60 fields per second. As a result, we tend to associate the slower frame rate with quality. Commercial and TV directors caught on to this decades ago, and started shooting at 24fps to try to get the “film look”. How will we perceive a movie that’s shot at 48fps? Will it still feel “cheap” to us? And what about the next generation, raised on video games that play at much higher frame rates? What cultural baggage will they bring into the theater with them?
Cassidy Curtis's splendid display of colorful things.