News is finally starting to come out about the project I’ve been working on for the past year at Google Spotlight Stories. It’s an interactive 360º story, directed by Patrick Osborne, called “Pearl”. We’re simultaneously making a 2D film version of the story, which will have its world premiere at the Tribeca Film Festival on Sunday, April 17th at 6pm.
It’s been an amazing experience so far, full of exciting artistic and technical challenges, and it’s opened my mind to some astonishing new things. I’ll post more about it when we’re done, but in the meantime, Cartoon Brew has a nice writeup with some images from the show. And if you’re in New York that weekend, swing by and say hi!
A few days ago the image above started going around the social networks, attributed to “a friend working on AI”. Apparently it was a deliberate leak, and now we know where it came from: a research group at Google working on neural networks for image recognition. Their research is worth reading about, and the images are fascinating: Inceptionism.
I have to admit, machine learning makes me jealous. Why does the machine get to do the learning? Why can’t I do the learning? But projects like this make me feel a little better. When the black box opens and we see the writhing mass inside, we get to learn how the machine does the learning. Everyone wins.
And the machines still have some catching up to do. As soon as I saw the amazing gallery of inceptionist images, I recognized the “inceptionist style” from the mysterious grey squirrel. Could a neural network do that?
Here is my first short film. I made this at PDI in 1995, during a gap between commercials. I modeled and rigged the characters, did most of the animation, and developed the wobbly ink-line look.
The pigeon’s torso was a metaball surface driven by a series of spheres along a spline between the head and the body, which were both separate IK joints (so I could easily get that pigeon-head movement style without counteranimating). The eyes, beak, legs and wings were separate objects, each of which got rendered in its own separate pass. Each layer had its vector outline traced (using a tool originally written for scanning corporate photostats for flying logos!). I processed the curves using a procedural scripting language to give them some physics and personality, and then rendered them as black ink lines of varying thickness (using a tool written by Drew Olbrich). Finally, I ran the rendered lines through some image-processing filters to get the edge-darkening effect, and did some iterated stochastic silhouette dilation to add random ink blotches where the lines were thickest. Simple, really! ;-)
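Those original tools are long gone, but the varying-thickness idea is easy to sketch. Here’s a toy version in Python; the function name, the wobble parameter, and the sine-envelope taper are all my own inventions for illustration, not the actual 1995 code:

```python
import math
import random

def inked_stroke(points, seed=0, wobble=0.15, max_width=4.0):
    """Toy procedural ink line: jitter a polyline slightly (a stand-in
    for the curve "physics and personality") and assign a thickness
    that peaks mid-stroke and tapers to zero at both ends, like a pen
    lifting off the page. All names/parameters here are hypothetical."""
    rng = random.Random(seed)
    n = len(points)
    out = []
    for i, (x, y) in enumerate(points):
        t = i / (n - 1) if n > 1 else 0.0
        # Sine envelope: zero width at the endpoints, max_width at the middle.
        width = max_width * math.sin(math.pi * t)
        # A small random offset stands in for the procedural wobble.
        jx = x + rng.uniform(-wobble, wobble)
        jy = y + rng.uniform(-wobble, wobble)
        out.append((jx, jy, width))
    return out

# A straight horizontal stroke of 11 points, now wobbly and tapered.
stroke = inked_stroke([(float(i), 0.0) for i in range(11)], seed=42)
```

A real renderer would then sweep this width along the jittered curve to produce the filled black outline.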
Why do coffee stains always have a dark ring around the edge? It’s because the drop’s edge stays pinned and its surface is curved: water evaporates more quickly near the edge, causing an outward flow from the middle that carries coffee particles with it. We cited some early research demonstrating this effect in our 1997 watercolor paper, but now there’s video that actually shows the process happening at a microscopic scale. (Thanks Eric for the link!)
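The outward-flow argument is simple enough to caricature in a few lines. This toy 1D model is my own construction (not the cited paper’s): particles sit at some radius in a pinned drop, and an evaporation-driven replacement flow pushes them outward until they pile up at the rim:

```python
import random

def coffee_ring(n_particles=2000, steps=200, dt=0.01, seed=1):
    """Toy 1D coffee-ring model (a caricature, not the real physics).
    Each particle has a radius r in [0, 1]; the pinned rim loses water
    fastest, so an outward flow advects particles toward r = 1, where
    they stop (deposited at the edge)."""
    rng = random.Random(seed)
    radii = [rng.random() for _ in range(n_particles)]
    for _ in range(steps):
        # Outward advection, clamped at the pinned edge r = 1.
        radii = [min(1.0, r + dt * r) for r in radii]
    return radii

radii = coffee_ring()
edge = sum(1 for r in radii if r > 0.9)    # particles deposited near the rim
center = sum(1 for r in radii if r < 0.1)  # particles left near the middle
```

After a couple hundred steps almost everything ends up at the rim, which is exactly the dark ring you see in the mug.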
My friends Dan Wexler and Gilles Dezeustre have just released a new app for your iPhone/iPad. It’s called Glaze. It’s a painterly rendering filter for your photos, and it’s really cool. It’s based on the traditional brushstroke-based model pioneered by folks like Paul Haeberli, Pete Litwinowicz and Aaron Hertzmann, but it adds some neat new twists: face detection to guarantee that eyes and other important features come out with the right amount of detail; a genetic algorithm for mutating and doing artificial selection on painting styles; and a really slick iOS interface that makes all of the above completely effortless and transparent. It also runs blazingly fast, considering all the work that must be going on under the hood. They’ll be giving a talk about the details at SIGGRAPH next week. (Here’s an abstract of their talk… wish I could go!)
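I don’t know Glaze’s internals, but the mutate-and-select loop they describe is easy to imagine. Here’s a hypothetical sketch with invented style parameters (Glaze’s real ones are surely different): each generation, the app offers the user a handful of mutated variants of the current style, and whichever one they pick becomes the next parent:

```python
import random

# Hypothetical style parameters and ranges, invented for illustration.
BOUNDS = {"stroke_length": (2.0, 40.0),
          "stroke_width": (1.0, 12.0),
          "color_jitter": (0.0, 1.0)}

def mutate(style, rate=0.2, rng=random):
    """Return a child style: nudge each parameter by a random fraction
    of its range, then clamp it back into bounds."""
    child = {}
    for key, value in style.items():
        lo, hi = BOUNDS[key]
        value += rng.uniform(-rate, rate) * (hi - lo)
        child[key] = min(hi, max(lo, value))
    return child

def breed(parent, n_children=6, rng=random):
    """Artificial selection: produce n mutated variants for the user
    to choose among; the chosen one becomes the next parent."""
    return [mutate(parent, rng=rng) for _ in range(n_children)]

parent = {"stroke_length": 10.0, "stroke_width": 4.0, "color_jitter": 0.3}
children = breed(parent, rng=random.Random(7))
```

The nice thing about this scheme is that the user never sees a parameter: they just tap the painting they like best, and the style evolves under their thumb.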
What I find the app does best, so far, is to turn my garbage photos into beautiful art. This, for example, is a picture my thumb took by accident as I was putting away my phone. The original photo was blurry, out of focus, and weirdly composed. But the painting’s handmade feeling makes your eye linger on the details, and the results are just lovely. (Everyone’s Instagram is about to get a lot prettier!)
My friends at Stamen keep doing wonderful new things to maps. Today’s innovation: a globe’s worth of map tiles rendered automatically in the style of a watercolor painting.
This is the kind of project I could only dream of, back when I was doing this kind of research. So much has happened since then! Stamen’s tiles combine scanned watercolor textures with vector map data from OpenStreetMap, and various image-based filters for effects like edge darkening. The results are organic, seamless and beautiful. Even though I know the textures will repeat themselves eventually, I can’t help scrolling out into the ocean, just to see what’s there.
Update: Zach has posted a really nice step-by-step breakdown of how they did this. (Gaussian blurs and Perlin noise are your friends here.) I particularly love the way they use small multiples and animated gifs to explore the parameter space:
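In the spirit of Zach’s breakdown, here’s a tiny 1D sketch of the edge-darkening trick. It’s my own reconstruction, not Stamen’s code: blur a region mask, and wherever the original mask exceeds its blurred version you’re just inside an edge, so deposit extra pigment there:

```python
def blur1d(mask, passes=2):
    """Repeated [0.25, 0.5, 0.25] blur (a cheap Gaussian stand-in),
    with clamped edges."""
    for _ in range(passes):
        padded = [mask[0]] + mask + [mask[-1]]
        mask = [0.25 * padded[i] + 0.5 * padded[i + 1] + 0.25 * padded[i + 2]
                for i in range(len(mask))]
    return mask

def edge_darken(mask, strength=1.5):
    """Boost pigment where the mask exceeds its blurred self,
    i.e. just inside region boundaries; leave the interior alone."""
    blurred = blur1d(mask)
    return [m * (1.0 + strength * max(0.0, m - b))
            for m, b in zip(mask, blurred)]

# One row of pixels: water outside (0), a land region inside (1).
row = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
darkened = edge_darken(row)
```

In 2D the same idea, with real Gaussian blurs plus some Perlin noise to roughen the boundary, gets you a long way toward that wet-edge watercolor look.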
Check out these screenshots of this new game being created by one lone developer, Eskil Steenberg. The idea of an auteur-driven videogame is already pretty inspiring, but the painterly visuals here are especially exciting to a guy like me. I’m not a gamer myself (RSI and gaming don’t mix) but I get a lot of joy out of watching over people’s shoulders. I can’t wait to see how this plays in motion. I’m sure there will be shower-door effects and crawling texture artifacts to contend with, but if he’s clever, he’ll find a way to rise above all that. And this guy seems to be nothing if not very, very clever. Can’t wait!
What do you call a pocket full of chisel-tip markers? Calligraphic Packing! It’s also the name of a computer graphics research project from the University of Waterloo. My friend Craig Kaplan, a professor there, is a pioneer of “computational calligraphy”, a brand new research area that’s about to grow in some very interesting directions. Craig’s been interested in graffiti for a long time, for a lot of the same reasons I am. We each have our own ways of studying it, and his way is to take it apart, learn what makes it work, and write software that embodies that understanding. This project, led by Craig’s student Jie Xu, is the first step in what I hope will be a long and fruitful quest, as Craig puts it, “to probe the nature of letterforms and legibility”.
Cassidy Curtis's splendid display of colorful things.