This past January I had the incredible good fortune to fall sideways into a wonderful graphics research project. How it came about is pure serendipity: I had coffee with my advisor from UW, who’d recently started at Google. He asked if I’d like to test out some experimental sketch-based animation software one of his summer interns was developing. I said sure, thinking I might spend an hour or two… but the software was so much fun to use, I couldn’t stop playing with it. One thing led to another, and now we have a paper in SIGGRAPH Asia 2020!
Have you ever wished you could just jot down a 3D character and animate it super quick, without all that tedious modeling, rigging and keyframing? That’s what Monster Mash is for: casual 3D animation. Here’s how it works:
Next week I’m heading to Europe for a couple of conferences: FMX (Stuttgart), where Jan Pinkava will be giving two talks about our work at Spotlight Stories, and Expressive (Genoa), where I’m presenting a paper about our look development work for Age of Sail: Non-Photorealistic Animation for Immersive Storytelling. I’m beyond excited to meet up with old colleagues and new ones, learn about the latest graphics techniques, mangle two foreign languages, and explore some cities I’ve never been to before. If you’ll be at either of these events, let me know!
One of the highlights of my time at DreamWorks was getting to help an amazing team of animators design their next-generation animation software, Premo. The software we’d been using up to that point, Emo, was originally written at PDI in the 1980s. While a lot of talented engineers had improved it over the years, there was only so much they could build on top of a foundation so old that it predated the GPU! We often dreamed about what the ideal animation tool would be like, if we could somehow start again from scratch.
Incredibly, in 2008, we got the opportunity to do exactly that, and the idea for Premo was born. Rex Grignon led the design effort, and brought on Jason Schleifer, Fred Nilsson, Jason Reisig, Simon Otto, Dave Torres and myself to flesh out the zillions of tiny details that matter so much. We worked closely with engineers Bruce Wilson, Seath Ahrens, Morgwn Mccarty, Brendan Duncan and many others (see this article for the full list!) as they brought their own expertise to bear on all those tiny details, and turned our fluffy wishful ideas into real working code. After years of development, the animators on How to Train Your Dragon 2 got to take Premo for its inaugural flight, and were spoiled forever by the best software any of us had ever seen. We knew we had something very special on our hands, but I wondered: would anyone outside the company ever know about it?
Now, a decade later, the Academy of Motion Picture Arts and Sciences has honored Premo with a Sci-Tech Award!
Here’s a video of the awards ceremony, with Sir Patrick Stewart introducing some of the team:
And here’s a terrific blog post from Nimble Collective (the company that Rex, Jason and Bruce went on to co-found after DreamWorks) about FIFICRD, our shamelessly awkward acronym for our fiercely held beliefs about how great software can and should be:
Okay, one more update for those of you in New York: we will also be doing the first ever live public demo of Patrick Osborne’s Spotlight Story “Pearl” in full 360° at Tribeca’s Interactive Playground on Saturday, April 16th. We’ll be there all day.
Here’s where you can find tickets to the event. Hope to see you there!
A few days ago the image above started going around the social networks, attributed to “a friend working on AI”. It was apparently a deliberate leak, and now we know where it came from: a research group at Google working on neural networks for image recognition. Their research is worth reading about, and the images are fascinating: Inceptionism.
I have to admit, machine learning makes me jealous. Why does the machine get to do the learning? Why can’t I do the learning? But projects like this make me feel a little better. When the black box opens and we see the writhing mass inside, we get to learn how the machine does the learning. Everyone wins.
And the machines still have some catching up to do. As soon as I saw the amazing gallery of inceptionist images, I recognized the “inceptionist style” from the mysterious grey squirrel. Could a neural network do that?
Here’s something different: I have a new job! Today was my first day at Google Spotlight Stories. I’ll be working with some amazing filmmakers and technologists who are busy inventing a new kind of narrative visual storytelling uniquely suited to handheld mobile devices. If that sounds crazy, that’s because it is. It’s my kind of crazy. It’s exactly the kind of wild, inventive, “let’s try this and see what happens” attitude that got me interested in computer graphics in the first place, all those years ago. I couldn’t be more excited.
The video above really does a great job of capturing the delight of experiencing one of these stories for the first time. It’s almost impossible not to grin like a ninny. There’s not much more I can say about it right now, but there’s been some terrific press about the projects they’ve created so far. I’ll share more when I can!
Inspired by this brilliant interactive demo of the perceptually uniform CIE L*a*b* color space, I decided to try an L*a*b* version of my pseudocolor scheme. I don’t find this version as pretty to look at, but it has the advantage that higher values are always mapped to colors that are perceptually brighter than lower values. In other words, if you squint at the image above, the bright and dark regions correspond pretty much exactly to what you’d see if it were greyscale. (For the L*a*b* to RGB conversion, I grabbed pseudocode from this handy page.)
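If you’re curious what that conversion looks like, here’s a minimal sketch in Python of the standard CIE L*a*b* → sRGB path (L*a*b* → XYZ with a D65 white point, then the usual sRGB matrix and gamma curve). The function name and structure are my own illustration, not the pseudocode from that page:

```python
def lab_to_srgb(L, a, b):
    """Convert a CIE L*a*b* color to sRGB, components clamped to [0, 1]."""
    # L*a*b* -> CIE XYZ, using the D65 reference white
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0

    def f_inv(t):
        # Inverse of the L*a*b* companding function
        return t ** 3 if t > 6.0 / 29.0 else 3.0 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0)

    X = 0.95047 * f_inv(fx)
    Y = 1.00000 * f_inv(fy)
    Z = 1.08883 * f_inv(fz)

    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

    def gamma(c):
        # sRGB transfer function; out-of-gamut values are simply clamped
        c = min(max(c, 0.0), 1.0)
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

    return (gamma(r), gamma(g), gamma(bl))
```

Sanity check: L*=100 with a*=b*=0 comes out as white, and L*=0 as black, which is exactly the “brighter value means brighter color” property the pseudocolor scheme relies on.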
Cassidy Curtis's splendid display of colorful things.