Pearl will have a big presence at SIGGRAPH this year! We’re doing our making-of presentation in a Production Session on Sunday, July 24th from 10:45-12:15, and showing it on the Vive in the VR Village all day from Sunday through Thursday. Pearl will also be shown at the Appy Hour event on Wednesday, July 27th from 5-7 pm.
I’ll only be there Sunday-Tuesday, but I’m sure looking forward to it!
This just in: next Wednesday, June 1, we’ll be screening “Pearl” in 2D, 360° and VR at an SF-SIGGRAPH event in San Francisco. We’ll also be doing a talk with some behind-the-scenes footage. Seating is limited, so if you’re in the Bay Area and want to attend, sign up now!
Okay, one more update for those of you in New York: we will also be doing the first ever live public demo of Patrick Osborne’s Spotlight Story “Pearl” in full 360° at Tribeca’s Interactive Playground on Saturday, April 16th. We’ll be there all day.
Here’s where you can find tickets to the event. Hope to see you there!
News is finally starting to come out about the project I’ve been working on for the past year at Google Spotlight Stories. It’s an interactive 360° story, directed by Patrick Osborne, called “Pearl”. We’re simultaneously making a 2D film version of the story, which will have its world premiere at the Tribeca Film Festival on Sunday, April 17th at 6pm.
It’s been an amazing experience so far, full of exciting artistic and technical challenges, and it’s opened my mind to some astonishing new things. I’ll post more about it when we’re done, but in the meantime, Cartoon Brew has a nice writeup with some images from the show. And if you’re in New York that weekend, swing by and say hi!
A few days ago the image above started going around the social networks, attributed to “a friend working on AI”. It was apparently a deliberate leak, and now we know where it came from: a research group at Google working on neural networks for image recognition. Their research is worth reading about, and the images are fascinating: Inceptionism.
I have to admit, machine learning makes me jealous. Why does the machine get to do the learning? Why can’t I do the learning? But projects like this make me feel a little better. When the black box opens and we see the writhing mass inside, we get to learn how the machine does the learning. Everyone wins.
And the machines still have some catching up to do. As soon as I saw the amazing gallery of inceptionist images, I recognized the “inceptionist style” from the mysterious grey squirrel. Could a neural network do that?
Here is my first short film. I made this at PDI in 1995, during a gap between commercials. I modeled and rigged the characters, did most of the animation, and developed the wobbly ink-line look.
Each pigeon’s torso was a metaball surface driven by a series of spheres along a spline between the head and the body, which were both separate IK joints (so I could easily get that pigeon-head movement style without counter-animating). The eyes, beak, legs and wings were separate objects, each of which got rendered in its own separate pass. Each layer had its vector outline traced (using a tool originally written for scanning corporate photostats for flying logos!). I processed the curves using a procedural scripting language to give them some physics and personality, and then rendered them as black ink lines of varying thickness (using a tool written by Drew Olbrich). Finally, I ran the rendered lines through some image processing filters to get the edge-darkening effect, and did some iterated stochastic silhouette dilation to add random ink blotches where the lines were thickest. Simple, really! ;-)
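For the curious, here’s a rough sketch of what that last step, the stochastic silhouette dilation, might look like today. The original tools were in-house and I’m reconstructing the idea from memory, so the function, the thickness estimate, and the probability curve below are my own assumptions rather than the 1995 code; it’s only meant to show how blotches can accumulate where the lines are thickest.

```python
import numpy as np
from scipy import ndimage

def stochastic_blotch(ink, iterations=3, seed=0):
    """Toy sketch of iterated stochastic silhouette dilation.

    `ink` is a 2D float array in [0, 1], where 1.0 means full ink
    coverage. Each iteration, background pixels touching the
    silhouette are flipped to ink with a probability that grows with
    the thickness of the neighboring line, so random blotches build
    up where the lines are heaviest. (Hypothetical reconstruction,
    not the original pipeline.)
    """
    rng = np.random.default_rng(seed)
    mask = ink > 0.5                                   # current silhouette
    # Local "thickness": distance from each ink pixel to the background.
    thickness = ndimage.distance_transform_edt(mask)

    for _ in range(iterations):
        # Candidate pixels: background pixels adjacent to the silhouette.
        ring = ndimage.binary_dilation(mask) & ~mask
        # Spread the thickness estimate outward so each candidate
        # knows how heavy the nearby line is.
        nearby = ndimage.grey_dilation(thickness, size=3)
        # Growth probability rises (nonlinearly) with local thickness.
        p = np.clip(nearby / (nearby.max() + 1e-6), 0.0, 1.0) ** 2
        grow = ring & (rng.random(mask.shape) < p)
        mask |= grow
        thickness = ndimage.distance_transform_edt(mask)

    return np.where(mask, 1.0, ink)
```

Feed it an anti-aliased render of the traced lines and composite the result back over the frame, and you get a passable imitation of ink pooling; in the real pipeline this sat alongside the edge-darkening filters mentioned above.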