Pearl has also been nominated in three categories (Narrative, Mobile, and Original Score) for the Virtual Reality Foundation’s third annual Proto Awards, coming up on October 8th. The nominees all look amazing. I can’t wait to meet them!
Pearl will have a big presence at SIGGRAPH this year! We’re doing our making-of presentation in a Production Session on Sunday, July 24th from 10:45-12:15, and showing it on the Vive in the VR Village all day from Sunday through Thursday. Pearl will also be shown at the Appy Hour event on Wednesday, July 27th from 5-7 pm.
I’ll only be there Sunday-Tuesday, but I’m sure looking forward to it!
This just in: next Wednesday, June 1, we’ll be screening “Pearl” in 2D, 360°, and VR at an SF-SIGGRAPH event in San Francisco. We’ll also be doing a talk with some behind-the-scenes footage. Seating is limited, so if you’re in the Bay Area and want to attend, sign up now!
Okay, one more update for those of you in New York: we will also be doing the first-ever live public demo of Patrick Osborne’s Spotlight Story “Pearl” in full 360° at Tribeca’s Interactive Playground on Saturday, April 16th. We’ll be there all day.
Here’s where you can find tickets to the event. Hope to see you there!
News is finally starting to come out about the project I’ve been working on for the past year at Google Spotlight Stories. It’s an interactive 360° story, directed by Patrick Osborne, called “Pearl”. We’re simultaneously making a 2D film version of the story, which will have its world premiere at the Tribeca Film Festival on Sunday, April 17th at 6pm.
It’s been an amazing experience so far, full of exciting artistic and technical challenges, and it’s opened my mind to some astonishing new things. I’ll post more about it when we’re done, but in the meantime, Cartoon Brew has a nice writeup with some images from the show. And if you’re in New York that weekend, swing by and say hi!
A few days ago the image above started going around the social networks, attributed to “a friend working on AI”. It was apparently a deliberate leak, and now we know where it came from: a research group at Google working on neural networks for image recognition. Their research is worth reading about, and the images are fascinating: Inceptionism.
I have to admit, machine learning makes me jealous. Why does the machine get to do the learning? Why can’t I do the learning? But projects like this make me feel a little better. When the black box opens and we see the writhing mass inside, we get to learn how the machine does the learning. Everyone wins.
And the machines still have some catching up to do. As soon as I saw the amazing gallery of inceptionist images, I recognized the “inceptionist style” from the mysterious grey squirrel. Could a neural network do that?
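The trick behind those inceptionist images, as I understand it, is running the learning machinery in reverse: instead of adjusting the network’s weights to fit an image, you adjust the image itself to amplify whatever a neuron already responds to. Here’s a toy sketch of that idea, with a single made-up linear “neuron” standing in for a layer of Google’s actual deep network:

```python
# Toy "inceptionism": gradient ascent on the input, not the weights.
# A fixed filter plays the role of one neuron in a trained image-recognition
# network (a hypothetical stand-in, not Google's real model).

w = [0.5, -1.0, 2.0, 0.25, -0.75]   # fixed "learned" filter weights
x = [0.0] * len(w)                  # start from a blank "image"

def activation(img):
    """The neuron's response to an input image."""
    return sum(wi * xi for wi, xi in zip(w, img))

for _ in range(100):
    # For a linear neuron, d(activation)/dx_i is just w_i,
    # so each step nudges the image toward the filter's pattern.
    x = [xi + 0.1 * wi for xi, wi in zip(x, w)]

print(activation(x))  # the response grows as the image is sculpted
```

Run on a real convolutional network with a natural image as the starting point, the same loop is what pulls eyes and dog faces out of clouds.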