Category Archives: interactive

“Believable Acting” video now online

ACM SIGGRAPH has posted the video of my April 12 talk about our team’s work on believable acting for autonomous animated characters. This was a really fun one to do. Most conferences limit you to 25 minutes for technical talks, but we’ve always had a lot more material than that! The San Francisco SIGGRAPH chapter’s talk format is comfortably open-ended, so I was able to spend a full hour and go a lot deeper without rushing through it, and still leave plenty of time for Q&A.

Huge thanks to Henry LaBounta and the SF SIGGRAPH organizers for inviting me, and to the audience for showing up and asking such thoughtful and interesting questions!

Believable Acting, April 12 at SF-SIGGRAPH

Hey animation heads, AI enthusiasts, game developers, and curious people of all kinds! Have you ever wondered why video game characters often seem so robotic, compared to animated characters in movies? Have you ever wished for some way to bridge that gap? This is what my team, Synthetic Characters, has been working on at Google Research. I’ll be giving a talk about our paper, Toward Believable Acting for Autonomous Animated Characters, for the San Francisco SIGGRAPH chapter, next Wednesday, April 12, at 6pm Pacific time.

The event is online, free, and open to the public, but if you want to see it, you have to sign up in advance. Here’s the link to reserve your spot!

Our autonomous character, Axo, acting on its own motivations.

Graffiti Archaeology is back!

This just in: Graffiti Archaeology is officially back up and running!

Graffiti Archaeology running on an iPhone!

The web app had been languishing for years, as one platform after another dropped support for the Flash player it depended on: first Apple refused to allow it to run on iOS, then Google’s Chrome browser stopped allowing it, and finally in 2020 Adobe retired the format entirely. But then last year, I learned about Ruffle.rs, a shiny new Flash player emulator developed in Rust. I tried it out, but it had some missing features that broke our user interface. So I filed a bug report, though I didn’t have great expectations that it’d get fixed anytime soon. After all, it’s an open source project run by volunteers, who I’m sure have much more important things to do than fixing bugs in weird old web art projects.

Well, this weekend, one of Ruffle’s amazing and generous developers went ahead and added the missing feature. And just like that, our app is up and running again! Not only that, but it runs in places it has never run before, like iPhones and other iOS devices!

The experience on iOS isn’t perfect, mainly because we developed the UI for desktop computers with keyboards and mice, not touchscreens and thumbs (remember, this was about five years before the first iPhone came out!). Some features, like the tooltips that appear when you hover over a button, will never work on a touchscreen, because there’s no such thing as hovering without tapping. Other things just feel clunky, like the fact that you can’t pinch to zoom (another now-ubiquitous UX metaphor that hadn’t yet been popularized). But even with those limitations, seeing our twenty-year-old project running on modern hardware is a total thrill.

I’m incredibly grateful to the Ruffle developers for making this possible. The world may be a mess, but communities like this are a good reminder that sometimes, if we work together, we can have nice things.

Toward Believable Acting for Autonomous Animated Characters

Last month I had the pleasure of presenting some of my team’s recent research at MIG ’22. It’s our first publication, on a topic I care deeply about: acting for autonomous animated characters. Why do NPCs in video games seem so far behind, in terms of acting ability, compared to their counterparts in animated movies? How might we make their acting more believable? This is one of those hard, fascinating problems that are just starting to become tractable thanks to recent advances in machine learning. I’ll have more to say about it soon, but for now, here’s a short video that explains the first small steps we’ve taken in that direction:

You can find the above video, our paper, and also a recording of our 25-minute talk on our new site for the project: https://believable-acting.github.io/

The return of Flash?

December 31, 2020 marked the official demise of Adobe (née Macromedia) Flash. On that day, a number of web-native interactive artworks (Graffiti Archaeology among them) disappeared from public view, seemingly forever. This felt like a classic case of “why we can’t have nice things”, and it made me sad, but of course we’ve all had much bigger things to worry about these past couple of years, so I tried to just let it go and move on.

Imagine my delight, then, when I learned about Ruffle, a Flash Player emulator written in Rust. I’ve been tinkering with it the past couple of days, and while it isn’t perfect, it’s really quite impressive! Super easy to install, and when it works, it works everywhere, no fuss.

The grafarc UI reveals a number of ways in which Ruffle seems to differ from the original Flash Player, which makes the user experience a lot jankier than we originally intended. (You can install the Ruffle browser extension if you want to see for yourself.) So I won’t be switching it on by default just yet; I’ll see if I can work with the developers to fix those glitches first.

But I also tried it on my old synaesthesia visualization applet, and there it works perfectly! (On desktop, at least: because the applet requires you to type words on a keyboard, it won’t work on mobile devices just yet.) So I’m happy to report that with one line of code, an old interactive artwork that I believed long gone has returned, unscathed, from the abyss.
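For the curious: that “one line of code” is presumably just Ruffle’s self-hosted script tag, which finds any legacy Flash embeds on the page and polyfills them automatically. Ruffle also exposes a small JavaScript API for loading a movie explicitly; below is a rough TypeScript sketch of that usage, based on Ruffle’s documented self-hosted setup. The container id and .swf filename are placeholders for illustration, not the actual files from my projects.

```typescript
// Rough sketch of Ruffle's explicit JavaScript API (self-hosted usage).
// Assumes ruffle.js has already been included on the page via a <script> tag,
// which attaches a RufflePlayer object to window.
// The container id and .swf path below are placeholders.
window.addEventListener("load", () => {
  const rufflePlayer = (window as any).RufflePlayer;
  if (!rufflePlayer) return;                          // Ruffle isn't loaded

  const ruffle = rufflePlayer.newest();               // newest bundled Ruffle build
  const player = ruffle.createPlayer();               // creates a <ruffle-player> element
  document.getElementById("player-container")?.appendChild(player);
  player.load("synesthesia.swf");                     // load the legacy Flash movie
});
```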

Long live Flash! And three cheers to the devs of Ruffle!

Monster Mash

This past January I had the incredible good fortune to fall sideways into a wonderful graphics research project. How it came about is pure serendipity: I had coffee with my advisor from UW, who’d recently started at Google. He asked if I’d like to test out some experimental sketch-based animation software one of his summer interns was developing. I said sure, thinking I might spend an hour or two… but the software was so much fun to use, I couldn’t stop playing with it. One thing led to another, and now we have a paper in SIGGRAPH Asia 2020!

Have you ever wished you could just jot down a 3D character and animate it super quick, without all that tedious modeling, rigging and keyframing? That’s what Monster Mash is for: casual 3D animation. Here’s how it works:

Age of Sail launches today!

The show we’ve been working on for more than a year is finally out in the world, in all its forms!  You can see it:
 

“Age of Sail” was directed by John Kahrs, and produced at Chromosphere, Evil Eye Pictures, and Google Spotlight Stories.  Working on this story, with this crew, has been an unforgettable experience. I’ll have lots more to say about it in future posts, but for now: enjoy the show!

Vote for “Pearl” for the FoST Prize!


Pearl is one of twenty finalists for the Future of Storytelling Prize! Now’s your chance to vote for Pearl!


In other news, we’ll also be showing Pearl at the Kaleidoscope VR Summer Showcase, which travels around the world to London, Seoul, Berlin, New York, San Francisco and Los Angeles. I’ll be at the SF event on September 30th.

Pearl has also been nominated in three categories (Narrative, Mobile, and Original Score) for the Virtual Reality Foundation’s third annual Proto Awards, coming up on October 8th. The nominees all look amazing. I can’t wait to meet them!

Atlas of Emotions

For a few weeks last spring I had the tremendous pleasure of working with my dear friend Eric Rodenbeck on an amazing project: an Atlas of Emotions. Commissioned by the Dalai Lama, and based on decades of scientific research by Paul Ekman and his colleagues, the project aims to help people find a path through the complex landscape of their feelings toward a state of calm and happiness.

This was such a fresh and exciting experience. First, because Stamen is an absolutely lovely place to spend time for any reason. (Seriously: pineapple plants and bubble machines!) Second, because it forced a connection between parts of my brain that had never met before: emotion brain, meet design brain. Well, hello! My time on the project was brief and my contribution very small, but will that stop me from kvelling? No it will not! The rest of my feelings can be found right here.