Category Archives: graphics

“Believable Acting” video now online

ACM SIGGRAPH has posted the video of my April 12 talk about our team’s work on believable acting for autonomous animated characters. This was a really fun one to do. Most conferences limit you to 25 minutes for technical talks, but we’ve always had a lot more material than that! The San Francisco SIGGRAPH chapter’s talk format is comfortably open-ended, so I was able to spend a full hour and go a lot deeper without rushing through it, and still leave plenty of time for Q&A.

Huge thanks to Henry LaBounta and the SF SIGGRAPH organizers for inviting me, and to the audience for showing up and asking such thoughtful and interesting questions!

Believable Acting, April 12 at SF SIGGRAPH

Hey animation heads, AI enthusiasts, game developers, and curious people of all kinds! Have you ever wondered why video game characters often seem so robotic, compared to animated characters in movies? Have you ever wished for some way to bridge that gap? This is what my team, Synthetic Characters, has been working on at Google Research. I’ll be giving a talk about our paper, Toward Believable Acting for Autonomous Animated Characters, for the San Francisco SIGGRAPH chapter, next Wednesday, April 12, at 6pm Pacific time.

The event is online, free, and open to the public, but if you want to see it, you have to sign up in advance. Here’s the link to reserve your spot!

Our autonomous character, Axo, acting on its own motivations.

Impossible Paint: Asemic Writing

The Genuary prompt for day 14 is “asemic”, i.e. writing without meaning, which is something I’ve always loved. I thought it might be fun to try doing that with my watercolor simulation. Reader, I was not disappointed.

Rerunning the simulation with a different random seed each time brings it to life in a different way. It turns out the Perlin noise that drives the brush movement isn’t affected by the seed, so “what” it writes stays the same while “how” it’s written changes. That consistency seems to deepen the illusion of intentionality, which makes me super happy.
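Here’s a minimal sketch of that split, for the curious. The class and parameter names are hypothetical, and it assumes the brush is driven by Unity’s built-in Mathf.PerlinNoise, which takes no seed parameter at all; in that case, Random.InitState can only perturb the seeded jitter, never the noise-driven path.

```csharp
using UnityEngine;

// Minimal sketch (hypothetical names): the seed randomizes per-run jitter,
// while Mathf.PerlinNoise, which Unity exposes with no seed parameter,
// always returns the same value for the same coordinates.
public class AsemicStroke : MonoBehaviour
{
    public int seed = 0;             // varies per run: changes "how"
    public float noiseScale = 0.1f;  // fixed: determines "what" is written

    void Start()
    {
        Random.InitState(seed);      // only the jitter below sees this
        Vector3 prev = transform.position;
        for (int i = 1; i <= 500; i++)
        {
            // Heading from unseeded Perlin noise: identical on every run.
            float angle = Mathf.PerlinNoise(i * noiseScale, 0f) * 4f * Mathf.PI;
            Vector3 step = new Vector3(Mathf.Cos(angle), Mathf.Sin(angle), 0f) * 0.1f;
            // Seeded jitter: different every run, wiggles the rendering.
            Vector3 jitter = (Vector3)(Random.insideUnitCircle * 0.02f);
            Vector3 next = prev + step + jitter;
            Debug.DrawLine(prev, next, Color.black, 30f);
            prev = next;
        }
    }
}
```

Run it twice with different seeds and the “letterforms” stay put while the wobble around them changes.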

This isn’t my first time tinkering with procedurally generated asemic writing. That was in 1996, when I was working at PDI in Sunnyvale. There was a small group of us who were curious about algorithmic art, and we formed a short-lived club (unofficially known as “Pacific Dada Images”) that was very much in the spirit of Genuary: we’d set ourselves a challenge, go off to our desks to tinker, and then meet in the screening room to share the results. The video above came from this challenge: “you have one hour to generate one minute of footage, using any of the software in PDI’s toolset”. I generated the curves in PDI’s homegrown scripting language, and rendered them using a command-line tool called mp2r (which Drew Olbrich had written for me to use on Brick-a-Brac).

Genuary 2023: Impossible Paint

I love to tinker with code that makes pictures. Most of that tinkering happens in private, often because it’s for a project that’s still under wraps. But I so enjoy watching the process and progress of generative artists who post their work online, and I’ve always thought it would be fun to share my own stuff that way. So when I heard about Genuary, the pull was too strong to resist.

Here’s a snapshot of some work in progress, using a realtime watercolor simulator I’ve been writing in Unity. Some thoughts on what I’m doing here: it turns out I’m not super interested in mimicking reality. But I get really excited about the qualia of real materials, because they kick back at you in such wonderful and unexpected ways. What I seem to be after is a sort of caricature of those phenomena: I want it to give me those feelings, and if it bends the laws of physics, so much the better. Thus, Impossible Paint.
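To make that concrete, here’s a toy sketch of what “bending the laws of physics” can look like. This is emphatically not the actual simulator, just a hypothetical single update on a pigment grid: plain diffusion spreads pigment outward, and an extra, physically impossible term runs diffusion backwards near strong gradients, piling pigment into hard rims the way real watercolor only hints at as it dries.

```csharp
using UnityEngine;

// Toy illustration only, not the real simulator: one step of pigment
// diffusion with an exaggerated, non-physical edge term.
public static class ImpossiblePaint
{
    public static float[,] Step(float[,] pigment, float edgeBoost)
    {
        int w = pigment.GetLength(0), h = pigment.GetLength(1);
        var next = (float[,])pigment.Clone();
        for (int x = 1; x < w - 1; x++)
        for (int y = 1; y < h - 1; y++)
        {
            // Discrete Laplacian: how much this cell differs from its neighbors.
            float lap = pigment[x - 1, y] + pigment[x + 1, y]
                      + pigment[x, y - 1] + pigment[x, y + 1]
                      - 4f * pigment[x, y];
            // 0.2f * lap is ordinary diffusion (smoothing); the edgeBoost
            // term sharpens where gradients are strong, which no real
            // pigment would do on its own.
            next[x, y] = Mathf.Max(0f, pigment[x, y] + 0.2f * lap
                                        - edgeBoost * lap * Mathf.Abs(lap));
        }
        return next;
    }
}
```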

Toward Believable Acting for Autonomous Animated Characters

Last month I had the pleasure of presenting some of my team’s recent research at MIG ’22. It’s our first publication, on a topic I care deeply about: acting for autonomous animated characters. Why do NPCs in video games seem so far behind, in terms of acting ability, compared to their counterparts in animated movies? How might we make their acting more believable? This is one of those hard, fascinating problems that are just starting to become tractable thanks to recent advances in machine learning. I’ll have more to say about it soon, but for now, here’s a short video that explains the first small steps we’ve taken in that direction:

You can find the above video, our paper, and also a recording of our 25-minute talk on our new site for the project: https://believable-acting.github.io/

Branching sand patterns

Here in Sargí, Brazil, when it isn’t raining, we get to take a lot of long walks on the beach. One feature we noticed right away: unusual patterns sitting right on the surface, little clusters of wiggly lines made of light sand that contrasted sharply with the dark, damp, compact sand beneath. Some were small and isolated, while others formed dense networks. We wondered out loud: what were these shapes, and where did they come from? Were they the trails of some tiny worm or crustacean? Detritus tossed up from the digging of underground warrens? That was our best guess on the first day. But the shapes were tiny, barely wider than a few grains of sand, and we never saw any evidence of whatever life we imagined was creating them.

Later in the week, the weather changed, and the shapes changed too. The lines got longer, and they seemed to favor certain directions more than others. In particular, a strong breeze was blowing up the coast, and the lines were oriented along the direction of the wind. Also, there was something vaguely familiar about the way the shapes branched out and meandered, but I couldn’t put my finger on it.

It took a few more miles of walking, staring, and spacing out before it hit me. I knew where I’d seen shapes like this before: senior year of high school, on the screen of my Amiga 1000.

Like a lot of kids at that particular time, I was into fractals. I’d coded up renderers for Mandelbrot sets (in BASIC, super slow!) and other forms of emergent weirdness. And for one class project, I picked diffusion-limited aggregation: a simulation of fractal growth based on randomly meandering particles that stick together when they touch, creating shapes that look like lightning bolts, branching trees or coral fans. (This technique has since been used to great effect by digital artists and creative coders of all sorts.)

Looking down at the sand, I realized that what I was seeing was, quite literally, the process I had simulated (oh so slowly!) on that home computer: an accretion of individual grains of sand, each propelled by the wind until it hit an obstacle and stuck firmly in place.
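For anyone who wants to watch the branching happen, here’s a compact sketch of the idea (a reconstruction of the technique, not my original code; the grid size, particle count, and wind bias are made-up parameters). Particles wander randomly, nudged downwind, and freeze the moment they touch the growing cluster.

```csharp
using System;

// Diffusion-limited aggregation with a wind bias: random walkers stick
// to the cluster on contact, like sand grains piling up against an obstacle.
class WindBlownDla
{
    const int N = 160;
    static readonly bool[,] Stuck = new bool[N, N];
    static readonly Random Rng = new Random(1);

    static void Main()
    {
        Stuck[N / 2, N / 2] = true;   // seed the cluster at the center
        const double windBias = 0.3;  // extra chance of stepping downwind (+x)
        for (int p = 0; p < 3000; p++)
        {
            int x = 1, y = Rng.Next(1, N - 1); // grains blow in from the upwind edge
            while (true)
            {
                if (Rng.Next(2) == 0)          // step in x, biased downwind...
                    x += Rng.NextDouble() < 0.5 + windBias ? 1 : -1;
                else                           // ...or step in y, unbiased
                    y += Rng.Next(2) == 0 ? 1 : -1;
                if (x < 1 || x > N - 2 || y < 1 || y > N - 2)
                {                              // blew off the grid: try a new grain
                    x = 1; y = Rng.Next(1, N - 1);
                    continue;
                }
                if (Stuck[x - 1, y] || Stuck[x + 1, y] ||
                    Stuck[x, y - 1] || Stuck[x, y + 1])
                {
                    Stuck[x, y] = true;        // touched the cluster: freeze here
                    break;
                }
            }
        }
        for (int y = 0; y < N; y += 2)         // every other row, for aspect ratio
        {
            for (int x = 0; x < N; x++) Console.Write(Stuck[x, y] ? '#' : ' ');
            Console.WriteLine();
        }
    }
}
```

With windBias at zero you get the familiar symmetric coral-fan clusters; turn it up and the branches stretch out along the wind, much like the lines on the beach.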

Here are a few more photos of these patterns. Having an idea of how they’re formed doesn’t make them any less fascinating—in fact, it only raises more questions, like: could you “read” the history of wind since the last high tide by analyzing these shapes? Just how much information is encoded in their twisty branches?

I wish I had more time to spend on this (not to mention shoot some timelapse!), but we leave Sargí tomorrow. As we’ve said many times this trip, “deixa pra próxima” (leave it for next time).

Monster Mash in Two Minute Papers!

If you’re any kind of graphics geek, you’re probably familiar with the outstanding YouTube channel, Two Minute Papers. If not, you’re in for a treat! In this series, Károly Zsolnai-Fehér picks papers from the latest computer graphics and vision conferences, edits their videos, and adds commentary and context to highlight the most interesting bits of the research. But what really makes the series great is his delivery: he is so genuinely excited about the fast pace of graphics research that it’s pretty much impossible to come away without catching some of that excitement yourself.

What an honor to have that firehose of enthusiasm pointed at our work for two minutes!

Monster Mash

This past January I had the incredible good fortune to fall sideways into a wonderful graphics research project. How it came about is pure serendipity: I had coffee with my advisor from UW, who’d recently started at Google. He asked if I’d like to test out some experimental sketch-based animation software one of his summer interns was developing. I said sure, thinking I might spend an hour or two… but the software was so much fun to use, I couldn’t stop playing with it. One thing led to another, and now we have a paper in SIGGRAPH Asia 2020!

Have you ever wished you could just jot down a 3D character and animate it super quick, without all that tedious modeling, rigging and keyframing? That’s what Monster Mash is for: casual 3D animation. Here’s how it works: