Hey animation heads, AI enthusiasts, game developers, and curious people of all kinds! Have you ever wondered why video game characters often seem so robotic compared to animated characters in movies? Have you ever wished for some way to bridge that gap? That is what my team, Synthetic Characters, has been working on at Google Research. I’ll be giving a talk about our paper, Toward Believable Acting for Autonomous Animated Characters, for the San Francisco SIGGRAPH chapter next Wednesday, April 12, at 6pm Pacific time.
The event is online, free, and open to the public, but if you want to see it, you have to sign up in advance. Here’s the link to reserve your spot!
The web app had been languishing for years, as one platform after another dropped support for the Flash player it depended on: first Apple refused to let it run on iOS, then Google’s Chrome browser blocked it, and finally in 2020 Adobe retired the format entirely. But then last year, I learned about Ruffle.rs, a shiny new Flash player emulator developed in Rust. I tried it out, but it had some missing features that broke our user interface. So I filed a bug report, though I didn’t have great expectations that it’d get fixed anytime soon. After all, it’s an open source project run by volunteers, who I’m sure have much more important things to do than fixing bugs in weird old web art projects.
Well, this weekend, one of Ruffle’s amazing and generous developers went ahead and added the missing feature. And just like that, our app is up and running again! Not only that, but it runs in places it has never run before, like iPhones and other iOS devices!
The experience on iOS isn’t perfect, mainly because we developed the UI for desktop computers with keyboards and mice, not touchscreens and thumbs (remember, this was about five years before the first iPhone came out!). Some features, like the tooltips that appear when you hover over a button, will never work on a touchscreen, because there’s no way to hover without tapping. Other things just feel clunky, like the fact that you can’t pinch to zoom (another now-ubiquitous UX metaphor that hadn’t yet been popularized). But even with those limitations, seeing our twenty-year-old project running on modern hardware is a total thrill.
I’m incredibly grateful to the Ruffle developers for making this possible. The world may be a mess, but communities like this are a good reminder that sometimes, if we work together, we can have nice things.
My dear friend Eric Rodenbeck has been experimenting with creating his own homemade inks and paints from natural materials. Some of the inks mysteriously change in texture, and even color, as they dry. After months of looking at Eric’s paintings, I was intensely curious to see how these changes would look as they were happening. So, of course, I had to shoot some timelapse footage.
The inks I used here are hibiscus + lemon (pale red), hibiscus + orange peel (magenta), carrot greens + alum (yellow), and a sprinkling of sea salt for texture. Time span: about 1 hour.
If you pay close attention, something really strange happens about 11 seconds into the video, when I added some yellow ink: wherever the yellow mixes with the magenta, the mixture turns a deep bluish green! What is going on there?
It turns out that hibiscus gets its color from a type of pigment called an anthocyanin, whose structure and color are pH-sensitive. In an acidic environment it’s red, but in an alkaline one it turns blue. Since the yellow ink is alkaline, it turns the red hibiscus blue on contact; that blue then mixes with the ink’s yellow pigment, becoming a lovely vibrant green.
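The two-step mechanism above can be sketched in a few lines of code. This is purely illustrative: the pH threshold and the tiny mixing table are my own simplifications, not real colorimetry, and the function names are made up for this example.

```python
# Illustrative sketch of the color change described above: hibiscus
# anthocyanin flips from red to blue under alkaline conditions, and the
# resulting blue mixes subtractively with yellow to make green.
# The pH threshold and mixing table are simplifications, not real chemistry.

def anthocyanin_color(ph: float) -> str:
    """Rough color of hibiscus anthocyanin at a given pH (illustrative threshold)."""
    return "red" if ph < 7.0 else "blue"

def mix(a: str, b: str) -> str:
    """Naive subtractive pigment mixing for the colors in this experiment."""
    pairs = {
        frozenset({"blue", "yellow"}): "green",
        frozenset({"red", "yellow"}): "orange",
    }
    return pairs.get(frozenset({a, b}), a)

# The red hibiscus ink meets the alkaline yellow ink: the anthocyanin
# flips to blue at the new pH, then mixes with the yellow pigment.
alkaline_ph = 9.0
print(mix(anthocyanin_color(alkaline_ph), "yellow"))  # -> green
```

Without the pH flip, red plus yellow would just make orange, which is why the green comes as such a surprise on the page.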
Here are some more photos from the day. Hopefully this will be the first of many such experiments!
The Genuary prompt for day 14 is “asemic”, i.e. writing without meaning, which is something I’ve always loved. I thought it might be fun to try doing that with my watercolor simulation. Reader, I was not disappointed.
When we rerun the simulation with a different random seed each time, it comes to life in a different way. It turns out the Perlin noise that drives the brush movement isn’t affected by the seed, so “what” it writes stays the same, while “how” it’s written changes. The consistency seems to deepen the illusion of intentionality, which makes me super happy.
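The behavior described above can be sketched in miniature: the noise that drives the stroke path is a fixed function of its input, so it ignores the seed entirely, while seeded randomness varies everything else. The noise function, the `stroke` helper, and the use of stroke width as the seeded property are all my own illustrative stand-ins, not code from the actual simulator.

```python
import math
import random

# A minimal sketch of "what stays the same, how it changes": the noise
# driving the path is deterministic, while a per-run seed varies a
# hypothetical stroke width. All names here are illustrative.

def value_noise(x: float) -> float:
    """Deterministic 1D value noise: same input, same output, seed or no seed."""
    def hash01(i: int) -> float:
        return (math.sin(i * 127.1) * 43758.5453) % 1.0
    i, f = int(math.floor(x)), x - math.floor(x)
    t = f * f * (3.0 - 2.0 * f)                      # smoothstep interpolation
    return hash01(i) * (1.0 - t) + hash01(i + 1) * t

def stroke(seed: int, steps: int = 5):
    rng = random.Random(seed)
    path = [value_noise(s * 0.37) for s in range(steps)]    # "what": fixed
    widths = [rng.uniform(0.5, 2.0) for _ in range(steps)]  # "how": seeded
    return path, widths

path_a, widths_a = stroke(seed=1)
path_b, widths_b = stroke(seed=2)
print(path_a == path_b)      # True: the writing itself never changes
print(widths_a == widths_b)  # False: its rendering does
```

Because the path is identical across runs, each rerun feels like the same writer producing the same text in a different mood, which is exactly the illusion of intentionality described above.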
This isn’t my first time tinkering with procedurally generated asemic writing. That was in 1996, when I was working at PDI in Sunnyvale. There was a small group of us who were curious about algorithmic art, and we formed a short-lived club (unofficially known as “Pacific Dada Images”) that was much in the spirit of Genuary: we’d set ourselves a challenge, go off to our desks to tinker, and then meet in the screening room to share the results. The video above came from the challenge: “you have one hour to generate one minute of footage, using any of the software in PDI’s toolset”. I generated the curves in PDI’s homegrown scripting language, and rendered them using a command line tool called mp2r (which Drew Olbrich had written for me to use on Brick-a-Brac).
Now with randomized physical properties: staining, deposition, specific weight, granulation. This is starting to do things I don’t understand, which is always a good thing.
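One way to read the line above is that each run samples a fresh set of pigment parameters. Here is a hypothetical sketch of what that sampling might look like; the parameter names come from the list above, but the ranges, structure, and `random_pigment` helper are invented for illustration and are not the simulator's actual code.

```python
import random

# Hypothetical per-run randomization of the pigment properties named above.
# Ranges are illustrative guesses, not values from the actual simulator.

def random_pigment(rng: random.Random) -> dict:
    return {
        "staining":        rng.uniform(0.0, 1.0),  # how strongly pigment dyes the paper
        "deposition":      rng.uniform(0.0, 1.0),  # how fast it settles out of the water
        "specific_weight": rng.uniform(0.5, 2.0),  # tendency to sink vs. float
        "granulation":     rng.uniform(0.0, 1.0),  # clumping into visible grains
    }

rng = random.Random()  # unseeded: every run explores a new combination
pigment = random_pigment(rng)
print(sorted(pigment))
```

Sampling all four properties independently is what produces combinations the author has never hand-picked, which may be why the results become hard to predict.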
I love to tinker with code that makes pictures. Most of that tinkering happens in private, often because it’s for a project that’s still under wraps. But I so enjoy watching the process and progress of generative artists who post their work online, and I’ve always thought it would be fun to share my own stuff that way. So when I heard about Genuary, the pull was too strong to resist.
Here’s a snapshot of some work in progress, using a realtime watercolor simulator I’ve been writing in Unity. Some thoughts on what I’m doing here: it turns out I’m not super interested in mimicking reality. But I get really excited about the qualia of real materials, because they kick back at you in such wonderful and unexpected ways. What I seem to be after is a sort of caricature of those phenomena: I want it to give me those feelings, and if it bends the laws of physics, so much the better. Thus, Impossible Paint.
Last month I had the pleasure of presenting some of my team’s recent research at MIG ’22. It’s our first publication, on a topic I care deeply about: acting for autonomous animated characters. Why do NPCs in video games seem so far behind, in terms of acting ability, compared to their counterparts in animated movies? How might we make their acting more believable? This is one of those hard, fascinating problems that are just starting to become tractable thanks to recent advances in machine learning. I’ll have more to say about it soon, but for now, here’s a short video that explains the first small steps we’ve taken in that direction:
You can find the above video, our paper, and also a recording of our 25-minute talk on our new site for the project: https://believable-acting.github.io/
Here in Sargi, Brazil, when it isn’t raining, we get to take a lot of long walks on the beach. One feature we noticed right away was a set of unusual patterns right on the surface: little clusters of wiggly lines made of light sand that contrasted sharply against the dark, damp, compacted sand beneath. Some were small and isolated, while others formed dense networks. We wondered out loud: what were these shapes, and where did they come from? Were they the trails of some tiny worm or crustacean? Detritus tossed up from the digging of underground warrens? That was our best guess on the first day. But the shapes were so tiny, barely wider than a few grains of sand, and we never saw any evidence of whatever life we imagined was creating them.
Later in the week, the weather changed, and the shapes changed too. The lines got longer, and they seemed to favor certain directions more than others. In particular, there was a strong breeze blowing up the coast, and the lines were oriented in the direction of the wind. Also, there was something vaguely familiar about the way the shapes branched out and meandered, but I couldn’t put my finger on it.