Continuing to explore grids of big wet pixels. This one could even be considered to fit today’s Genuary prompt, “8×8”, if you squint at it. I’m getting to the point with Unity and C# where it’s starting to feel less like work, and more like play. More to come soon.
Still exploring big wet pixels (originally inspired by the #genuary4 prompt) using my watercolor simulation in Unity. Now the pixels are actually pixels: given a random selection of pigments and paper, they try their best to match the color coming in through my webcam. Lovely glitches ensue.
To get this working, I had to go back and solve an old problem that’s bothered me for decades: given an arbitrary set of three pigments and paper, what combination of pigment densities will produce the closest match for any given RGB color? This is non-trivial, because the gamut described by three Kubelka-Munk pigments is non-linear, not necessarily convex, and might not even be an embedding! In our 1997 paper we addressed that problem in a really crude way, which I was never very happy with: quantize the pigment densities into bins, and find the nearest bin in RGB space using a 3-d tree search. So it gave me great satisfaction last weekend when I implemented a continuous solution, using gradient descent.
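For the curious, here’s a minimal C# sketch of that solver. It’s not my production Unity code: `mixToRgb` is a hypothetical stand-in for the Kubelka-Munk evaluation that turns three pigment densities into an RGB color, the step size and iteration count are arbitrary, and I’m approximating the gradient with finite differences here.

```csharp
using System;
using UnityEngine;

// Toy version of the gamut-matching solver. mixToRgb is a stand-in for
// the Kubelka-Munk evaluation: it maps three pigment densities to the
// RGB color you'd see on the paper.
public static class GamutMatcher
{
    public static Vector3 Match(Func<Vector3, Vector3> mixToRgb,
                                Vector3 targetRgb,
                                float maxDensity = 1f,
                                int steps = 500,
                                float stepSize = 0.5f,
                                float h = 1e-3f)
    {
        // Start from the middle of the gamut, as in the video.
        Vector3 d = Vector3.one * (maxDensity * 0.5f);

        for (int i = 0; i < steps; i++)
        {
            float e0 = Error(mixToRgb, d, targetRgb);

            // Finite-difference gradient, one pigment density at a time.
            Vector3 grad = Vector3.zero;
            for (int axis = 0; axis < 3; axis++)
            {
                Vector3 nudged = d;
                nudged[axis] += h;
                grad[axis] = (Error(mixToRgb, nudged, targetRgb) - e0) / h;
            }

            // Step downhill, keeping densities physically plausible.
            d -= stepSize * grad;
            d.x = Mathf.Clamp(d.x, 0f, maxDensity);
            d.y = Mathf.Clamp(d.y, 0f, maxDensity);
            d.z = Mathf.Clamp(d.z, 0f, maxDensity);
        }
        return d;
    }

    // Squared RGB distance between the mixed color and the target.
    static float Error(Func<Vector3, Vector3> mixToRgb, Vector3 d, Vector3 target)
    {
        return (mixToRgb(d) - target).sqrMagnitude;
    }
}
```

The sequence of `d` values along the way is essentially the meandering line in the image below. When the target lies outside the gamut, the search settles wherever the error stops decreasing (and since the gamut isn’t convex, that’s a local best match, not a guaranteed global one).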
The curved RGB color gamut described by a trio of semi-opaque white, amber and green pigments on purple paper. The white sphere represents the RGB color we’d like to match. A smaller, colored sphere represents the closest approximation that can be produced within the color gamut. A thin, meandering line shows the path taken from the middle of the gamut via gradient descent.
ACM SIGGRAPH has posted the video of my April 12 talk about our team’s work on believable acting for autonomous animated characters. This was a really fun one to do. Most conferences limit you to 25 minutes for technical talks, but we’ve always had a lot more material than that! The San Francisco SIGGRAPH chapter’s talk format is comfortably open-ended, so I was able to spend a full hour and go a lot deeper without rushing through it, and still leave plenty of time for Q&A.
Huge thanks to Henry LaBounta and the SF SIGGRAPH organizers for inviting me, and to the audience for showing up and asking such thoughtful and interesting questions!
Hey animation heads, AI enthusiasts, game developers, and curious people of all kinds! Have you ever wondered why video game characters often seem so robotic, compared to animated characters in movies? Have you ever wished for some way to bridge that gap? This is what my team, Synthetic Characters, has been working on at Google Research. I’ll be giving a talk about our paper, Toward Believable Acting for Autonomous Animated Characters, for the San Francisco SIGGRAPH chapter, next Wednesday, April 12, at 6pm Pacific time.
The event is online, free, and open to the public, but if you want to see it, you have to sign up in advance. Here’s the link to reserve your spot!
The Genuary prompt for day 14 is “asemic”, i.e. writing without meaning, which is something I’ve always loved. I thought it might be fun to try doing that with my watercolor simulation. Reader, I was not disappointed.
Each time we rerun the simulation with a different random seed, it comes to life in a different way. It turns out the Perlin noise that drives the brush movement isn’t affected by the seed, so “what” it writes stays the same, while “how” it’s written changes. That consistency seems to deepen the illusion of intentionality, which makes me super happy.
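Here’s the mechanics of that split, in Unity terms (an illustrative sketch, not my actual code): `Mathf.PerlinNoise` is a pure function of its inputs and pays no attention to the global random state, while anything drawn from `UnityEngine.Random` follows the seed.

```csharp
using UnityEngine;

// Illustrative sketch of the seed split. The brush path comes from Perlin
// noise, which is a fixed function of its arguments, so "what" gets written
// is identical on every run. The paint behavior samples UnityEngine.Random,
// so "how" it's written changes with the seed.
public class AsemicBrush : MonoBehaviour
{
    public int seed = 0;

    void Start()
    {
        // Seeds the paint physics; has no effect on Mathf.PerlinNoise.
        Random.InitState(seed);
    }

    // Deterministic: the same curve regardless of seed.
    Vector2 BrushPosition(float t)
    {
        return new Vector2(Mathf.PerlinNoise(t, 0f),
                           Mathf.PerlinNoise(t, 17.3f));
    }

    // Seeded: different on every run.
    float RandomPigmentLoad()
    {
        return Random.Range(0.1f, 1f);
    }
}
```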
This isn’t my first time tinkering with procedurally generated asemic writing. That was in 1996, when I was working at PDI in Sunnyvale. There was a small group of us who were curious about algorithmic art, and we formed a short-lived club (unofficially known as “Pacific Dada Images”) that was much in the spirit of Genuary: we’d set ourselves a challenge, go off to our desks to tinker, and then meet in the screening room to share the results. The video above came from the challenge: “you have one hour to generate one minute of footage, using any of the software in PDI’s toolset”. I generated the curves in PDI’s homegrown script programming language, and rendered them using a command line tool called mp2r (which Drew Olbrich had written for me to use on Brick-a-Brac).
Now with randomized physical properties: staining, deposition, specific weight, granulation. This is starting to do things I don’t understand, which is always a good thing.
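Concretely, each pigment now carries a bundle of knobs something like this (a sketch: the field names match the list above, but the ranges are invented for illustration):

```csharp
using UnityEngine;

// Hypothetical sketch of per-pigment physical properties. The fields match
// the list above; the ranges are made up.
public struct PigmentProperties
{
    public float staining;       // how strongly pigment binds to the paper
    public float deposition;     // how readily it settles out of the water
    public float specificWeight; // how fast it sinks relative to the flow
    public float granulation;    // how much it collects in the paper's texture

    public static PigmentProperties Randomized()
    {
        return new PigmentProperties
        {
            staining       = Random.Range(0f, 1f),
            deposition     = Random.Range(0f, 1f),
            specificWeight = Random.Range(0.5f, 2f),
            granulation    = Random.Range(0f, 1f),
        };
    }
}
```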
I love to tinker with code that makes pictures. Most of that tinkering happens in private, often because it’s for a project that’s still under wraps. But I so enjoy watching the process and progress of generative artists who post their work online, and I’ve always thought it would be fun to share my own stuff that way. So when I heard about Genuary, the pull was too strong to resist.
Here’s a snapshot of some work in progress, using a realtime watercolor simulator I’ve been writing in Unity. Some thoughts on what I’m doing here: it turns out I’m not super interested in mimicking reality. But I get really excited about the qualia of real materials, because they kick back at you in such wonderful and unexpected ways. What I seem to be after is a sort of caricature of those phenomena: I want it to give me those feelings, and if it bends the laws of physics, so much the better. Thus, Impossible Paint.
Last month I had the pleasure of presenting some of my team’s recent research at MIG ’22. It’s our first publication, on a topic I care deeply about: acting for autonomous animated characters. Why do NPCs in video games seem so far behind, in terms of acting ability, compared to their counterparts in animated movies? How might we make their acting more believable? This is one of those hard, fascinating problems that are just starting to become tractable thanks to recent advances in machine learning. I’ll have more to say about it soon, but for now, here’s a short video that explains the first small steps we’ve taken in that direction:
You can find the above video, our paper, and also a recording of our 25-minute talk on our new site for the project: https://believable-acting.github.io/