Category Archives: animation
New today: exploring making each pixel smarter, with more thoughtful brushstroke planning. Starting to get excited about the shapes and textures that emerge. (In case it’s not obvious, I’m reaching for a Chuck Close vibe here. But his work has all kinds of depth to it, I’m barely scratching the surface as of yet.) Also, I’ve added some new types of randomized color palettes, including interference pigments on dark paper. So many happy accidents. I don’t think I’ll ever get bored of this.
Big Wet Pixels 5
Big Wet Pixels 5 from Cassidy Curtis on Vimeo.
Bigger, wetter, more pixelated! Playing with different strategies for sub-pixel brushstroke planning. There are so many possibilities…
Big Wet Pixels 4
Big Wet Pixels 4 from Cassidy Curtis on Vimeo.
Continuing to explore grids of big wet pixels. This one could even be considered to fit today’s Genuary prompt, “8×8”, if you squint at it. I’m getting to the point with Unity and C# where it’s starting to feel less like work, and more like play. More to come soon.
Big Wet Pixels 3
Still exploring big wet pixels (originally inspired by the #genuary4 prompt) using my watercolor simulation in Unity. Now the pixels are actually pixels: given a random selection of pigments and paper, they try their best to match the color coming in through my webcam. Lovely glitches ensue.
To get this working, I had to go back and solve an old problem that’s bothered me for decades: given an arbitrary set of three pigments and paper, what combination of pigment densities will produce the closest match for any given RGB color? This is non-trivial, because the gamut described by three Kubelka-Munk pigments is non-linear, not necessarily convex, and might not even be an embedding! In our 1997 paper we addressed that problem in a really crude way, which I was never very happy with: quantize the pigment densities into bins, and find the nearest bin in RGB space using a 3d-tree search. So it gave me great satisfaction last weekend when I implemented a continuous solution, using gradient descent.
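The 1997-style lookup can be sketched roughly like this. Everything here is a toy stand-in: the Kubelka-Munk mixing is a simplified infinite-thickness model, the pigment K/S values are invented placeholders, and a brute-force nearest-neighbor search stands in for the 3d-tree the paper actually used.

```python
import itertools
import numpy as np

def km_mix(densities, pigment_ks):
    """Toy Kubelka-Munk mix: blend K/S ratios linearly by density,
    then convert to infinite-thickness reflectance per RGB channel."""
    ks = densities @ pigment_ks
    return 1.0 + ks - np.sqrt(ks * ks + 2.0 * ks)

def build_table(pigment_ks, bins=8):
    """Quantize each of the three pigment densities into `bins` levels
    and record the RGB color each combination produces."""
    levels = np.linspace(0.0, 1.0, bins)
    combos = np.array(list(itertools.product(levels, repeat=3)))
    colors = np.array([km_mix(c, pigment_ks) for c in combos])
    return combos, colors

def nearest_bin(target_rgb, combos, colors):
    """Find the density bin whose color is closest to the target in
    RGB space (brute force here; the paper used a 3d-tree)."""
    i = np.argmin(np.sum((colors - target_rgb) ** 2, axis=1))
    return combos[i]

# Placeholder pigments: rows = pigments, columns = K/S per RGB channel.
pigments = np.array([[0.1, 0.1, 0.1],
                     [0.2, 0.6, 4.0],
                     [3.0, 0.4, 2.5]])
combos, colors = build_table(pigments, bins=6)
best = nearest_bin(np.array([0.4, 0.5, 0.3]), combos, colors)
```

The crudeness the post complains about is visible here: the answer can never be better than the bin spacing allows.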
The curved RGB color gamut described by a trio of semi-opaque white, amber and green pigments on purple paper. The white sphere represents the RGB color we’d like to match. A smaller, colored sphere represents the closest approximation that can be produced within the color gamut. A thin, meandering line shows the path taken from the middle of the gamut via gradient descent.
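The continuous version might look something like the following sketch: gradient descent on squared RGB error over the three pigment densities, starting from the middle of the gamut as the caption describes. The mixing model is the same simplified infinite-thickness Kubelka-Munk form, the pigment K/S values are made-up placeholders, and the finite-difference gradient and step size are illustrative choices, not the post’s actual implementation.

```python
import numpy as np

def km_reflectance(ks):
    """Infinite-thickness Kubelka-Munk reflectance for a K/S ratio."""
    return 1.0 + ks - np.sqrt(ks * ks + 2.0 * ks)

def mix_color(densities, pigment_ks):
    """RGB reflectance of a mixture: K/S ratios blend linearly,
    weighted by pigment density (a standard K-M mixing rule)."""
    return km_reflectance(densities @ pigment_ks)

def match(target_rgb, pigment_ks, steps=3000, lr=0.02, eps=1e-4):
    """Minimize squared RGB error over pigment densities by
    finite-difference gradient descent, clamped to non-negative
    densities; returns the best point seen along the path."""
    d = np.full(3, 0.5)                      # start mid-gamut
    best_d, best_err = d.copy(), np.inf
    for _ in range(steps):
        base = np.sum((mix_color(d, pigment_ks) - target_rgb) ** 2)
        if base < best_err:
            best_d, best_err = d.copy(), base
        grad = np.zeros_like(d)
        for i in range(3):
            dp = d.copy()
            dp[i] += eps
            err = np.sum((mix_color(dp, pigment_ks) - target_rgb) ** 2)
            grad[i] = (err - base) / eps
        d = np.clip(d - lr * grad, 0.0, None)
    return best_d

# Placeholder pigments: rows = pigments, columns = K/S per RGB channel.
pigments = np.array([[0.1, 0.1, 0.1],
                     [0.2, 0.6, 4.0],
                     [3.0, 0.4, 2.5]])
target = np.array([0.4, 0.5, 0.3])
d = match(target, pigments)
```

Because the gamut is non-convex, a single descent from the gamut center can land in a local minimum; that meandering line in the figure is exactly this path.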
Genuary 4: More Big Wet Pixels
Getting more aggressive with color and texture…
“Believable Acting” video now online
ACM SIGGRAPH has posted the video of my April 12 talk about our team’s work on believable acting for autonomous animated characters. This was a really fun one to do. Most conferences limit you to 25 minutes for technical talks, but we’ve always had a lot more material than that! The San Francisco SIGGRAPH chapter’s talk format is comfortably open-ended, so I was able to spend a full hour and go a lot deeper without rushing through it, and still leave plenty of time for Q&A.
Huge thanks to Henry LaBounta and the SF SIGGRAPH organizers for inviting me, and to the audience for showing up and asking such thoughtful and interesting questions!
Believable Acting, April 12 at SF-SIGGRAPH
Hey animation heads, AI enthusiasts, game developers, and curious people of all kinds! Have you ever wondered why video game characters often seem so robotic, compared to animated characters in movies? Have you ever wished for some way to bridge that gap? This is what my team, Synthetic Characters, has been working on at Google Research. I’ll be giving a talk about our paper, Toward Believable Acting for Autonomous Animated Characters, for the San Francisco SIGGRAPH chapter, next Wednesday, April 12, at 6pm Pacific time.
The event is online, free, and open to the public, but if you want to see it, you have to sign up in advance. Here’s the link to reserve your spot!
Impossible Paint: Asemic Writing
The Genuary prompt for day 14 is “asemic”, i.e. writing without meaning, which is something I’ve always loved. I thought it might be fun to try doing that with my watercolor simulation. Reader, I was not disappointed.
Each time we rerun the simulation with a different random seed, it comes to life in a different way. It turns out the Perlin noise that drives the brush movement isn’t affected by the seed, so “what” it writes stays the same, while “how” it’s written changes. That consistency seems to deepen the illusion of intentionality, which makes me super happy.
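The what-versus-how split can be sketched in a few lines. Here a deterministic 1-D value noise stands in for the Perlin noise (same input always gives the same output, no matter the seed), while a seeded RNG perturbs only the rendering; the function names and the 0.05 jitter amount are invented for illustration.

```python
import math
import random

def value_noise(x):
    """Deterministic value noise: depends only on x, never on any
    simulation seed (a stand-in for the Perlin noise in the post)."""
    def hash01(i):
        return (math.sin(i * 127.1) * 43758.5453) % 1.0
    i, f = int(math.floor(x)), x % 1.0
    t = f * f * (3.0 - 2.0 * f)          # smoothstep interpolation
    return hash01(i) * (1.0 - t) + hash01(i + 1) * t

def stroke(seed, n=5):
    """'What' is written comes from the seed-independent noise;
    'how' it's written comes from the per-run seeded RNG."""
    rng = random.Random(seed)
    return [value_noise(0.3 * k) + 0.05 * rng.random() for k in range(n)]
```

Rerunning `stroke` with a new seed changes every sample a little, but the underlying curve, and so the "writing", stays recognizably the same.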
This isn’t my first time tinkering with procedurally generated asemic writing. That was in 1996, when I was working at PDI in Sunnyvale. There was a small group of us who were curious about algorithmic art, and we formed a short-lived club (unofficially known as “Pacific Dada Images”) that was much in the spirit of Genuary: we’d set ourselves a challenge, go off to our desks to tinker, and then meet in the screening room to share the results. The video above came from the challenge: “you have one hour to generate one minute of footage, using any of the software in PDI’s toolset”. I generated the curves in PDI’s homegrown script programming language, and rendered them using a command line tool called mp2r (which Drew Olbrich had written for me to use on Brick-a-Brac).
Impossible Paint: pigment studies 2
Now with randomized physical properties: staining, deposition, specific weight, granulation. This is starting to do things I don’t understand, which is always a good thing.
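One way to picture the setup: each pigment gets a random draw for the four physical properties the post names. This is a hypothetical sketch; the `Pigment` class and the value ranges are invented, only the four property names come from the post.

```python
import random
from dataclasses import dataclass

@dataclass
class Pigment:
    """The four physical knobs named in the post; ranges are made up."""
    staining: float         # how strongly pigment fixes to the paper
    deposition: float       # rate of settling out of the fluid layer
    specific_weight: float  # relative density of pigment particles
    granulation: float      # tendency to clump in paper valleys

def random_pigment(rng):
    """Draw one pigment with randomized physical properties."""
    return Pigment(
        staining=rng.uniform(0.0, 1.0),
        deposition=rng.uniform(0.0, 1.0),
        specific_weight=rng.uniform(0.5, 2.0),
        granulation=rng.uniform(0.0, 1.0),
    )

rng = random.Random(42)
palette = [random_pigment(rng) for _ in range(3)]
```

With all four properties varying independently, unexpected interactions between them are exactly where the "things I don't understand" come from.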
Impossible pigments for impossible paint
To make impossible paint, you need impossible pigments.