Category Archives: video

Garden Timelapse: Sugar Ann Peas

Timelapse video of sugar snap pea plants growing from tiny sprouts over the course of six weeks.

I’ve continued to shoot timelapse video of the various veggies we grow in the garden. Here’s one more from last fall, of some sugar snap peas. They grew so fast I had to shoot twice a day! I was blown away by the origami-like way each pair of leaves unfolds as the vines grow, and the branching tendrils that whip out and grab on to whatever they can find. You might also notice some sudden changes in early September: we had a brutal heat wave that almost killed the plants. Some quick thinking and an old kid-sized umbrella saved them from total destruction. Not all of the plants survived, but the ones that made it produced crunchy pods as sweet as candy. Will definitely grow these again!

“Believable Acting” video now online

ACM SIGGRAPH has posted the video of my April 12 talk about our team’s work on believable acting for autonomous animated characters. This was a really fun one to do. Most conferences limit you to 25 minutes for technical talks, but we’ve always had far more material than fits in that slot! The San Francisco SIGGRAPH chapter’s talk format is comfortably open-ended, so I was able to spend a full hour and go a lot deeper without rushing, and still leave plenty of time for Q&A.

Huge thanks to Henry LaBounta and the SF SIGGRAPH organizers for inviting me, and to the audience for showing up and asking such thoughtful and interesting questions!

Homemade ink timelapse

My dear friend Eric Rodenbeck has been experimenting with creating his own homemade inks and paints from natural materials. Some of the inks mysteriously change in texture, and even color, as they dry. After months of looking at Eric’s paintings, I was intensely curious to see how these changes would look as they were happening. So, of course, I had to shoot some timelapse footage.

The inks I used here are hibiscus + lemon (pale red), hibiscus + orange peel (magenta), carrot greens + alum (yellow), and a sprinkling of sea salt for texture. Time span: about 1 hour.

If you pay close attention, something really strange happens about 11 seconds into the video, when I add some yellow ink: wherever the yellow mixes with the magenta, the mixture turns a deep bluish green! What is going on there?

Timelapse video of homemade inks flowing across watercolor paper. Some yellow ink flows into a puddle of magenta, and wherever the two inks touch, the mixture turns a deep bluish green color.
Magenta + Yellow = Green?

It turns out that hibiscus gets its color from a type of pigment called an anthocyanin, whose structure and color are pH-sensitive. In an acidic environment it’s red, but in an alkaline environment it turns blue. Since the yellow ink is alkaline, it turns the red hibiscus pigment blue on contact, and that blue then mixes with the yellow ink’s own pigment to become a lovely vibrant green.

Here are some more photos from the day. Hopefully this will be the first of many such experiments!

Homemade inks

Impossible Paint: Asemic Writing

The Genuary prompt for day 14 is “asemic”, i.e. writing without meaning, which is something I’ve always loved. I thought it might be fun to try doing that with my watercolor simulation. Reader, I was not disappointed.

Each time I rerun the simulation with a different random seed, it comes to life in a different way. It turns out the Perlin noise that drives the brush movement isn’t affected by the seed, so “what” it writes stays the same, while “how” it’s written changes. That consistency seems to deepen the illusion of intentionality, which makes me super happy.
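
To make the idea concrete, here’s a minimal Python sketch of that seed split. It’s not the Unity simulator itself, and smooth_noise() is just a hypothetical stand-in for Perlin noise: the stroke path is driven only by a fixed noise seed, so the “writing” is identical on every run, while the paint jitter takes a per-run seed.

# Minimal sketch (Python, not the Unity simulator): smooth_noise() is a
# hypothetical stand-in for the Perlin noise that drives the brush.
import math
import random

def smooth_noise(t, seed):
    # A few sine octaves with phases fixed by `seed`; a cheap Perlin substitute.
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(4)]
    return sum(math.sin(t * 2.0 ** i + phases[i]) / 2.0 ** i for i in range(4))

NOISE_SEED = 12345  # fixed: the "what" -- the written shapes never change

def brush_path(n_points):
    # Stroke path driven only by the fixed noise seed.
    return [(smooth_noise(0.1 * i, NOISE_SEED),
             smooth_noise(0.1 * i + 100.0, NOISE_SEED)) for i in range(n_points)]

def paint_stroke(path, run_seed):
    # Paint behaviour (pigment jitter) driven by the per-run seed: the "how".
    rng = random.Random(run_seed)
    return [(x + rng.gauss(0.0, 0.02), y + rng.gauss(0.0, 0.02)) for x, y in path]

path = brush_path(200)                                         # identical every run
variations = [paint_stroke(path, seed) for seed in (1, 2, 3)]  # differs per run

Changing the seeds in the last line changes the paint texture but never the letterforms, which is exactly the effect described above.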

This isn’t my first time tinkering with procedurally generated asemic writing. That was in 1996, when I was working at PDI in Sunnyvale. There was a small group of us who were curious about algorithmic art, and we formed a short-lived club (unofficially known as “Pacific Dada Images”) that was much in the spirit of Genuary: we’d set ourselves a challenge, go off to our desks to tinker, and then meet in the screening room to share the results. The video above came from the challenge: “you have one hour to generate one minute of footage, using any of the software in PDI’s toolset”. I generated the curves in PDI’s homegrown scripting language, and rendered them using a command line tool called mp2r (which Drew Olbrich had written for me to use on Brick-a-Brac).

Genuary 2023: Impossible Paint

I love to tinker with code that makes pictures. Most of that tinkering happens in private, often because it’s for a project that’s still under wraps. But I so enjoy watching the process and progress of generative artists who post their work online, and I’ve always thought it would be fun to share my own stuff that way. So when I heard about Genuary, the pull was too strong to resist.

Here’s a snapshot of some work in progress, using a realtime watercolor simulator I’ve been writing in Unity. Some thoughts on what I’m doing here: it turns out I’m not super interested in mimicking reality. But I get really excited about the qualia of real materials, because they kick back at you in such wonderful and unexpected ways. What I seem to be after is a sort of caricature of those phenomena: I want it to give me those feelings, and if it bends the laws of physics, so much the better. Thus, Impossible Paint.

Toward Believable Acting for Autonomous Animated Characters

Last month I had the pleasure of presenting some of my team’s recent research at MIG ’22. It’s our first publication, on a topic I care deeply about: acting for autonomous animated characters. Why do NPCs in video games seem so far behind, in terms of acting ability, compared to their counterparts in animated movies? How might we make their acting more believable? This is one of those hard, fascinating problems that are just starting to become tractable thanks to recent advances in machine learning. I’ll have more to say about it soon, but for now, here’s a short video that explains the first small steps we’ve taken in that direction:

You can find the above video, our paper, and also a recording of our 25-minute talk on our new site for the project: https://believable-acting.github.io/

Monster Mash in Two Minute Papers!

If you’re any kind of graphics geek, you’re probably familiar with the outstanding YouTube channel, Two Minute Papers. If not, you’re in for a treat! In this series, Károly Zsolnai-Fehér picks papers from the latest computer graphics and vision conferences, edits their videos, and adds commentary and context to highlight the most interesting bits of the research. But what really makes the series great is his delivery: he is so genuinely excited about the fast pace of graphics research that it’s pretty much impossible to come away without catching some of that excitement yourself.

What an honor to have that firehose of enthusiasm pointed at our work for two minutes!

Polarized Rainbow!

Maybe this should have been obvious, but it took me totally by surprise: rainbows are made almost entirely of polarized light! (I’m guessing this is because of how the light bounces off the insides of the raindrops on its way back to you.) So if you put on polarized lenses (like some sunglasses) and tilt your head sideways, you can make them nearly disappear, or make them look twice as bright against the non-polarized sky!
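
That guess is on the right track. A rough back-of-the-envelope (standard optics, not from the original post): the bounce inside each raindrop happens close to Brewster’s angle for the water-to-air surface, so the reflected light comes out strongly polarized tangent to the arc.

% Brewster's angle for the internal water-to-air reflection, with n_water ~ 1.33:
\[
\theta_B = \arctan\!\left(\frac{n_{\text{air}}}{n_{\text{water}}}\right)
         = \arctan\!\left(\frac{1}{1.33}\right) \approx 36.9^\circ
\]
% The ray that forms the primary bow reflects inside the drop at roughly 40 degrees,
% only a few degrees from \theta_B, so the light that reaches your eye is very
% strongly (though not perfectly) polarized, which is why tilting polarized
% lenses can nearly wipe the rainbow out.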