All posts by Cassidy Curtis

“Dear Upstairs Neighbors” at Sundance Story Forum!

Here’s some exciting news: we made a short film! I worked on this for much of last year, alongside a multitalented and diverse group of artists, engineers, researchers, and musicians. Everyone wore several hats on this project, and for me it was a welcome return to character animation. I got to supervise a small but mighty team of animators (both 2D and 3D) and even animate a few shots myself! This was also a chance for me to dive into working with generative models, discover what they’re actually good for, and help the researchers developing them make them useful to us, so we could wield them as artistic tools. The process was full of really interesting surprises! You can read more about how we did it here.

We’ve been invited to screen the film, and do an hour-long panel about how we made it, at the Sundance Institute’s Story Forum today. I’m thrilled to be back at Sundance (the last time I was here was for Word Wars back in 2004!).

Was Lorenzato a synaesthete?

At the Palácio das Artes in Belo Horizonte, we saw an exhibit of the paintings of Amadeo Luciano Lorenzato, a self-taught Brazilian modern artist. Most of the paintings were figurative, or semi-abstract. But tucked in among them was this unique piece: an arrangement of colored rectangles, each labeled with a letter, spelling out the phrase “adoro apreciar a alvorada e o poente” (“I love to enjoy the sunrise and sunset”).

If you know me (or anyone with synaesthesia), a familiar pattern will jump out at you right away: each letter is always represented by the same color. Every O is red, every E royal blue, etc.

Could it be that Lorenzato was a synaesthete, and these are his personal colors?

Digging deeper into the pattern: each color is specific, not generic. For example, the letter P (which appears twice) is also blue, but it’s a deep navy blue, distinct from the royal blue of E. And the colors are unevenly distributed around the color wheel: there are four distinct shades of green (C, I, V, and T), two blues (E and P), only one each of red (O), yellow (N), brown (L), black (A), gray (R), and white (D), and no examples of orange or purple. This combination of arbitrariness and specificity is typical of the grapheme-color mappings of most synaesthetes.

It’s remotely possible that he was a pseudo-synaesthete (someone wrongly believed to have synaesthesia because they used synaesthesia-like mappings as a device in their art), but Lorenzato does not seem to have been the kind of artist to follow fashionable trends (see the note on the back of one of his paintings, below). And a cursory sweep of the grapheme-color mappings of his contemporaries shows no match even slightly better than chance.

So, thus far, all evidence suggests this is a case of true synaesthesia. But since the man himself passed away thirty years ago, we may never know for sure. (If only he’d chosen a more pangrammatical verse for his painting…)

“Amadeo Luciano Lorenzato, self-taught painter and lone wolf. No schooling. Doesn’t follow trends. Doesn’t belong to any cliques. Joins in as it suits him.”

Big Wet Pixels Live at THU 2024

THU was an unforgettable experience. If you ever get the chance to go, I highly recommend it! I don’t have time right now to do a writeup of the full event, but I can tell you about one slice of it: my demo of Big Wet Pixels was a lot of fun!

One big concern I had in the weeks leading up to the event was the logistics. How could I show my system painting a live subject (ideally a volunteer) on stage in front of an audience? For the experience to work well, the setup would have to fit a lot of constraints: I’d have to be able to reach my laptop keyboard and see the screen. My camera would have to see the subject. The subject would have to see what’s happening on the screen, and also see me. And the audience would have to see the screen, the subject, and me. I wasn’t sure there was a solution to this problem in Euclidean 3-space that didn’t violate any laws of physics.

Then I got to visit the venue. And it looked like this:

The stage setup in the BYD Room at the Tróia Conference Center.

A stage in the center, five rows of seats all around, and great big LED screens on all four sides of the room! It was perfect. Better than perfect. It’s immersive, intimate, and equitable: there are literally no bad seats in the house. And from the stage, you feel close to every single person in the audience. This was by far the most innovative setup I’ve ever seen for a conference venue.

It took a couple of tech-check sessions to get everything working (note to self: 15 minutes is not enough time to test out such a complex setup!). I also had to make a few last-minute tweaks to my code: making sure the output would fit the aspect ratio of the big screens, and adding controls to adjust the framing of the subject on the fly (arrow keys are much quicker than trying to adjust a tripod head supporting a long, heavy lens!). But when the time came for the actual demo, everything worked perfectly.
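(For the curious, the framing control amounts to something like the minimal NumPy sketch below. It’s not the actual Big Wet Pixels code; the function names and nudge amounts are invented for illustration. But it shows the idea: crop the camera frame to the screens’ aspect ratio, and let the arrow keys shift that crop around.)

```python
import numpy as np

# Hypothetical sketch (not the real Big Wet Pixels code): crop a camera
# frame to a target aspect ratio, with an offset that arrow keys can nudge.

def crop_to_aspect(frame, target_aspect, dx=0, dy=0):
    """Return the largest crop of `frame` with the given width/height
    aspect ratio, centered and then shifted by (dx, dy) pixels."""
    h, w = frame.shape[:2]
    if w / h > target_aspect:              # frame too wide: trim the sides
        crop_w, crop_h = int(h * target_aspect), h
    else:                                  # frame too tall: trim top/bottom
        crop_w, crop_h = w, int(w / target_aspect)
    # Center the crop, apply the offset, and clamp so it stays in bounds.
    x0 = int(np.clip((w - crop_w) // 2 + dx, 0, w - crop_w))
    y0 = int(np.clip((h - crop_h) // 2 + dy, 0, h - crop_h))
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

# Each arrow key just nudges the offset by a few pixels per press.
NUDGE = {"left": (-10, 0), "right": (10, 0), "up": (0, -10), "down": (0, 10)}

def handle_key(key, dx, dy):
    ddx, ddy = NUDGE.get(key, (0, 0))
    return dx + ddx, dy + ddy
```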

Miguel Pólvora, a member of the THU team, helps me test out the system during our tech check the night before my talk. Here you can see the full setup: laptop on the table, Canon 80D on the tripod, connected to the laptop via USB-C, and a chair on the other side of the stage. I especially liked how the colorful room lighting worked to provide a contrasting backdrop for the subject.
A particularly nice screenshot of Miguel from our tech check. Look at those gorgeous flow patterns!

I didn’t do much advance planning for how I’d operate the system; I just improvised in response to what was happening in the moment. So we only ended up exploring a sliver of the vast parameter space. But the participants who came up on stage seemed to have a lot of fun with it, posing for the camera, making faces, and using their hands and other props.

Olga Andryenko
Flavia Chiofalo

The audience asked really interesting questions, and I encouraged them to give me suggestions for what to do with the controls. (In a nod to Zach Lieberman, I told them, “the DJ does take requests!”) Before I knew it, we were out of time, and I had to pack up my gear and leave this beautiful room for the next speaker.

A panoramic view of the venue.
A few of the participants who sat for their portraits and/or asked interesting questions in the Q&A.

Big Wet Pixels 14

Raquel practices guitar while my watercolor simulation paints her portrait.

Here you can also see a first glimpse of the GUI I’m building for live performance. This was my first dry run of the whole system, including webcam and external monitor. It could still use some optimization: the frame rate dropped to 16 fps, I think due to thermal throttling. (You can hear how hard the poor laptop is working… just listen to that fan!)

Big Wet Pixels 12

Spent some time refining control of the instrument: a richer parameter space for stroke planning; new colorspace mapping that handles poor camera conditions (over- or underexposure); two new types of limited color palette (inspired by the gorgeous plein air experiments of my friend Susan Hayden). I’ve also exposed a bunch of fluid sim parameters so they can be tweaked on the fly for maximum effect. This is starting to feel almost robust enough to play live on stage. Lots more work to do though…
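(To give a rough sense of what I mean by the exposure-tolerant colorspace mapping and the limited palettes, here’s a toy Python sketch. It isn’t the real code, and the percentile thresholds and palette colors are invented, but it captures the two ideas: stretch a badly exposed frame back into a usable range, then snap every pixel to the nearest color in a small palette.)

```python
import numpy as np

# Toy illustration (not the real code): normalize exposure, then snap
# each pixel to the nearest color in a small hand-picked palette.

def normalize_exposure(img, lo_pct=2, hi_pct=98):
    """Stretch the value range so over- or underexposed frames
    still span roughly the full 0..1 range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def map_to_palette(img, palette):
    """Replace each pixel with its nearest palette color (Euclidean in RGB)."""
    flat = img.reshape(-1, 3)                                 # (N, 3)
    d2 = ((flat[:, None, :] - palette[None]) ** 2).sum(-1)    # (N, P)
    return palette[d2.argmin(axis=1)].reshape(img.shape)

# e.g. a made-up three-color palette, roughly in the plein air spirit
PALETTE = np.array([[0.93, 0.90, 0.82],   # warm paper white
                    [0.20, 0.30, 0.45],   # slate blue
                    [0.60, 0.25, 0.20]])  # burnt sienna

# quantized = map_to_palette(normalize_exposure(frame), PALETTE)
```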

Trojan Unicorn 2024!


Okay here’s something fun: this goofball will be speaking at the THU Main Event 2024 in Tróia, Portugal! For those who don’t know it, THU is an annual gathering of artists working in film, games, comics, and more. Or, as the organizers describe it: “a 6-day event designed to be a transformative experience filled with innovation, genuine connections, and a creative mindset. It’s the perfect place to take some time for yourself, reset, and fill your mind with innovation and creativity. THU2024 marks the 10th edition, and it promises to be an epic event with an unbelievable lineup of artists and the ultimate creative reboot!”

I’ll be giving talks about animation, non-photorealistic rendering, and my recent research, and (if all goes as planned) also a live demo of Big Wet Pixels with audience participation!

Big Wet Pixels: Friends and Family

Here are some portraits of friends and family I’ve made over the past few months. I love seeing people through this strange lens. Originally I imagined this system as something that would stand on its own in a gallery, automatically painting whoever stopped by. But there’s something special about the way people react in real time as the machine paints their portrait. There’s a feedback loop between subject, painter, and machine. So now I’m working on making the system more interactive, so people can see what’s going on inside the black box, and I can play the controls like a musical instrument.

COGGRAPH 2024

Image excerpted from Understanding Comics by Scott McCloud (1993)

Here’s something new! A group of researchers from MIT, Stanford, Cambridge, and UW Madison have put together a new interdisciplinary workshop “at the interface between cognitive science 🧠 and computer graphics 🫖”, aptly named COGGRAPH. I’ll be on a panel about non-photorealistic rendering next Tuesday, July 16th, at 11am Pacific (2pm ET). It’s virtual, free, and open to the public. If you’re interested, you can sign up here to see it!