The web app had been languishing for years, as one platform after another dropped support for the Flash player it depended on: first Apple refused to allow it to run on iOS, then Google’s Chrome browser stopped allowing it, and finally in 2020 Adobe retired Flash entirely. But then last year, I learned about Ruffle.rs, a shiny new Flash Player emulator written in Rust. I tried it out, but it was missing some features that broke our user interface. So I filed a bug report, though I didn’t have great expectations that it’d get fixed anytime soon. After all, it’s an open source project run by volunteers, who I’m sure have much more important things to do than fix bugs in weird old web art projects.
Well, this weekend, one of Ruffle’s amazing and generous developers went ahead and added the missing feature. And just like that, our app is up and running again! Not only that, but it runs in places it has never run before, like iPhones and other iOS devices!
The experience on iOS isn’t perfect, mainly because we developed the UI for desktop computers with keyboards and mice, not touchscreens and thumbs (remember, this was about five years before the first iPhone came out!). Some features, like the tooltips that appear when you hover over a button, will never work on a touchscreen, because there’s no such thing as hovering without clicking. Other things just feel clunky, like the fact that you can’t pinch to zoom (another now-ubiquitous UX metaphor that hadn’t yet been popularized). But even with those limitations, seeing our twenty-year-old project running on modern hardware is a total thrill.
I’m incredibly grateful to the Ruffle developers for making this possible. The world may be a mess, but communities like this are a good reminder that sometimes, if we work together, we can have nice things.
My dear friend Eric Rodenbeck has been experimenting with creating his own homemade inks and paints from natural materials. Some of the inks mysteriously change in texture, and even color, as they dry. After months of looking at Eric’s paintings, I was intensely curious to see how these changes would look as they were happening. So, of course, I had to shoot some timelapse footage.
The inks I used here are hibiscus + lemon (pale red), hibiscus + orange peel (magenta), carrot greens + alum (yellow), and a sprinkling of sea salt for texture. Time span: about 1 hour.
If you pay close attention, something really strange happens about 11 seconds into the video, when I added some yellow ink: wherever the yellow mixes with the magenta, the mixture turns a deep bluish green! What is going on there?
Magenta + Yellow = Green?
It turns out that hibiscus gets its color from a type of pigment called an anthocyanin, whose structure and color are pH-sensitive. In an acidic environment it’s red, but in an alkaline environment it turns blue. Since the yellow ink is alkaline, it turns the red hibiscus blue on contact, and that blue then mixes with the ink’s yellow pigment, becoming a lovely vibrant green.
Here are some more photos from the day. Hopefully this will be the first of many such experiments!
The Genuary prompt for day 14 is “asemic”, i.e. writing without meaning, which is something I’ve always loved. I thought it might be fun to try doing that with my watercolor simulation. Reader, I was not disappointed.
When we rerun the simulation with a different random seed each time, it comes to life in a different way. It turns out the Perlin noise that drives the brush movement isn’t affected by the seed, so “what” it writes stays the same, while “how” it’s written changes. The consistency seems to deepen the illusion of intentionality, which makes me super happy.
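Here’s a minimal sketch of that separation, just to make the idea concrete. This is not the actual simulation code: the noise function, the parameter names (wetness, pigment load), and the ranges are all stand-ins. The point is only that the brush path reads from a noise function that ignores the seed, while everything that should vary from run to run goes through the seeded random number generator.

```python
import math
import random

def smooth_noise(t, channel=0):
    """Deterministic 1-D value noise (a stand-in for Perlin noise): it hashes
    integer lattice points, so the same input always gives the same output,
    no matter what the global random seed is."""
    def lattice(i):
        h = (i * 374761393 + channel * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return h / 0xFFFFFFFF * 2.0 - 1.0
    i, frac = int(math.floor(t)), t - math.floor(t)
    u = frac * frac * (3 - 2 * frac)  # smoothstep interpolation
    return lattice(i) * (1 - u) + lattice(i + 1) * u

def simulate_stroke(seed, steps=200):
    """The brush path ("what" gets written) comes only from smooth_noise,
    so it's identical for every seed. The seeded RNG perturbs the physical
    behavior ("how" it's written): wetness, pigment load, and so on."""
    rng = random.Random(seed)
    stroke = []
    for step in range(steps):
        t = step * 0.05
        x, y = smooth_noise(t, channel=0), smooth_noise(t, channel=1)
        wetness = 0.5 + 0.5 * rng.random()       # varies from run to run
        pigment_load = rng.uniform(0.1, 1.0)     # varies from run to run
        stroke.append((x, y, wetness, pigment_load))
    return stroke

# Two runs write the same glyphs, but the paint behaves differently:
run_a = simulate_stroke(seed=1)
run_b = simulate_stroke(seed=2)
assert [(x, y) for x, y, *_ in run_a] == [(x, y) for x, y, *_ in run_b]
```

The design choice that matters is that the noise is a pure function of position, so it’s immune to reseeding; only the randomness that should change per run flows through the seeded generator.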
This isn’t my first time tinkering with procedurally generated asemic writing. That was in 1996, when I was working at PDI in Sunnyvale. There was a small group of us who were curious about algorithmic art, and we formed a short-lived club (unofficially known as “Pacific Dada Images”) that was much in the spirit of Genuary: we’d set ourselves a challenge, go off to our desks to tinker, and then meet in the screening room to share the results. The video above came from this challenge: “you have one hour to generate one minute of footage, using any of the software in PDI’s toolset”. I generated the curves in PDI’s homegrown scripting language, and rendered them using a command line tool called mp2r (which Drew Olbrich had written for me to use on Brick-a-Brac).
Now with randomized physical properties: staining, deposition, specific weight, granulation. This is starting to do things I don’t understand, which is always a good thing.
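For anyone curious what “randomized physical properties” means in practice, here’s a hedged sketch. The parameter names echo the ones above, but the structure and the ranges are my own invention, not the simulation’s actual values.

```python
import random
from dataclasses import dataclass

@dataclass
class Pigment:
    """Per-pigment physical properties (hypothetical ranges)."""
    staining: float         # how strongly pigment binds to the paper once dry
    deposition: float       # how readily it settles out of the water layer
    specific_weight: float  # heavier pigments sink and pool in wet areas
    granulation: float      # tendency to clump in the paper's texture

def random_pigment(rng: random.Random) -> Pigment:
    # Each run of the simulation could draw a fresh set of properties,
    # so the same "writing" gets painted with different-feeling paint.
    return Pigment(
        staining=rng.uniform(0.0, 1.0),
        deposition=rng.uniform(0.05, 0.5),
        specific_weight=rng.uniform(0.5, 2.0),
        granulation=rng.uniform(0.0, 1.0),
    )

palette = [random_pigment(random.Random(seed)) for seed in range(3)]
```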
It’s an interesting time to be an artist. As machine learning becomes part of the toolkit, in different ways for different people, new ideas are shaking loose, and I feel compelled to write about them as a way of wrapping my head around the whole thing.
The most recent headquake hit me by way of the ML-assisted album Chain Tripping by post-punk-pop band YACHT. Here’s a great Google I/O talk by bandleader Claire Evans that describes just how they made it. (TL;DR: no, the machines are not coming for your jobs, songwriters! Using ML actually made the process slower: it took them three years to finish the album.) This case is interesting for what it tells us not just about the limitations of current AI techniques, but also about the creative process, and about what makes people enjoy music.
In music there’s this idea that enjoyment comes from a combination of the familiar and the unexpected. For example, a familiar arrangement of instruments, a familiar playing style, with a surprising melody or bass line. Maybe it works like visual indeterminacy: it keeps you interested by keeping you guessing.
As genres go, pop music is particularly information-sparse. What I take from YACHT’s example is that low-level noise (nearly random arrangements of words and notes) can produce occasional bursts of semi-intelligible stuff. By manually curating the best of that stuff and arranging it, they pushed the surprise factor well above the usual threshold for a pop song. And then they provided the familiarity piece by playing their instruments and singing in their own recognizable style. The result: it’s pretty damn catchy.
So if you like the album, what is it exactly that you like? It sounds to me like what you’re enjoying is not so much the ML algorithm’s copious output of melodies and lyrics, but YACHT’s taste in selecting the gems from within it. So far, so good. But there’s another piece of this puzzle that makes me question whether this analysis is going deep enough.
Lyrics from SCATTERHEAD: Time flies and I feel / but I can’t hear / palm of your eye / is it empty, memory?
The first time I watched the video for SCATTERHEAD, one lyric fragment jumped out at me: “palm of your eye”. I’m not alone: NPR Music’s review calls it out specifically as a “lovely phrase … which pins the lilting chorus into place”. But it jumped out at me for a rather different reason: I’d heard those exact words before. I immediately recognized them from Joanna Newsom’s 2004 song Peach, Plum, Pear.
I have read the right books / to interpret your look / you were knocking me down / with the palm of your eye
At the time, not knowing anything about YACHT’s process, I assumed they were making an overt, knowing reference to Newsom’s song. But then I learned how they generated their lyrics: they trained the ML model on the lyrics of their own back catalog plus the entire discography of all of the artists that influenced them. This opens up another plausible explanation: it could be that Newsom was among those influences, the model lifted her lyric wholesale, and YACHT simply failed to recognize it. If that’s the case, it would mean the ML model performed a sort of money-laundering operation on authorship. YACHT gets plausible deniability. Everyone wins.
This sounds like a scathing indictment of YACHT or of ML, but I honestly don’t mean it that way. It really isn’t that different from what happens in the creative process normally. Humans are notoriously bad at remembering where their own ideas come from. It’s all too common for two people to walk away from a shared conversation, each thinking he came up with a key idea. For example: witness the recent kerfuffle about the Ganbreeder images, created by one artist using software developed by another artist, unknowingly appropriated by a third artist who thought he had “discovered” them in latent space, and then exhibited and sold in a gallery. So, great, now we have yet one more way that ML can cloud questions of authorship in art.
But maybe authorship isn’t actually as important as we think it is. Growing up in our modern capitalist society, we’ve been trained to value the idea of intellectual property. It’s baked into how working artists earn their living, and it permeates all kinds of conversations around art and technology. We assume that coming up with an original idea means you own that idea (dot dot dot, profit!). But capitalism is a pretty recent invention, and for most of human history this is not how culture worked. Good ideas take hold in a culture by being shared, repeated, modded and remixed. Maybe there’s a way forward from here, to a world where culture can be culture, and artists can survive and even thrive, without the need to cordon off and commodify every little thing they do. It’s a nice dream, at any rate.
At some level this is just me, sticking a toe in the water, as I get ready to add ML to my own toolkit. (It’s taken me this long to get over my initial discomfort at the very thought of it…) When I do jump in, we’ll see how long I can keep my eyes open.
I’m talking about “Spider-Man: Into the Spider-Verse” of course. I’ve seen it twice on the big screen, and already want to see it again. (If you still haven’t seen it, you are missing a major milestone in film history. Get off your tuchis and go to the movies already!)
There’s a moment in the film when our newly super-empowered Afro-Latino hero Miles Morales and the original Spider-Man Peter Parker meet for the first time. Their spidey-senses activate, and suddenly they both realize what they have in common. “You’re like me!” That moment of recognition, beyond its first purpose of conveying the powerful “anyone can wear the mask” message of inclusion, hit me personally on a whole different level. I found myself looking through the screen, senses buzzing, at the amazing team of artists and technologists who made it, people who really get it: the idea that when you take the art seriously, when you use every step of the process to amplify that artistic voice instead of sanding off its rough edges, when you’re willing to break the pipeline and challenge “how it’s usually done”, that’s when you can make something special, unique, and meaningful. This movie is a triumph, and every single person involved in making it should be incredibly proud. I see what you did, I know exactly how hard it was to do it, and I see you.
I can’t wait to watch this a few more times to soak in all the details: the smear frames, the animation on twos, the silhouette lines and suggestive contours, the halftones and Kirby dots, the CMYK misprints, the world-class acting choices, the strong poses, the colors and lighting, that crazy Sienkiewicz flashback, all of it.
I also hope this marks a turning point for the animation industry. Listen to your artists. Trust them. Let their work shine on the big screen the way they meant it to look. And don’t let anyone tell you what “can’t be done” with the look of your film. The non-photorealistic rendering community has been building the technology to do this, literally, for decades. Let’s use it!
Social media are always showing me photos of “People You May Know”. I usually don’t know them. But they look interesting, and I need practice. So I draw them.
Look, I made a Tumblr. I might even update it once in a while.