Category Archives: perception hacking

COGGRAPH 2024

Image excerpted from Understanding Comics by Scott McCloud (1993)

Here’s something new! A group of researchers from MIT, Stanford, Cambridge and UW Madison have put together a new interdisciplinary workshop “at the interface between cognitive science 🧠 and computer graphics 🫖”, aptly named COGGRAPH. I’ll be on a panel about non-photorealistic rendering next Tuesday, July 16th, at 11am Pacific (2pm ET). It’s virtual, free, and open to the public. If you’re interested, you can sign up here to see it!

Polarized Rainbow!

Maybe this should have been obvious, but it took me totally by surprise: rainbows are made entirely of polarized light! (I’m guessing this is because of how the light bounces off the insides of the raindrops on its way back to you.) So if you put on polarized lenses (like some sunglasses) and tilt your head sideways, you can make them disappear— or make them look twice as bright against the non-polarized sky!
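If you want to sanity-check that guess, nothing fancier than Snell’s law will do it. The ray that forms the primary rainbow reflects once off the back of the drop, and that internal reflection happens at an angle close to Brewster’s angle for a water–air surface, which is exactly the condition for the reflected light to come out strongly polarized. A rough sketch, with water’s refractive index as the only input:

```python
# Rough check: compare the internal-reflection angle of the rainbow-forming
# ray against Brewster's angle for a water-air surface.
import math

n = 1.333  # refractive index of water

# Incidence angle of the minimum-deviation ("rainbow") ray: cos(i) = sqrt((n^2 - 1) / 3)
i = math.acos(math.sqrt((n**2 - 1) / 3))

# Refraction angle inside the drop; this is also the angle at which the ray
# strikes the back of the drop for its single internal reflection.
r = math.asin(math.sin(i) / n)

# Brewster's angle for reflection off the inside of the water-air surface.
brewster = math.atan(1 / n)

print(f"internal reflection: {math.degrees(r):.1f} degrees")        # about 40.2
print(f"Brewster's angle:    {math.degrees(brewster):.1f} degrees")  # about 36.9
```

The two angles land within a few degrees of each other, so the light bouncing back out of the drop is mostly polarized, which fits the guess.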

Why do we hate high frame rates?

At the VR Storytelling Meetup last night, an interesting conversation with the other panelists got me thinking about frame rates again. Apparently, for filmmakers shooting live-action 360º video, the high frame rate required for playback in a VR device can be a bit of an obstacle. Not just technically, but psychologically: it’s a turnoff for the audience.

I felt that emotional turnoff when I finally saw Peter Jackson’s first Hobbit movie at 48 frames per second. It was astonishing and beautiful in the sweeping exterior shots. But when it was just characters sitting and talking, it felt… fake. I found myself scrutinizing the makeup, looking for flaws and finding them. At the time I attributed it to a cultural bias: because I grew up in an era when high quality entertainment came in the form of 24p films, and cheesy soap operas were 60i video, I must subconsciously associate high frame rates with low quality.

But what if there’s more to it than that?

In a recent interview about Pearl, Patrick Osborne pointed out that simplifying the visual style, removing texture and detail, leaves room for the audience to put themselves into the characters. It lowers a barrier to empathy. Scott McCloud said as much in Understanding Comics. This is why I’ve always preferred non-photorealism over realism. It’s what you leave out that counts.

What if a similar mechanic is at work around the question of frame rates?  The secondhand report from the live action VR filmmaker was that at 60fps, it felt too obvious that the people were actors. You could look at a background character and tell instantly that they were pretending. Leaving aside the possibility that it may have just been bad acting: is it possible that the high frame rate itself lets you see through the ruse?  Could it be the density of information you’re receiving that pushes your perceptiveness over some threshold, and makes you a sharper lie detector?

And if that turns out to be the case: how should filmmakers respond?

On the rules for VR

SIGGRAPH attendees are a sophisticated audience, so demoing Pearl in the VR Village last week led to some really interesting conversations.  One thing I heard more than once was this idea that to do storytelling in VR, we have to throw out all the rules of traditional cinema. While I appreciate the swashbuckling spirit of that sentiment, I don’t think it’s actually true.

drawing by Elizabeth Floyd

I had a life drawing instructor in college who used to teach us rules like “highlights are circular, and shadows are triangular.” As a math major, this really bothered me at the time, because taken literally it was provably false– just give me a flashlight and a grapefruit and I’ll show you! But that was missing his point. The human body is made of smooth, convex masses, and the highlights on them do indeed tend to be round. And when one limb casts a shadow on another, the contour of the shadow’s edge wraps around and hits the silhouette at an angle, forming a sharp point. In other words, “triangular”. So my teacher’s rule, within the context of human figure drawing, was totally valid and actually pretty insightful. But it wasn’t a law of nature, it was something he invented. And to construct it, he had to synthesize knowledge from human anatomy, physics, geometry, and visual perception.

The rules of filmmaking seem atomic and universal to us, but they’re not. Like the “triangular/circular” rule, they’re chimaeras, hybrid creatures assembled from bits of wisdom from different disciplines. They’re not real the way math and biology are real; we’re just so used to them that we mistake them for reality.

illustration of the 180-degree rule

For example, take film’s 180º rule. That’s the rule that says if you’re shooting a conversation between two characters, there’s an imaginary line connecting them, and when you cut from shot to shot, you always have to keep the camera on the same side of it. Cross that line, and you risk confusing your audience. This rule has elements of geometry (projecting 3D space to a plane), perception (how humans construct mental models of 3D space) and psychology (how we organize those models based on relationships between people). That’s a lot of moving parts! Now imagine trying to apply this rule to a VR experience where you can walk around the scene. Some of those elements change (the flat screen becomes a volume) but the perception and psychology parts are still there. So the question is not whether to keep the 180º rule or throw it away. The question to ask is which parts do we keep, and what else do we add into the mix, to construct a new rule that works for VR?
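To make just the geometry ingredient concrete, here’s a toy version of that one piece of the rule (the names and positions below are made up): the line of action runs through the two characters, it splits the floor plan in two, and a cut breaks the rule when the camera jumps to the other side.

```python
# Toy model of the geometric half of the 180-degree rule: the "line of action"
# through the two characters divides the floor plan in two, and a cut breaks
# the rule when the camera ends up on the other side of it.

def side_of_line(a, b, p):
    """Sign of the 2D cross product: which side of the line a->b the point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def cut_crosses_line(char_a, char_b, cam_before, cam_after):
    """True if the camera positions before and after the cut sit on opposite sides."""
    return side_of_line(char_a, char_b, cam_before) * side_of_line(char_a, char_b, cam_after) < 0

# Two characters facing each other across a table, and two possible cuts:
alice, bob = (0.0, 0.0), (2.0, 0.0)
print(cut_crosses_line(alice, bob, (1.0, -3.0), (1.0, 3.0)))   # True: crossed the line
print(cut_crosses_line(alice, bob, (0.5, -3.0), (1.5, -2.0)))  # False: stayed on one side
```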

For VR storytelling, we shouldn’t have to throw out the rules of the mediums we know and love. But we can unpack them, dismantle them into their component parts, and analyze them at a deeper level than we’re used to doing. And that’s going to be a fun way to spend the next few years, for all of us.

Inceptionism and learning envy

Inceptionist squirrel is watching you!

A few days ago the image above started going around the social networks, attributed to “a friend working on AI”. It was apparently a deliberate leak, and now we know where it came from: a research group at Google working on neural networks for image recognition. Their research is worth reading about, and the images are fascinating: Inceptionism.

I have to admit, machine learning makes me jealous. Why does the machine get to do the learning? Why can’t I do the learning? But projects like this make me feel a little better. When the black box opens and we see the writhing mass inside, we get to learn how the machine does the learning. Everyone wins.

And the machines still have some catching up to do. As soon as I saw the amazing gallery of inceptionist images, I recognized the “inceptionist style” from the mysterious grey squirrel. Could a neural network do that?
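For the curious, the recipe behind those images is surprisingly compact: run a picture through a trained network, then nudge the picture itself by gradient ascent so that some chosen layer fires harder and harder, and whatever that layer “knows” starts surfacing in the pixels. Here’s a minimal sketch of the idea, assuming PyTorch, a pretrained GoogLeNet, and a hypothetical squirrel.jpg; the Google team’s actual code surely differs.

```python
# Sketch of the inceptionism idea: gradient ascent on the input image to
# amplify the activations of one intermediate layer of a pretrained network.
import torch
import torchvision.transforms.functional as TF
from torchvision import models
from PIL import Image

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

# Grab the activations of one mid-level layer with a forward hook.
acts = {}
model.inception4c.register_forward_hook(lambda mod, inp, out: acts.update(out=out))

img = TF.to_tensor(Image.open("squirrel.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(50):
    model(img)
    loss = acts["out"].norm()   # "make this layer fire harder"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0.0, 1.0)
        img.grad.zero_()

TF.to_pil_image(img.detach().squeeze(0)).save("squirrel_dream.jpg")
```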

Fun with Pseudocolor, Part Two

A more perceptually-uniform, though arguably less pretty, pseudocolor scheme.

Inspired by this brilliant interactive demo of the perceptually uniform CIE L*a*b* color space, I decided to try an L*a*b* version of my pseudocolor scheme. I don’t find this version as pretty to look at, but it has the advantage that higher values are always mapped to colors that are perceptually brighter than lower values. In other words, if you squint at the image above, the bright and dark regions correspond pretty much exactly to what you’d see if it were greyscale. (For the L*a*b* to RGB conversion, I grabbed pseudocode from this handy page.)
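In case anyone wants to play with it, here’s a minimal sketch of the approach; the spiral parameters below are arbitrary stand-ins rather than the exact ones I used. Lightness climbs steadily from dark to bright while the hue angle spins around the neutral axis, and each sample gets converted from L*a*b* to sRGB with anything outside the cube clipped.

```python
# Sketch of an L*a*b* pseudocolor ramp: a corkscrew through Lab space whose
# lightness rises monotonically, converted to sRGB and clipped to the cube.
import math

def lab_to_srgb(L, a, b):
    """CIE L*a*b* (D65 white point) -> sRGB components in [0, 1]."""
    # Lab -> XYZ
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    f_inv = lambda t: t**3 if t**3 > 0.008856 else (t - 16 / 116) / 7.787
    X, Y, Z = 0.95047 * f_inv(fx), 1.00000 * f_inv(fy), 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB
    r  =  3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g  = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl =  0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # Clip against the RGB cube, then gamma-encode.
    def encode(c):
        c = min(max(c, 0.0), 1.0)
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return encode(r), encode(g), encode(bl)

def pseudocolor(t, turns=2.0, chroma=40.0):
    """Map t in [0, 1] to a color whose perceptual lightness grows with t."""
    L = 10 + 85 * t                    # lightness rises monotonically
    angle = 2 * math.pi * turns * t    # hue spirals as the value climbs
    return lab_to_srgb(L, chroma * math.cos(angle), chroma * math.sin(angle))
```

Turning up the chroma argument gives you the more saturated, more heavily clipped look in the last image below.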

L*a*b* space is much bigger than RGB space, so the spiral gets clipped by the sides of the cube in a few places.

If you crank up the saturation, you do get more vivid colors, at the cost of a lot more clipping.

Meta-Perceptual Helmets

hammerhead

Cleary Connolly, an artist duo in Ireland, has built a lovely series of perception-altering helmets, including the “Hammerhead” (above), which is effectively a head-mounted, lens-based version of the Telestereoscope. The helmets look beautifully crafted and durable. The site has some great photos of the work in progress, and don’t miss the drawings of what they’ve got planned for the next helmets in the series. (I especially love the Siamese Helmet concept– great fun!)

Thanks to Sasha Magee for the link.

Eyeteleporter and Pinhole Selfies

From a planet not far from the Telestereoscope, two projects have just entered our universe…

The Eyeteleporter, a wearable cardboard periscope that displaces your vision about two feet in any direction:

eyeteleport_montage

And Pinhole Selfies, a delightful mashup of retro tech with millennial idiom:

camera

h007

Hat tip to Brock Hanson for the links.

Update: I just realized that the pinhole selfie photographer, Ignas Kutavicius, is the same fellow who invented the amazing solargraphy technique of capturing the sun’s movement with a long exposure pinhole camera. Brilliant!

solargraph1

A homegrown telestereoscope

Once in a while I get a random email from someone interested in checking out the Telestereoscope. If they’re local, I usually direct them to the CuriOdyssey museum, where we have a small one installed. But lately I’ve been encouraging anyone who’s interested to try building one for themselves. Our first prototype cost just a few dollars in materials, and can be put together in minutes. (Calibrating it takes a bit longer, but the process is educational, and ultimately quite rewarding.)

Here’s a working telestereoscope built by Will Rogers. He used metal C-clamps and some very interestingly shaped mirrors (maybe reclaimed from an old car?), giving his version a really distinctive style. I love it!

spozbo's telestereoscope

A camera lucida for all my friends!

My old friend Golan Levin and his collaborator Pablo Garcia have updated the camera lucida for the 21st century. Being a big fan of arcane optical devices, I had to have one. But this would be good for anyone who draws from life or is interested in learning to do so. Judging by the breathtaking rate at which this project’s getting backed, you probably have only a few hours to jump in. Back it here!