Brian Eno has taken many musical forms: producer, technologist, glam-rock star. Eno, the new documentary about the musician, also takes many forms, though more literally. Each showing of the movie, which opens today in New York at Film Forum, will be a different version. It is, according to its makers, “the first generative feature film,” meaning pieces of it will change shape and structure per viewing, thanks to some clever software ingenuity designed by director Gary Hustwit and his partner Brendan Dawes.
While Eno may be more famous as a member of Roxy Music or as the producer of David Bowie’s Berlin trilogy, the form of this documentary fits its subject: Eno himself has been making generative art for decades, from the avant-garde minimalism of 1975’s Discreet Music to the mutating soundtrack of Spore.
This is different from generative AI, though, which uses models trained on massive data sets to infer what they should spit out. Eno is crafted from 30 hours of interviews and 500 hours of film — a curated and ethically sourced data set — with certain pieces weighted to be more likely to appear. Basically, it follows a set of rules and logic written by Hustwit and Dawes. According to The New York Times, there are 52 quintillion possible versions of Eno. The two that I saw were immensely satisfying, with lots of overlap between them.
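The mechanics described here (a weighted pool of curated clips assembled by rules, rather than a model inferring output) can be sketched in a few lines of Python. This is purely a hypothetical illustration: Brain One’s actual rules, data, and code are proprietary, and the scene names and weights below are invented.

```python
import random

# Hypothetical scene pool; in the real system each clip carries a weight
# that makes it more or less likely to appear in a given screening.
scenes = [
    {"title": "Discreet Music", "weight": 3.0},
    {"title": "Roxy Music archive", "weight": 1.0},
    {"title": "Notebook session", "weight": 2.0},
]

def build_cut(pool, length, rng=random):
    """Draw a sequence of distinct scenes; higher-weighted clips are
    more likely to be chosen, so every screening differs but favors
    the same core material."""
    pool = list(pool)
    cut = []
    while pool and len(cut) < length:
        weights = [s["weight"] for s in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]
        pool.remove(pick)  # no clip repeats within one cut
        cut.append(pick["title"])
    return cut

print(build_cut(scenes, 2))
```

Running it twice produces two different (but overlapping) cuts, which is the gist of how 500 hours of footage can yield quintillions of distinct versions.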
Eno is an unexpected documentary in other ways, too. You might expect a movie with so many possibilities to be broad, yet the scope remains narrow. Rather than taking a sweeping look at the musician’s long career, it expounds on his philosophies about creativity. The film deploys some great archival footage, but there are no other talking heads (although there is plenty of Talking Heads). It’s an approach you might expect from filmmaker Hustwit, best known for Helvetica, a doc that takes the seemingly niche topic of a single typeface and expresses how wide-reaching its design influence has been.
But the ambitions of Eno are greater than the film itself. Hustwit wants to keep exploring generative filmmaking using Brain One, the name of the software behind Eno (as well as of a piece of hardware designed by Teenage Engineering). The director spoke to The Verge about his ever-changing documentary, the bespoke patent-pending technology that fueled its creation, and how the movie inadvertently met this AI moment and did something more ambitious by thinking smaller about generative art.
The interview has been edited and condensed.
I’ll just start with the obvious one. Why Brian Eno?
I was lucky enough to have him do the soundtrack for my previous film Rams, about the German designer Dieter Rams. It was in 2017 when I was working with Brian, and I was just asking him, “Well, why isn’t there a documentary about you? Why isn’t there a career-spanning, epic documentary about you, Brian?” And he was like, “Ah, I hate documentaries. I’ve turned down so many people. I hate bio documentaries. And it’s always one person’s version of another person’s story, and I didn’t want to be someone else’s story.”
And around that same time, I was having these thoughts about, well, why can’t showing a film be more performative? Why does it have to be this static thing every time? I was working with my friend Brendan Dawes, who’s an amazing digital artist and coder, trying to experiment with what a generative film could be. Could you have a film that was made in software, dynamically, that was different every time but still had a storyline and felt like any of my other films, except that I would be surprised every time it played, too, just like the audience?
We were experimenting with that and very quickly realized, well, Brian would be the perfect subject for this approach. We showed him a very early demo of the generative software system that we created, and he loved it. He was just like, “This is exactly what I want to do.”
You get the textures of his reluctance in the scene when he’s going through the notebooks.
Totally. That happened several times. What is in the film from that notebook session is a very tame version of that annoyance because I think he got more annoyed.
That’s always been his thing. He’s not nostalgic. He doesn’t want to think about the past. He wants to just keep looking forward. And I think he’s always felt that dwelling on his past work just puts him in a creative rut, and he just wants to keep focusing on what’s next. For 50 years, people have been asking him about David Bowie, the Talking Heads, and Roxy Music. He is tired of talking about it and has done so much since then.
I wanted the film to be about creativity and his creative process, and about learning from that. In almost any piece of the film, there is some kind of creative lesson. In the majority of the footage, even if he’s talking about Roxy or synthesizers or anything else, there is some grain of creative inspiration.
You were saying he’s sort of reluctant to talk about the Bowie days, but you do get it out of him.
He gets around to talking about it, but you can’t just go, point-blank, like, “So what was it like in Berlin with Bowie in ’75?” He’ll be like, “Next question.” He just won’t do it. We talked for hours and hours and hours every day about this stuff, so you’re not seeing the two hours that we were talking before that led to the Bowie stuff. So that’s part of the documentary filmmaking process. Obviously, there’s so much that you don’t see.
But the one thing that’s cool with this generative approach is you can put a lot of things in there that you might not see if you watch the film three, four, or five times. In a way, it’s kind of like the cutting room floor gets to still be in there, but maybe it doesn’t have as much priority as other scenes or other footage that might come up. But it is an interesting way to approach a large amount of footage and present it in a concise way each time.
We could make a 10-hour series about Brian, and we still wouldn’t be scratching the surface of everything he’s done. So, again, this is a way that we can sort of do that but also continue to add things to the system. I just added a bunch of footage this past week that’s going into the Film Forum week two runs, which has never been in the system before. So, it’s like it doesn’t ever have to be finished. We can keep adding things to it and increasing the variety and seeing what the juxtapositions are and just keep evolving it.
So, it’s like a living document in a way.
Exactly. Again, why do films have to be these static, fixed things? Why can’t they be fluid storytelling structures that you could keep adding things to and keep revising? It’s always been a constraint of the medium, but now that everything is digital, there’s no physical media that we’re dealing with in film. So why are we still held by the same constraints as 130 years ago, when the medium was born?
I’m curious how this system works. What kind of software are you using? How does it get structured or compiled? It seems modular.
The system is bespoke. It’s a proprietary system that Brendan and I have been working on for almost five years. It’s interesting how the technology as a whole — generative software and AI — has continued to evolve pretty radically over that same period.
We have a patent pending on the system, and we just launched a startup called Anamorph that is basically exploring this idea further with other filmmakers and studios and streamers. We’re having a lot of conversations about, Okay, well, what else could we do here? What could a generative fiction film be? Could you have a Marvel film that’s different every time that it screens? What are the technical but also creative ideas around this technology?
Eno can continue to evolve, and we will keep evolving the software, too. The versions that we showed at Sundance six months ago and the versions that we’ll show at Film Forum have improvements over that first gen, subtle in some ways, bigger in others. We get to keep digging into the footage and bringing new things into it, but we also get to keep changing the software. And I don’t know what the film will look like a year from now, or what the streaming versions of it will be.
Does the software have a name?
We call it Brain One, which is an anagram for Brian Eno.
We also collaborated with Teenage Engineering to build this generative film machine, also called Brain One, that we use when we create the film live in the theater. It’s this beautiful aluminum box with 35-millimeter film reels, abstracted film reels moving, and all this other cool stuff. And all the functionality of the software is mapped to hardware controls.
So, it’s like… DJing a movie?
The system can make the film in real time, but I’m not really DJing it or anything. I’m kind of overseeing what’s happening with the software. I’m there as sort of a safety net and also doing some audio mixing while it’s happening. I can make all kinds of interventions, but a big part of this whole approach is that it’s not about me, the filmmaker. It’s not about what I think is the best version or my subjective take on what the film should be.
It seems like, though, you could do it in a way where you’re reacting to an audience — if they like footage of Roxy Music, give them more of that. But do you think that’s a powerful part, or do you actually think the randomness is the more interesting part?
I think the randomness is the more interesting part because, for me, I’m learning things. Maybe I haven’t seen two pieces of footage back to back before, and I’m making new connections about Brian.
Here’s also the thing: all this stuff is possible, and it could be completely interactive. I could just talk to the audience beforehand, like, “What do you guys want to see? Do you want to see more music, more talking, more ideas?” You could skew it any way you want it to or let the audience skew it. But in these first generations of the film, this is how we’re executing it.
When you say the word “generative,” the next word people think of now is “AI.” I feel like we used to think of “art.” How deliberately are you meeting this AI moment, or how much was that top of mind as you were making this movie?
It kind of wasn’t top of mind. We want to make a film that’s different. We want it to feel like a cinematic documentary that I would normally make. We just wanted it to be different every time. It wasn’t about disrupting the film industry or film criticism or streaming or any of this other stuff.
Everything around OpenAI, the AI boom of the past two years, really happened around us as we were doing the project. I think the capabilities that are evolving now with AI, in general, are things that we are looking at on other platforms. Eno and Brain One feel so custom to this idea. The data set is all our material. We didn’t train the platform on other people’s documentaries. There’s not a model that was built around other people’s work. We programmed it with our knowledge as filmmakers about how to tell a cinematic story. And then the actual filmmaking part, and the creativity around what the content is, is as important as the coding and the generative software making it each time.
So, I think that a lot of times now, AI is like a land grab. Everybody’s just out there grabbing anything they can, and people sort of feel powerless. I think of Brain One and what we created for Eno as more like gardening. We’ve got our material, our landscape that we’re making around this film, but it feels very much like a closed system. We’re really just using the technology on our own stuff.
AI is a technology that you could use in a lot of different ways. Yes, there are tons of companies using it in other, maybe less ethical, ways right now. But you can also use it on your own stuff in a completely ethical way, and I think that’s our approach.
I feel like Eno is exploring this question of what creativity is and what that process looks like for different people. The way OpenAI talks about what a large language model generates is, by definition, incurious about creativity. It’s like, what if we just spit stuff out and you have no idea where it came from?
Yeah, I think that’s accurate. (Laughs)
Have you gotten any pushback at all? I just know there are people who are sensitive to anything AI-related.
People are, yeah. Until they watch the film.
It’s mostly within the filmmaking world: editors, cinematographers, and people doing this as a craft. From what they hear about it, people go in with some preconception. And then once they see the film, they just really want to talk to me about what else could happen with this and where it can go from here.
Even when I describe the film to people, I say, “It’s different every time,” and they don’t get it. And then when they see it, they’re like, “Wow, this worked great for Eno, but I can’t see it working for anything else.”
A big part of what Anamorph is going to be doing this year is making demos of different ways to use this idea in narrative films or installations. There’s just all sorts of other applications for it. But I feel like until you understand the capability, it’s hard for other filmmakers to think of creative ideas that could work with it. It’s a little bit of a chicken and egg thing.
But I think once filmmakers understand what the capabilities are, they can go, “Oh yeah, there’s a story that I actually think could work with this.” That’s the kind of stuff that we’ll be making here in the future.