A crowd gathered at the MIT Media Lab in September for a concert by musician Jordan Rudess and two collaborators. One of them, violinist and vocalist Camilla Bäckman, has performed with Rudess before. The other, an AI model informally called jam_bot, which Rudess developed with a team at MIT over the preceding months, was making its public debut as a work in progress.
Throughout the show, Rudess and Bäckman exchanged the signals and smiles of seasoned musicians finding a groove together. Rudess's interactions with the jam_bot suggested a different, unfamiliar kind of exchange. During a Bach-inspired duet, Rudess alternated between playing a few bars and allowing the AI to continue the music in a similar baroque style. Each time the model took its turn, a range of expressions crossed Rudess's face: bewilderment, concentration, curiosity. At the end of the piece, Rudess admitted to the audience, “That's a combination of a lot of fun and a real challenge.”
Rudess is an acclaimed keyboardist (the best of all time, according to one Music Radar magazine poll) known for his work with the Grammy-winning, platinum-selling progressive metal band Dream Theater, which embarks this fall on a 40th-anniversary tour. He is also a soloist whose latest album, “Permission to Fly,” was released on September 6; an educator who shares his skills through detailed online tutorials; and the founder of the software company Wizdom Music. His work combines a rigorous classical foundation (he began piano studies at The Juilliard School at age 9) with a genius for improvisation and an appetite for experimentation.
Last spring, Rudess became a visiting artist at MIT's Center for Art, Science and Technology (CAST), collaborating with the MIT Media Lab's Responsive Environments research group to create new AI-powered music technology. Rudess's main collaborators in the project are Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI (informed by his own studies in classical piano), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light-based, and time-based media. Overseeing the project is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan. Paradiso came to the Media Lab in 1994 with a background in physics and engineering and a sideline designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of investigating musical frontiers through novel user interfaces, sensor networks, and unconventional data sets.
The researchers set out to develop a machine-learning model that would channel Rudess's distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with MIT music technology professor Eran Egozy, they articulate their vision of what they call “symbiotic virtuosity”: human and computer duetting in real time, learning from each duet they perform together, and creating new music worthy of performance in front of a live audience.
Rudess provided the data with which Blanchard trained the AI model. Rudess also provided ongoing testing and feedback, as Naseck experimented with ways to visualize the technology for the audience.
“Audiences are used to seeing lighting, graphics, and stage elements at many concerts, so we needed a platform that would allow the AI to build its own relationship with the audience,” says Naseck. In early demos, this took the form of a sculptural installation with lighting that changed each time the AI changed chords. During the September 21 concert, a grid of petal-shaped panels mounted behind Rudess came to life through choreography based on the AI model's activity and upcoming musical output.
“If you see jazz musicians making eye contact and nodding their heads, the audience anticipates what is going to happen,” Naseck says. “The AI effectively generates scores and then plays them. How do we show what's coming next and communicate it?”
Naseck designed and programmed the structure from scratch at the Media Lab with help from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine-learning model developed by visiting student Madhav Lavakare that maps music to points moving in space. With the ability to rotate and tilt its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI's contributions during the concert from those of the human performers, while also conveying the emotion and energy of its output: swaying gently as Rudess took the lead, for example, or curling and unfurling like a flower as the AI model generated majestic chords for an improvised adagio. The latter was one of Naseck's favorite moments of the show.
“In the end, Jordan and Camilla left the stage and allowed the AI to fully explore its own direction,” he recalls. “The sculpture made this moment very powerful: it allowed the stage to remain animated and intensified the grandiose nature of the chords the AI was playing. The audience was clearly captivated by this part, sitting on the edge of their seats.”
“The goal is to create a musical visual experience,” says Rudess, “to show what's possible and to up the game.”
Musical futures
As a starting point for his model, Blanchard used a music transformer, an open-source neural network architecture developed by MIT assistant professor Anna Huang SM '08, who joined the MIT faculty in September.
“Music transformers work in a similar way to large language models,” explains Blanchard. “In the same way that ChatGPT would generate the next most likely word, the model we have would predict the next most likely notes.”
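To make the analogy concrete, here is a minimal, hypothetical sketch (in Python, not the team's actual code) of autoregressive next-note prediction: a stand-in `model` scores a small toy vocabulary of note/event tokens, and the next event is sampled from that distribution, one token at a time.

```python
# Minimal sketch of next-note prediction, analogous to next-word prediction in
# an LLM. The `model` function is a stand-in for a trained Music Transformer:
# a real model would condition on the whole musical context; here it simply
# returns random scores over a toy vocabulary.
import numpy as np

VOCAB = ["C4", "E4", "G4", "B4", "rest"]  # hypothetical note/event tokens

def model(context):
    return np.random.randn(len(VOCAB))    # placeholder logits, one per token

def sample_next(context, temperature=1.0):
    logits = model(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over the vocabulary
    return str(np.random.choice(VOCAB, p=probs))

context = ["C4", "E4"]                    # tokens for what the human just played
for _ in range(8):                        # generate the next eight events
    context.append(sample_next(context))
print(context)
```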
Blanchard fine-tuned the model on Rudess's own playing of musical elements, from bass lines to chords to melodies, variations of which Rudess recorded in his New York studio. Along the way, Blanchard made sure the AI was agile enough to respond in real time to Rudess's improvisations.
“We reframed the project,” Blanchard says, “in terms of musical futures that the model was hypothesizing and that would only be realized in the moment, based on what Jordan decided.”
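One way to picture these “musical futures” (a simplified reading, not the published system) is as a generate-ahead loop: the model speculatively produces candidate continuations while the human is still playing, and only the future that matches what the performer actually decides gets realized; the rest are discarded.

```python
# Hedged sketch of "musical futures": speculatively generate candidate
# continuations ahead of time, then realize only the one matching the live
# performer's decision. Illustrative only; names and structure are assumptions.
import random

def generate_future(context, mode):
    # Stand-in for the fine-tuned model producing a short continuation in a
    # given mode (e.g., "chords" or "lead"); returns toy event tokens.
    return [f"{mode}_{i}" for i in range(4)]

def hypothesize_futures(context, modes=("chords", "lead")):
    # Pre-compute one candidate future per mode while the human is playing.
    return {mode: generate_future(context, mode) for mode in modes}

def realize(futures, chosen_mode):
    # Only the future matching the performer's actual choice is played;
    # the others are thrown away and fresh futures are hypothesized.
    return futures.get(chosen_mode, [])

context = ["C4", "E4", "G4"]
futures = hypothesize_futures(context)
jordans_choice = random.choice(["chords", "lead"])   # the human decides
print(jordans_choice, realize(futures, jordans_choice))
```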
As Rudess says: “How can the AI respond? How can I have a dialogue with it? That's the cutting-edge part of what we're doing.”
Another priority emerged: “In the field of generative AI and music, you hear about startups like Suno or Udio that can generate music from text prompts. They are very interesting, but they lack controllability,” says Blanchard. “It was important for Jordan to be able to anticipate what was going to happen. If he could see that the AI was going to make a decision he didn't want, he could regenerate the output or hit a kill switch to take back control.”
In addition to giving Rudess a screen that previews the model's musical decisions, Blanchard built in different modalities that the musician can activate while playing: prompting the AI to generate chords or lead melodies, for example, or initiating a call-and-response pattern.
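A rough sketch of that kind of performer-facing control surface might look like the following, assuming a simple mode switch, a preview of the model's pending output, and a kill switch (all names here are illustrative, not the project's actual interface).

```python
# Sketch of performer-facing controls: switchable generation modes, a preview
# of what the model intends to play next, and a kill switch that hands control
# back to the human. Purely illustrative; not the project's actual code.
from dataclasses import dataclass, field

@dataclass
class JamController:
    mode: str = "chords"              # e.g., "chords", "lead", "call_response"
    ai_enabled: bool = True
    preview: list = field(default_factory=list)

    def set_mode(self, mode):
        self.mode = mode              # performer picks what the AI should do

    def kill_switch(self):
        self.ai_enabled = False       # silence the model immediately
        self.preview.clear()

    def next_events(self, generate):
        if not self.ai_enabled:
            return []
        self.preview = generate(self.mode)   # shown on-screen before it sounds
        return self.preview

controller = JamController()
events = controller.next_events(lambda mode: [f"{mode}_event_{i}" for i in range(4)])
print("previewed:", events)
controller.kill_switch()              # performer takes back control
print("after kill switch:", controller.next_events(lambda mode: ["anything"]))
```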
“Jordan is the brain of everything that is happening,” he says.
What would Jordan do?
Although the residency has concluded, the collaborators see many avenues for continuing the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with the installation, through features like capacitive sensing. “We hope that in the future we can work with more subtle movements and gestures,” says Naseck.
While the MIT collaboration focused on how Rudess can use the tool to enhance his own performances, it's easy to imagine other applications. Paradiso recalls an early encounter with the technology: “I played a chord sequence and Jordan's model generated the leads. It was like having a musical ‘bee’ of Jordan Rudess buzzing around the melodic foundation I was laying down, doing something like Jordan would do, but subject to the simple progression I was playing,” he recalls, his face reflecting the delight he felt in the moment. “You'll see AI plugins for your favorite musician that you can incorporate into your own compositions, with a few knobs that let you control the details,” he says. “That's the kind of world we're opening up with this.”
Rudess is also interested in exploring educational uses. Because the samples he recorded to train the model were similar to ear-training exercises he has used with students, he believes the model itself could one day be used for teaching. “This work goes beyond just entertainment value,” he says.
The foray into artificial intelligence is a natural progression of Rudess's interest in music technology. “This is the next step,” he believes. When he discusses the work with other musicians, however, his enthusiasm for AI is often met with resistance. “I can feel sympathy or compassion for a musician who feels threatened; I totally understand that,” he admits. “But my mission is to be one of the people who pushes this technology toward positive things.”
“At the Media Lab, it's very important to think about how AI and humans come together for the benefit of everyone,” says Paradiso. “How is AI going to help us all? Ideally, it will do what so many technologies have done: take us to another perspective in which we are more capable.”
“Jordan is at the helm,” adds Paradiso. “Once you get established with it, people will follow you.”
Playing with MIT
The Media Lab first landed on Rudess's radar before his residency, when he wanted to try out the knitted keyboard created by another Responsive Environments member, textile researcher Irmandy Wicaksono PhD '24. From that moment on, “it's been a discovery for me to learn about the interesting things that are happening at MIT in the world of music,” Rudess says.
During two visits to Cambridge last spring (accompanied by his wife, theater and music producer Danielle Rudess), Rudess reviewed final projects in Paradiso's course on electronic music controllers, whose syllabus included videos of his own past performances. He brought a new gesture-based synthesizer called Osmose to a class on interactive music systems taught by Egozy, whose credits include co-creating the video game “Guitar Hero.” Rudess also gave tips on improvisation in a composition class; played GeoShred, a touch-screen musical instrument he co-created with researchers at Stanford University, with student musicians in the MIT Laptop Ensemble and Arts Scholars programs; and experienced immersive audio in the MIT Spatial Sound Lab. During his most recent trip to campus in September, he taught a master class for pianists in MIT's Emerson/Harris Program, which provides a total of 67 scholars and fellows with conservatory-level instruction in music.
“I get a kind of rush every time I come to campus,” Rudess says. “I feel like, wow, all my musical ideas and inspiration and interests have come together in this really cool way.”