Last week, AI startup Luma published a series of videos created using its new Dream Machine video generation tool, which the company describes as a "highly scalable and efficient transformer model trained directly on videos."
The only problem? At approximately 57 seconds in, the Dream Machine-generated trailer for Monster Camp (x.com/LumaLabsAI/status/1800921393321934915), an animated story about furry creatures traveling to a sleepaway camp, features Mike Wazowski from the Pixar film Monsters, Inc., slightly distorted by the AI but still recognizable. Many people noticed that several characters and the overall aesthetic looked borrowed from the franchise, and questions quickly began to pour in.
Was the tool prompted to produce Pixar-style animation? Was it trained on material that includes Disney studio work? That general lack of transparency is one of the biggest concerns about this type of model, as Dream Machine joins OpenAI's Sora and Google's VideoPoet and Veo as one of the many text-to-video AI tools shown off in recent months.
Luma touted its Dream Machine model as the future of cinema, featuring "high-quality, realistic shots" created by simply typing prompts into a text box. Watching videos of cars racing down a dissolving road or a strangely narrated sci-fi short film, you can understand why optimistic fans of this technology were quick to call it a novel innovation.
Currently, Luma encourages people to sign up and play with Dream Machine for free, but the company also offers "Pro" and other paid tiers that charge users for more features (x.com/lumalabsai/status/1802681173682032983?s=46). We reached out to Luma for comment on where it sources the footage Dream Machine trains on, but did not hear back by the time of publication.
Disney hasn't publicly commented on what Luma appears to be doing, and the company may not even have noticed. But at a time when people have been pushing for more transparency about the datasets that power AI tools like the ones Luma is building, things like the Monster Camp trailer make it difficult not to view the generative AI ecosystem as prone to plagiarism.