Can generative AI designed for the enterprise (e.g., AI that auto-completes reports, spreadsheet formulas, and so on) ever be interoperable? Along with a coalition of organizations including Cloudera and Intel, the Linux Foundation, the nonprofit that supports and maintains a growing number of open source efforts, aims to find out.
The Linux Foundation today announced the launch of the Open Platform for Enterprise AI (OPEA), a project to encourage the development of open, multi-vendor and composable (i.e., modular) generative AI systems. Under the purview of the Linux Foundation's LF AI & Data organization, which focuses on AI- and data-related platform initiatives, OPEA's goal will be to pave the way for "hardened," "scalable" generative AI systems that "leverage the best open source innovation from across the ecosystem," LF AI & Data executive director Ibrahim Haddad said in a press release.
"OPEA will unlock new possibilities in AI by creating a detailed, composable framework that stands at the forefront of technology stacks," Haddad said. "This initiative is a testament to our mission to drive open source innovation and collaboration within the AI and data communities under an open and neutral governance model."
In addition to Cloudera and Intel, OPEA, one of the Linux Foundation's Sandbox projects, an incubation program of sorts, counts among its members enterprise heavyweights such as IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB and VMware.
So what exactly might these members build together? Haddad hints at a few possibilities, such as "optimized" support for AI toolchains and compilers, which allow AI workloads to run across different hardware components, as well as "heterogeneous" pipelines for retrieval-augmented generation (RAG).
RAG is becoming increasingly popular in enterprise applications of generative AI, and it's not hard to see why. Most generative AI models' answers and actions are limited to the data they were trained on. But with RAG, a model's knowledge base can be extended to information beyond the original training data. RAG models reference this outside information, which can take the form of proprietary company data, a public database or some combination of the two, before generating a response or performing a task.
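The retrieve-then-augment flow described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the word-overlap scorer substitutes for a real embedding index, the sample knowledge base is invented, and the final prompt would be handed to an actual LLM rather than printed. It is not code from OPEA or any member company.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query.
    A real RAG system would use vector embeddings and a similarity index."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Augment the prompt with retrieved context before generation."""
    context_block = "\n".join(f"- {doc}" for doc in context)
    return f"Answer using this context:\n{context_block}\n\nQuestion: {query}"


# Hypothetical proprietary knowledge base: information the model
# was never trained on, consulted at query time.
kb = [
    "Q3 revenue grew 12% year over year.",
    "The cafeteria closes at 3pm on Fridays.",
    "OPEA standardizes components for enterprise RAG pipelines.",
]

query = "What does OPEA standardize?"
prompt = build_prompt(query, retrieve(query, kb))
```

The key design point is the same one OPEA is targeting: the retriever, the knowledge store and the generator are separate, swappable components, which is exactly where interoperability standards would matter.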
Intel offered a few more details in its own press release:
Enterprises are challenged with a do-it-yourself RAG approach because there are no de facto standards across components that allow them to choose and deploy RAG solutions that are open and interoperable and that help them quickly get to market. OPEA intends to address these issues by collaborating with the industry to standardize components, including frameworks, architecture blueprints and reference solutions.
Evaluation will also be a key part of what OPEA addresses.
In its GitHub repository, OPEA proposes a rubric for grading generative AI systems along four axes: performance, features, reliability and "enterprise-grade" readiness. Performance, as OPEA defines it, refers to "black-box" benchmarks on real-world use cases. Features is an assessment of a system's interoperability, deployment options and ease of use. Reliability looks at an AI model's ability to guarantee "robustness" and quality. And enterprise readiness focuses on the requirements to get a system up and running without major issues.
Rachel Roumeliotis, director of open source strategy at Intel, says that OPEA will work with the open source community to offer tests based on the rubric, and will provide assessments and grading of generative AI deployments on request.
OPEA's other endeavors are a bit up in the air at the moment. But Haddad floated the potential of developing open models along the lines of Meta's expanding Llama family and Databricks' DBRX. Toward that end, Intel has already contributed reference implementations to the OPEA repository for a generative AI-powered chatbot, document summarizer and code generator optimized for its Xeon 6 and Gaudi 2 hardware.
Now, OPEA's members are very clearly invested (and self-interested, for that matter) in building tooling for enterprise generative AI. Cloudera recently launched partnerships to create what it's pitching as an "AI ecosystem" in the cloud. Domino offers a suite of apps for building and auditing enterprise generative AI. And VMware, oriented toward the infrastructure side of enterprise AI, last August rolled out new "private AI" compute products.
The question is whether these vendors will, per OPEA's vision, actually work together to build mutually compatible AI tools.
There's an obvious benefit to doing so. Customers will happily draw on multiple vendors depending on their needs, resources and budgets. But history has shown that it's all too easy to drift toward vendor lock-in. Let's hope that's not the outcome here.