The rapid advancements in artificial intelligence and machine learning (AI/ML) have made these technologies a transformative force across industries. According to a McKinsey study, generative AI is projected to deliver over $400 billion (roughly 5% of industry revenue) in productivity benefits across the financial services industry (FSI). According to Gartner, more than 80% of enterprises will have used generative AI APIs or deployed generative AI-enabled applications by 2026. At Amazon, we believe innovation (rethink and reinvent) drives improved customer experiences and efficient processes, leading to increased productivity. Generative AI is a catalyst for business transformation, making it imperative for FSI organizations to determine where generative AI’s current capabilities could deliver the biggest value for FSI customers.
Organizations across industries face numerous challenges implementing generative AI, such as the lack of a clear business case, difficulty scaling beyond proof of concept, lack of governance, and availability of the right talent. An effective approach that addresses a wide range of observed issues is the establishment of an AI/ML center of excellence (CoE). An AI/ML CoE is a dedicated unit, either centralized or federated, that coordinates and oversees all AI/ML initiatives within an organization, bridging business strategy to value delivery. As observed by Harvard Business Review, an AI/ML CoE is already established in 37% of large companies in the US. For organizations to be successful in their generative AI journey, there is growing importance for coordinated collaboration across lines of business and technical teams.
This post, along with the Cloud Adoption Framework for AI/ML and the Well-Architected Machine Learning Lens, serves as a guide for implementing an effective AI/ML CoE with the objective of capturing generative AI’s possibilities. This includes guiding practitioners to define the CoE mission, form a leadership team, integrate ethical guidelines, qualify and prioritize use cases, upskill teams, implement governance, create infrastructure, embed security, and enable operational excellence.
What is an AI/ML CoE?
The AI/ML CoE is responsible for partnering with lines of business and end-users in identifying AI/ML use cases aligned to business and product strategy, recognizing common reusable patterns from different business units (BUs), implementing a company-wide AI/ML vision, and deploying an AI/ML platform and workloads on the most appropriate combination of computing hardware and software. The CoE team combines business acumen with deep technical AI/ML proficiency to develop and implement interoperable, scalable solutions throughout the organization. They establish and enforce best practices encompassing design, development, processes, and governance operations, thereby mitigating risks and making sure robust business, technical, and governance frameworks are consistently upheld. For ease of consumption, standardization, scalability, and value delivery, the outputs of an AI/ML CoE can be of two types: guidance, such as published best practices, lessons learned, and tutorials, and capabilities, such as people skills, tools, technical solutions, and reusable templates.
The following are benefits of establishing an ai/ML CoE:
- Faster time to market through a clear path to production
- Maximized return on investment through delivering on the promise of generative AI business outcomes
- Optimized risk management
- Structured upskilling of teams
- Sustainable scaling with standardized workflows and tooling
- Better support and prioritization of innovation initiatives
The following figure illustrates the key components for establishing an effective AI/ML CoE.
In the following sections, we discuss each numbered component in detail.
1. Sponsorship and mission
The foundational step in setting up an AI/ML CoE is securing sponsorship from senior leadership, establishing an empowered leadership structure, and defining the CoE’s mission and objectives.
Establish sponsorship
Establish clear leadership roles and structure to provide decision-making processes, accountability, and adherence to ethical and legal standards:
- Executive sponsorship – Secure support from senior leadership to champion AI/ML initiatives
- Steering committee – Form a committee of key stakeholders to oversee the AI/ML CoE’s activities and strategic direction
- Ethics board – Create a board to address ethical and responsible AI considerations in AI/ML development and deployment
Define the mission
Making the mission customer- or product-focused and aligned with the organization’s overall strategic goals helps outline the ai/ML CoE’s role in achieving them. This mission, usually set by the executive sponsor in alignment with the heads of business units, serves as a guiding principle for all CoE activities, and contains the following:
- Mission statement – Clearly articulate the purpose of the CoE in advancing customer and product outcomes by applying AI/ML technologies
- Strategic objectives – Outline tangible and measurable AI/ML goals that align with the organization’s overall strategic goals
- Value proposition – Quantify the expected business value using key performance indicators (KPIs) such as cost savings, revenue gains, user satisfaction, time savings, and time-to-market
2. People
According to a Gartner report, 53% of business, functional, and technical teams rate their technical acumen on generative AI as “Intermediate,” and 64% of senior leadership rate their skill as “Novice.” By developing customized solutions tailored to the specific and evolving needs of the business, the CoE can foster a culture of continuous growth and learning and cultivate a deep understanding of AI and ML technologies, including generative AI skill development and enablement.
Training and enablement
To help educate employees on AI/ML concepts, tools, and techniques, the AI/ML CoE can develop training programs, workshops, certification programs, and hackathons. These programs can be tailored to different levels of expertise and designed to help employees understand how to use AI/ML to solve business problems. Additionally, the CoE could provide a mentoring platform to employees who are interested in further enhancing their AI/ML skills, develop certification programs to recognize employees who have achieved a certain level of proficiency in AI/ML, and provide ongoing training to keep the team updated with the latest technologies and methodologies.
Dream team
Cross-functional engagement is essential to achieve well-rounded AI/ML solutions. Having a multidisciplinary AI/ML CoE that combines industry, business, technical, compliance, and operational expertise helps drive innovation. It harnesses the full, 360-degree potential of AI in achieving a company’s strategic business goals. Such a diverse team with AI/ML expertise may include roles such as:
- Product strategists – Make sure all products, features, and experiments are aligned with the overall transformation strategy
- AI researchers – Employ experts in the field to drive innovation and explore cutting-edge techniques such as generative AI
- Data scientists and ML engineers – Develop capabilities for data preprocessing, model training, and validation
- Domain experts – Collaborate with professionals from business units who understand the specific applications and business needs
- Operations – Develop KPIs, demonstrate value delivery, and manage machine learning operations (MLOps) pipelines
- Project managers – Appoint project managers to implement projects efficiently
Knowledge sharing
By fostering collaboration among the CoE, internal stakeholders, business unit teams, and external stakeholders, you can enable knowledge sharing and cross-disciplinary teamwork. Encourage knowledge sharing, establish a knowledge repository, and facilitate cross-functional projects to maximize the impact of AI/ML initiatives. Some example key actions to foster knowledge sharing are:
- Cross-functional collaborations – Promote teamwork between experts in generative AI and business unit domain-specific professionals to innovate on cross-functional use cases
- Strategic partnerships – Investigate partnerships with research institutions, universities, and industry leaders specializing in generative AI to take advantage of their collective expertise and insights
3. Governance
Establish governance that enables the organization to scale value delivery from AI/ML initiatives while managing risk, compliance, and security. Additionally, pay special attention to the changing nature of the risks and costs associated with developing and scaling AI.
Responsible AI
Organizations can navigate potential ethical dilemmas associated with generative AI by incorporating considerations such as fairness, explainability, privacy and security, robustness, governance, and transparency. To provide ethical integrity, an AI/ML CoE helps integrate robust guidelines and safeguards across the AI/ML lifecycle in collaboration with stakeholders. By taking a proactive approach, the CoE not only provides ethical compliance but also builds trust, enhances accountability, and mitigates potential risks such as veracity, toxicity, data misuse, and intellectual property concerns.
Standards and best practices
Continuing its stride towards excellence, the CoE helps define common standards, industry-leading practices, and guidelines. These encompass a holistic approach, covering data governance, model development, ethical deployment, and ongoing monitoring, reinforcing the organization’s commitment to responsible and ethical AI/ML practices. Examples of such standards include:
- Development framework – Establishing standardized frameworks for AI development, deployment, and governance provides consistency across projects, making it easier to adopt and share best practices.
- Repositories – Centralized code and model repositories facilitate the sharing of best practices and industry standard solutions in coding standards, enabling teams to adhere to consistent coding conventions for better collaboration, reusability, and maintainability.
- Centralized knowledge hub – A central repository housing datasets and research discoveries to serve as a comprehensive knowledge center.
- Platform – A central platform such as Amazon SageMaker for creation, training, and deployment. It helps manage and scale central policies and standards.
- Benchmarking and metrics – Defining standardized metrics and benchmarking to measure and compare the performance of AI models, and the business value derived.
Data governance
Data governance is a crucial function of an AI/ML CoE: making sure data is collected, used, and shared in a responsible and trustworthy manner. Data governance is essential for AI applications, because these applications often use large amounts of data, and the quality and integrity of that data are critical to the accuracy and fairness of AI-powered decisions. The AI/ML CoE helps define best practices and guidelines for data preprocessing, model development, training, validation, and deployment. The CoE should make sure that data is accurate, complete, and up-to-date; that data is protected from unauthorized access, use, or disclosure; and that data governance policies demonstrate adherence to regulatory and internal compliance.
Model oversight
Model governance is a framework that determines how a company implements policies, controls access to models, and tracks their activity. The CoE helps make sure that models are developed and deployed in a safe, trustworthy, and ethical fashion. Additionally, it can confirm that model governance policies demonstrate the organization’s commitment to transparency, fostering trust with customers, partners, and regulators. It can also provide safeguards customized to your application requirements and make sure responsible AI policies are implemented using services such as Guardrails for Amazon Bedrock.
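As a concrete illustration, the following is a minimal sketch of creating a guardrail with the boto3 SDK. The guardrail name, denied-topic definition, and messaging strings are hypothetical placeholders, and the exact policy fields should be verified against the current Amazon Bedrock API.

```python
import boto3

bedrock = boto3.client("bedrock")

# Create a guardrail that denies a hypothetical off-limits topic and
# filters harmful content on both model inputs and outputs.
response = bedrock.create_guardrail(
    name="fsi-advice-guardrail",  # illustrative name
    description="Blocks unsupported financial advice in a demo assistant",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",  # hypothetical denied topic
                "definition": "Recommendations about specific securities or portfolios.",
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(response["guardrailId"], response["version"])
```

A guardrail created this way can then be referenced by its ID and version when invoking models, so the same responsible AI policy is applied consistently across applications.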
Value delivery
Manage the AI/ML initiative’s return on investment, platform and services expenses, efficient and effective use of resources, and ongoing optimization. This requires monitoring and analyzing use case-based value KPIs and expenditures related to data storage, model training, and inference. This includes assessing the performance of various AI models and algorithms to identify cost-effective, resource-optimal solutions, such as using AWS Inferentia for inference and AWS Trainium for training. Setting KPIs and metrics is pivotal to gauge effectiveness. Some example KPIs are:
- Return on investment (ROI) – Evaluating financial returns against investments justifies resource allocation for AI projects (see the sketch after this list)
- Business impact – Measuring tangible business outcomes like revenue uplift or enhanced customer experiences validates AI’s value
- Project delivery time – Tracking time from project initiation to completion showcases operational efficiency and responsiveness
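To make the ROI KPI concrete, the following minimal sketch computes ROI from estimated benefits and costs; the first-year figures are hypothetical and would come from the CoE’s own value tracking.

```python
def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment expressed as a fraction of cost."""
    return (total_benefit - total_cost) / total_cost

# Hypothetical first-year figures for a generative AI use case (USD).
benefit = 1_200_000  # e.g., productivity savings plus incremental revenue
cost = 400_000       # e.g., platform, training, and inference spend
print(f"ROI: {roi(benefit, cost):.0%}")  # prints "ROI: 200%"
```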
4. Platform
The AI/ML CoE, in collaboration with the business and technology teams, can help build an enterprise-grade and scalable AI platform, enabling organizations to operate AI-enabled services and products across business units. It can also help develop custom AI solutions and help practitioners adapt to change in AI/ML development.
Data and engineering architecture
The AI/ML CoE helps set up the right data flows and engineering infrastructure, in collaboration with the technology teams, to accelerate the adoption and scaling of AI-based solutions:
- High-performance computing resources – Powerful GPUs, such as Amazon Elastic Compute Cloud (Amazon EC2) instances powered by the latest NVIDIA H100 Tensor Core GPUs, are essential for training complex models.
- Data storage and management – Implement robust data storage, processing, and management systems such as AWS Glue and Amazon OpenSearch Service.
- Platform – Cloud platforms can provide flexibility and scalability for AI/ML projects through services such as Amazon SageMaker, which provides end-to-end ML capability across generative AI experimentation, data prep, model training, deployment, and monitoring, helping accelerate generative AI workloads from experimentation to production. Amazon Bedrock is an easier way to build and scale generative AI applications with foundation models (FMs). As a fully managed service, it offers a choice of high-performing FMs from leading AI companies including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon (see the invocation sketch after this list).
- Development tools and frameworks – Use industry-standard AI/ML frameworks and tools such as Amazon CodeWhisperer, Apache MXNet, PyTorch, and TensorFlow.
- Version control and collaboration tools – Git repositories, project management tools, and collaboration platforms such as AWS CodePipeline and Amazon CodeGuru can facilitate teamwork.
- Generative AI frameworks – Utilize state-of-the-art foundation models, tools, agents, knowledge bases, and guardrails available on Amazon Bedrock.
- Experimentation platforms – Deploy platforms for experimentation and model development, allowing for reproducibility and collaboration, such as Amazon SageMaker JumpStart.
- Documentation – Emphasize the documentation of processes, workflows, and best practices within the platform to facilitate knowledge sharing among practitioners and teams.
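As referenced in the platform list above, the following is a minimal sketch of invoking a foundation model on Amazon Bedrock with boto3. The model ID and prompt are illustrative, and model access must be enabled in your account and Region.

```python
import json

import boto3

runtime = boto3.client("bedrock-runtime")

# Invoke an Anthropic Claude model through Bedrock (illustrative model ID).
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {
            "role": "user",
            "content": "Summarize the benefits of an AI/ML CoE in two sentences.",
        }
    ],
}
response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```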
Lifecycle management
Within the AI/ML CoE, the emphasis on scalability, availability, reliability, performance, and resilience is fundamental to the success and adaptability of AI/ML initiatives. Implementing and operationalizing a lifecycle management practice such as MLOps can help automate deployment and monitoring, resulting in improved reliability, time to market, and observability. Using tools like Amazon SageMaker Pipelines for workflow management, Amazon SageMaker Experiments for managing experiments, and Amazon Elastic Kubernetes Service (Amazon EKS) for container orchestration enables adaptable deployment and management of AI/ML applications, fostering scalability and portability across various environments. Similarly, employing serverless architectures such as AWS Lambda empowers automatic scaling based on demand, reducing operational complexity while offering flexibility in resource allocation.
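The following is a minimal sketch of defining and starting a SageMaker pipeline with a single processing step, using the SageMaker Python SDK; the execution role ARN and the preprocess.py script are hypothetical placeholders.

```python
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder ARN

# A single data preprocessing step; preprocess.py is a hypothetical local script.
processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)
step = ProcessingStep(name="PreprocessData", processor=processor, code="preprocess.py")

# Register (or update) the pipeline definition, then start an execution.
pipeline = Pipeline(name="coe-demo-pipeline", steps=[step])
pipeline.upsert(role_arn=role)
execution = pipeline.start()
```

In practice, a CoE would extend this skeleton with training, evaluation, and model registration steps so that every team deploys through the same standardized, auditable workflow.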
Strategic alliances in AI services
The decision to buy or build solutions involves trade-offs. Buying offers speed and convenience by using pre-built tools, but may lack customization. On the other hand, building provides tailored solutions but demands time and resources. The balance hinges on the project scope, timeline, and long-term needs, achieving optimal alignment with organizational goals and technical requirements. The decision, ideally, can be based on a thorough assessment of the specific problem to be solved, the organization’s internal capabilities, and the area of the business targeted for growth. For example, if a business system helps establish uniqueness, build to differentiate in the market; if it supports a standard, commoditized business process, buy to save.
By partnering with third-party AI service providers, such as AWS Generative AI Competency Partners, the CoE can use their expertise and experience to accelerate the adoption and scaling of AI-based solutions. These partnerships can help the CoE stay up to date with the latest AI/ML research and trends, and can provide access to cutting-edge AI/ML tools and technologies. Additionally, third-party AI service providers can help the CoE identify new use cases for AI/ML and provide guidance on how to implement AI/ML solutions effectively.
5. Security
Emphasize, assess, and implement security and privacy controls across the organization’s data, AI/ML, and generative AI workloads. Integrate security measures across all aspects of AI/ML to identify, classify, remediate, and mitigate vulnerabilities and threats.
Holistic vigilance
Based on how your organization is using generative AI solutions, scope the security efforts, design resiliency of the workloads, and apply relevant security controls. This includes employing encryption techniques, multifactor authentication, threat detection, and regular security audits to make sure data and systems remain protected against unauthorized access and breaches. Regular vulnerability assessments and threat modeling are crucial to address emerging threats. Strategies such as model encryption, using secure environments, and continuous monitoring for anomalies can help protect against adversarial attacks and malicious misuse. To monitor models for threats, you can use tools like Amazon GuardDuty. With Amazon Bedrock, you have full control over the data you use to customize the foundation models for your generative AI applications. Data is encrypted in transit and at rest, and user inputs and model outputs are not shared with any model providers, keeping your data and applications secure and private.
End-to-end assurance
Securing the three critical components of any AI system (inputs, model, and outputs) is essential. Establishing clearly defined roles, security policies, standards, and guidelines across the lifecycle can help manage the integrity and confidentiality of the system. This includes implementing industry best practice measures and frameworks such as the NIST AI Risk Management Framework, OWASP-LLM, OWASP-ML, and MITRE ATLAS. Furthermore, evaluate and implement requirements such as Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and the European Union’s General Data Protection Regulation (GDPR). You can use tools such as Amazon Macie to discover and protect your sensitive data.
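For instance, a one-time Macie sensitive data discovery job over an S3 bucket can be started with boto3 as sketched below; the account ID and bucket name are hypothetical placeholders.

```python
import boto3

macie = boto3.client("macie2")

# One-time sensitive data discovery job over a single bucket (placeholders).
response = macie.create_classification_job(
    jobType="ONE_TIME",
    name="coe-sensitive-data-scan",
    s3JobDefinition={
        "bucketDefinitions": [
            {
                "accountId": "111122223333",
                "buckets": ["example-training-data-bucket"],
            }
        ]
    },
)
print(response["jobId"])
```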
Infrastructure (data and systems)
Given the sensitivity of the data involved, exploring and implementing access and privacy-preserving techniques is vital. This involves techniques such as least privilege access, data lineage, retaining only the data relevant to the use case, and identifying and classifying sensitive data to enable collaboration without compromising individual data privacy. It’s essential to embed these techniques within the AI/ML development lifecycle workflows, maintain a secure data and modeling environment, and stay in compliance with privacy regulations to protect sensitive information. By integrating security-focused measures into the AI/ML CoE’s strategies, the organization can better mitigate risks associated with data breaches, unauthorized access, and adversarial attacks, thereby providing integrity, confidentiality, and availability for its AI assets and sensitive information.
6. Operations
The AI/ML CoE needs to focus on optimizing the efficiency and growth potential of implementing generative AI within the organization’s framework. In this section, we discuss several key aspects aimed at driving successful integration while upholding workload performance.
Performance management
Setting KPIs and metrics is pivotal to gauge effectiveness. Regular assessment of these metrics allows you to track progress, identify trends, and foster a culture of continual improvement within the CoE. Reporting on these insights provides alignment with organizational objectives and informs decision-making processes for enhanced AI/ML practices. Solutions such as the Amazon Bedrock integration with Amazon CloudWatch help track and manage usage metrics and build customized dashboards for auditing.
An example KPI is model accuracy: assessing models against benchmarks helps provide reliable and trustworthy AI-generated outcomes.
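As a usage-tracking sketch, the following pulls daily Bedrock invocation counts from CloudWatch, assuming the AWS/Bedrock namespace publishes an Invocations metric with a ModelId dimension; the model ID is illustrative.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",  # assumed Bedrock metric namespace
    MetricName="Invocations",
    Dimensions=[
        {"Name": "ModelId", "Value": "anthropic.claude-3-sonnet-20240229-v1:0"}
    ],
    StartTime=now - timedelta(days=7),
    EndTime=now,
    Period=86400,  # one datapoint per day
    Statistics=["Sum"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), int(point["Sum"]))
```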
Incident management
AI/ML solutions need ongoing control and observation to manage any anomalous activities. This requires establishing processes and systems across the AI/ML platform, ideally automated. A standardized incident response strategy needs to be developed and implemented in alignment with the chosen monitoring solution. This includes elements such as formalized roles and responsibilities, data sources and metrics to be monitored, systems for monitoring, and response actions such as mitigation, escalation, and root cause analysis.
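As one automated building block, the sketch below creates a CloudWatch alarm that notifies an SNS topic when Bedrock server-side errors spike. The topic ARN is a placeholder, and the InvocationServerErrors metric name is an assumption to verify against the current AWS/Bedrock namespace.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Notify the on-call channel when server-side invocation errors exceed a threshold.
cloudwatch.put_metric_alarm(
    AlarmName="bedrock-server-errors",
    Namespace="AWS/Bedrock",              # assumed Bedrock metric namespace
    MetricName="InvocationServerErrors",  # assumed metric name
    Statistic="Sum",
    Period=300,                           # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ai-incidents"],  # placeholder
    TreatMissingData="notBreaching",
)
```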
Continuous improvement
Define and continually refine rigorous processes for generative AI model development, testing, and deployment. Regularly evaluate the AI/ML platform’s performance and enhance generative AI capabilities. This involves incorporating feedback loops from stakeholders and end-users and dedicating resources to exploratory research and innovation in generative AI. These practices drive continual improvement and keep the CoE at the forefront of AI innovation. Furthermore, implement generative AI initiatives seamlessly by adopting agile methodologies, maintaining comprehensive documentation, conducting regular benchmarking, and applying industry best practices.
7. Business
The AI/ML CoE helps drive business transformation by continuously identifying priority pain points and opportunities across business units. By aligning business challenges and opportunities to customized AI/ML capabilities, the CoE drives rapid development and deployment of high-value solutions. This alignment to real business needs enables step-change value creation through new products, revenue streams, productivity, optimized operations, and customer satisfaction.
Envision an AI strategy
With the objective to drive business outcomes, establish a compelling multi-year vision and strategy for how the adoption of AI/ML and generative AI techniques can transform major facets of the business. This includes quantifying the tangible value at stake from AI/ML in terms of revenues, cost savings, customer satisfaction, productivity, and other vital performance indicators over a defined strategic planning timeline, such as 3–5 years. Additionally, the CoE must secure buy-in from executives across business units by making the case for how embracing AI/ML will create competitive advantages and unlock step-change improvements in key processes or offerings.
Use case management
To identify, qualify, and prioritize the most promising AI/ML use cases, the CoE facilitates an ongoing discovery dialogue with all business units to surface their highest-priority challenges and opportunities. Each complex business issue or opportunity must be articulated by the CoE, in collaboration with business unit leaders, as a well-defined problem and opportunity statement that lends itself to an AI/ML-powered solution. For each opportunity, the CoE establishes clear success metrics tied to business KPIs and outlines the potential value impact vs. implementation complexity. A prioritized pipeline of high-potential AI/ML use cases can then be created, ranking opportunities based on expected business benefit and feasibility.
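One lightweight way to rank such a pipeline is a weighted value-vs-feasibility score, as in the sketch below; the use cases, weights, and scores are all hypothetical illustrations.

```python
# Score candidate use cases on expected business value and feasibility (1-5 scale),
# weighting value slightly higher; all figures are illustrative.
use_cases = [
    {"name": "Claims summarization", "value": 4, "feasibility": 5},
    {"name": "KYC document extraction", "value": 5, "feasibility": 3},
    {"name": "Advisor copilot", "value": 5, "feasibility": 2},
]

VALUE_WEIGHT, FEASIBILITY_WEIGHT = 0.6, 0.4

for uc in use_cases:
    uc["score"] = VALUE_WEIGHT * uc["value"] + FEASIBILITY_WEIGHT * uc["feasibility"]

# Print the prioritized pipeline, highest score first.
for rank, uc in enumerate(sorted(use_cases, key=lambda u: u["score"], reverse=True), 1):
    print(f"{rank}. {uc['name']} (score {uc['score']:.1f})")
```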
Proof of concept
Before undertaking full production development, prototype proposed solutions for high-value use cases through controlled proof of concept (PoC) projects focused on demonstrating initial viability. Rapid feedback loops during these PoC phases allow for iteration and refinement of approaches at a small scale prior to wider deployment. The CoE establishes clear success criteria for PoCs, in alignment with business unit leaders, that map to business metrics and KPIs for ultimate solution impact. Furthermore, the CoE can engage to share expertise, reusable assets, best practices, and standards.
Executive alignment
To provide full transparency, business unit executive stakeholders must be kept aligned with AI/ML initiatives through regular reporting. This way, any challenges that need to be escalated can be quickly resolved with executives who are familiar with the initiatives.
8. Legal
The legal landscape of AI/ML and generative AI is complex and evolving, presenting a myriad of challenges and implications for organizations. Issues such as data privacy, intellectual property, liability, and bias require careful consideration within the AI/ML CoE. As regulations struggle to keep pace with technological advancements, the CoE must partner with the organization’s legal team to navigate this dynamic terrain and enforce compliant, responsible development and deployment of these technologies. The evolving landscape demands that the CoE, working in collaboration with the legal team, develop comprehensive AI/ML governance policies covering the entire AI/ML lifecycle. This process involves business stakeholders in decision-making and includes regular audits and reviews of AI/ML systems to validate compliance with governance policies.
9. Procurement
The AI/ML CoE needs to work with partners, both independent software vendors (ISVs) and system integrators (SIs), to support the buy and build strategies. It needs to partner with the procurement team to develop a selection, onboarding, management, and exit framework. This includes acquiring technologies, algorithms, and datasets: sourcing reliable datasets is crucial for training ML models, and acquiring cutting-edge algorithms and generative AI tools enhances innovation. This helps accelerate the development of capabilities needed by the business. Procurement strategies must prioritize ethical considerations, data security, and ongoing vendor support to provide sustainable, scalable, and responsible AI integration.
10. Human Resources
Partner with Human Resources (HR) on AI/ML talent management and pipeline. This involves cultivating talent to understand, develop, and implement these technologies. HR can help bridge the technical and non-technical divide, fostering interdisciplinary collaboration, building a path for onboarding new talent, and training and growing them both professionally and in their skills. HR can also address ethical concerns through compliance training, upskill employees on the latest emerging technologies, and manage the impact on job roles, which is critical for continued success.
11. Regulatory and compliance
The regulatory landscape for AI/ML is rapidly evolving, with governments worldwide racing to establish governance regimes for the increasing adoption of AI applications. The AI/ML CoE needs a focused approach to stay updated, derive actions, and implement regulatory requirements such as Brazil’s General Personal Data Protection Law (LGPD), Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and the European Union’s General Data Protection Regulation (GDPR), and frameworks such as ISO 31700, ISO 29100, ISO 27701, Federal Information Processing Standards (FIPS), and the NIST Privacy Framework. In the US, regulatory actions include mitigating risks posed by the increased adoption of AI, protecting workers affected by generative AI, and providing stronger consumer protections. The EU AI Act includes new assessment and compliance requirements.
As AI regulations continue to take shape, organizations are advised to establish responsible AI as a C-level priority, set and enforce clear governance policies and processes around AI/ML, and involve diverse stakeholders in decision-making processes. The evolving regulations emphasize the need for comprehensive AI governance policies that cover the entire AI/ML lifecycle, and regular audits and reviews of AI systems to address biases, transparency, and explainability in algorithms. Adherence to standards fosters trust, mitigates risks, and promotes responsible deployment of these advanced technologies.
Conclusion
The journey to establishing a successful AI/ML center of excellence is a multifaceted endeavor that requires dedication and strategic planning, while operating with agility and a collaborative spirit. As the landscape of artificial intelligence and machine learning continues to evolve at a rapid pace, the creation of an AI/ML CoE represents a necessary step towards harnessing these technologies for transformative impact. By focusing on the key considerations, from defining a clear mission to fostering innovation and enforcing ethical governance, organizations can lay a solid foundation for AI/ML initiatives that drive value. Moreover, an AI/ML CoE is not just a hub for technological innovation; it’s a beacon for cultural change within the organization, promoting a mindset of continuous learning, ethical responsibility, and cross-functional collaboration.
Stay tuned as we continue to explore AI/ML CoE topics in our upcoming posts in this series. If you need help establishing an AI/ML Center of Excellence, please reach out to a specialist.
About the Authors
Ankush Chauhan is a Sr. Manager, Customer Solutions at AWS based in New York, US. He helps Capital Markets customers optimize their cloud journey, scale adoption, and realize the transformative value of building and inventing in the cloud. In addition, he is focused on enabling customers on their AI/ML journeys, including generative AI. Beyond work, you can find Ankush running, hiking, or watching soccer.
Ava Kong is a Generative AI Strategist at the AWS Generative AI Innovation Center, specializing in the financial services sector. Based in New York, Ava has worked closely with financial institutions on a variety of use cases, combining the latest in generative AI technology with strategic insights to enhance operational efficiency, drive business outcomes, and demonstrate the broad and impactful application of AI technologies.
Vikram Elango is a Sr. AI/ML Specialist Solutions Architect at AWS, based in Virginia, US. He is currently focused on generative AI, LLMs, prompt engineering, large model inference optimization, and scaling ML across enterprises. Vikram helps financial and insurance industry customers with design and thought leadership to build and deploy machine learning applications at scale. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.
Rifat Jafreen is a Generative AI Strategist at the AWS Generative AI Innovation Center, where her focus is to help customers realize business value and operational efficiency using generative AI. She has worked across the telecom, finance, healthcare, and energy industries and has onboarded machine learning workloads for numerous customers. Rifat is also deeply involved in MLOps, FMOps, and Responsible AI.
The authors would like to extend special thanks to Arslan Hussain, David Ping, Jarred Graber, and Raghvender Arni for their support, expertise, and guidance.