Arun Gupta on Democratizing Enterprise AI Development

Semaphore
4 min read · Sep 25, 2024


In terms of creativity, diversity, and pace of evolution, generative AI seems boundless. However, its enterprise adoption remains significantly limited. Businesses are uncertain about which AI choices are the right ones and struggle to grasp the technology’s potential while cutting through the fragmented offerings from major tech companies. In this episode, Arun Gupta, VP and GM for Open Ecosystem at Intel, tells us about the company’s contributions to the open-source community and shares his experience with the Open Platform for Enterprise AI (OPEA), which develops open-platform AI solutions for businesses.

Edited transcription

Arun Gupta’s extensive experience driving cultural change toward open source software includes working at Apple (where he built the company’s first open source program office), Amazon, Red Hat, and Sun Microsystems, and later Oracle, after it acquired Sun. Currently, he is the vice president and general manager of Intel’s Open Ecosystem and, consequently, one of the company’s leading voices on all things open source.

Across industries and computer technologies, companies rely on open-source software and expect it to work flawlessly out of the box. “That’s where Intel engineers go out in these open source communities,” says Arun, explaining that Intel’s involvement in open source is about satisfying its diverse client base across verticals and making sure the features are working as intended.

Contributing to the open-source community is as much part of Arun’s job as part of Intel’s DNA. “We have been the largest corporate contributor to Linux since 2007. We are one of the top contributors to Kubernetes. We participate in 300 plus open source projects: Kubernetes, OpenJDK, PyTorch, TensorFlow, GCC, Clang,” he affirms. Arun also points out that Intel’s participation in “700 plus standard bodies and foundations” benefits the sustainability of the open-source community as much as the company itself.

Foundational leadership in community projects

Aside from code itself, Intel contributes to the sustainability of open source projects largely through foundational leadership. The company has recently launched the Unified Acceleration Foundation, which aims to create an open standard for accelerated computing. Intel is also “a premier member” of the LF AI & Data foundation, and holds a seat on the governing board. “The effort is to build and support an open AI and data community and data-driven open source innovation in the AI and data domains,” Arun explains.

From its leadership position, Arun argues, Intel can “strategically shape the direction and initiatives of the foundation […] and technically, strategically, administratively, financially steer the direction to represent our customers’ and developers’ interest.”

Advocating for enterprise AI: The OPEA project

Despite the popularity of and growing demand for AI solutions, businesses are still in the early stages of learning how to leverage them. Diving into this new field raises many questions about use cases, security, and the best implementation approach. Big tech companies, for their part, are aware of the lack of consensus around, and unfamiliarity with, enterprise AI implementation and want to capitalize on it. “Every hyperscaler is crafting their own bespoke solution and saying, ‘We know it the best’,” says Arun.

To remedy this gap, Arun is currently deeply involved in the Open Platform for Enterprise AI (OPEA) project, an open platform initiative under the LF AI & Data foundation aimed at simplifying and opening up the development of enterprise AI applications.

As a development framework, OPEA can be used to build generative AI solutions, such as chatbots, providing microservice components like embedding and data storage, along with the ability to ground LLM responses in your own data for improved accuracy using Retrieval-Augmented Generation (RAG). As a complementary feature, OPEA offers assessments to evaluate the readiness and trustworthiness of applications and determine what stage of development they have reached.
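To make the retrieve-then-generate idea concrete, here is a minimal, self-contained Python sketch of a RAG flow. It is not OPEA code: the bag-of-words “embedding”, the in-memory document list, and the stubbed generate step are stand-ins for the embedding, vector-store, and LLM microservices that OPEA composes.

```python
from collections import Counter
import math

# Toy document store: in a real OPEA pipeline this would be a
# vector-database microservice, not an in-memory list.
DOCUMENTS = [
    "OPEA provides microservice building blocks for enterprise gen AI.",
    "Retrieval-Augmented Generation grounds model answers in your own data.",
    "Containers let the same microservices run on any cloud-native platform.",
]

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(query: str) -> str:
    """Stand-in for an LLM call: prepend retrieved context to the prompt."""
    context = " ".join(retrieve(query))
    return f"[context: {context}] answer to: {query}"

print(generate("How does RAG improve accuracy?"))
```

The structure is what matters here: retrieval runs before generation, so the model answers from your data rather than from its training set alone, which is the accuracy gain RAG promises.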

As with AI itself, Arun affirms, the project is growing at a remarkable pace: “We launched the project three and a half months ago now. At that time, we had 14 launch partners. We are now up to more than 40 launch partners.” Among these partners are prominent players such as Canonical, Infosys, Neo4j, Prediction Guard, and Docker, whose interests span the different types of AI applications that fit their business use cases.

To balance this pace of growth with its broad array of contributors, the project has a technical steering committee with members from “a wide range of the community,” says Arun, including a couple from Intel, since “that’s where the largest code contribution is coming in from.” Still, to maintain transparency, he keeps his project suggestions and issues in the GitHub repository rather than hidden away in internal project meetings. “I just fight the issue on GitHub,” says Arun, “because that’s part of the cultural change that we need to bring: if the leaders of the project are going to do some sidebar discussions, how would people outside the company understand?”

Regarding deployment, developers looking to get started quickly can package OPEA’s components as Docker containers and publish them to Docker Hub. This containerization strategy forms the backbone of OPEA’s deployment flexibility. “As we are building these microservices, using containers as a base layer for the microservices allows us to run it on any cloud-native platform,” Arun explains. Moreover, OPEA’s deployment options extend well beyond a single host: “You could get this up and running on Kubernetes, for example, your EKS or GKE or AKS, doesn’t matter, pick your favorite flavor of Kubernetes, and we can get it up and running over there.”
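For illustration, here is a hedged sketch of what calling one of these containerized services might look like from a client. Everything concrete in it (the localhost host, the 8888 port, the /v1/chatqna path, and the JSON shape) is an assumption made for the example rather than OPEA’s documented contract; consult the project’s GenAIExamples repository for the real interfaces.

```python
import json
import urllib.request

# Hypothetical endpoint of an OPEA-style chat microservice running in a
# local container; host, port, and path are illustrative assumptions.
ENDPOINT = "http://localhost:8888/v1/chatqna"

def ask(question: str) -> str:
    """POST a question to the chat service and return the raw response body."""
    payload = json.dumps({"messages": question}).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    print(ask("What deployment options does the platform support?"))
```

Because the service is just a container behind an HTTP endpoint, the same client code works whether the microservice runs via Docker on a laptop or behind a Kubernetes Service on EKS, GKE, or AKS; only the endpoint URL changes.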

The bottom line

OPEA is preparing a hackathon and inviting students and professionals worldwide to participate. If you want to be even more involved, OPEA also has working groups dedicated to different areas such as security, end users, and community. To learn more, visit opea.dev.

To keep updated with the OPEA project and participate in technical discussions first-hand, join OPEA’s mailing list and try the project out for yourself. If you have any questions about where to start, reach out to info@opea.dev.

Likewise, you can contribute by visiting and participating in OPEA’s GitHub page. Check out the GenAIComps and GenAIExamples repositories for a library of microservice components and a collection of generative AI examples, respectively.

Follow Arun on X and LinkedIn.

Note: Since this interview was recorded, OPEA has grown to more than 40 partners.

Originally published at https://semaphoreci.com on September 25, 2024.
