Editor's image | Midjourney and Canva
Robin Sharma said, "Every teacher was once a beginner. Every professional was once an amateur." You've heard about Large Language Models (LLMs), AI, and transformer models like GPT making waves in the AI space for a while now, and you don't know how to get started. I can assure you that everyone you see building complex applications today was once in the same position.
That is why this article equips you with the knowledge you need to start creating LLM applications with the Python programming language. It is strictly beginner-friendly, and you can code along while reading.
What will you build in this article? You will create a simple AI personal assistant that generates a response based on the user's request, and you will deploy it so it can be accessed globally. The image below shows what the finished application looks like.
<img decoding="async" src="https://technicalterrence.com/wp-content/uploads/2024/06/1717700989_457_Beginner39s-Guide-to-Building-LLM-Applications-with-Python.png" alt="The user interface of the AI personal assistant that will be created in this article." width="100%"/>
This image shows the user interface of the AI personal assistant that will be created in this article.
Prerequisites
To follow along with this article, there are a few things you need to have in place:
- Python (3.5+) and experience writing Python scripts.
- OpenAI: OpenAI is a research organization and technology company that aims to ensure that artificial general intelligence (AGI) benefits all of humanity. One of its key contributions is the development of advanced LLMs such as GPT-3 and GPT-4. These models can understand and generate human-like text, making them powerful tools for applications such as chatbots, content creation, and more.
Register for an OpenAI account and copy your API key from the API section of your account so you can access the models. Install the OpenAI package on your computer using the following command:
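pip install openai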
- LangChain: LangChain is a framework designed to simplify the development of applications that leverage LLMs. It provides tools and utilities to manage and optimize the various aspects of working with LLMs, making it easier to create complex and robust applications.
Install LangChain on your computer using the following command:
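pip install langchain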
- Streamlit: Streamlit is a powerful and easy-to-use Python library for building web applications. Streamlit allows you to create interactive web applications using only Python. You don't need web development experience (HTML, CSS, JavaScript) to create functional and visually appealing web applications.
It is particularly useful for building data science and machine learning applications, including those that use LLMs. Install Streamlit on your computer using the following command:
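pip install streamlit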
Code walkthrough
With all the necessary packages and libraries installed, it's time to start building the LLM application. Create a requirements.txt file in the root of your working directory and save the dependencies.
streamlit
openai
langchain
Create an app.py file and add the code below.
# Importing the necessary modules from the Streamlit and LangChain packages
import streamlit as st
from langchain.llms import OpenAI
- import streamlit as st imports the Streamlit library, which is used to create interactive web applications.
- from langchain.llms import OpenAI imports the OpenAI class from the langchain.llms module, which is used to interact with OpenAI language models.
# Setting the title of the Streamlit application
st.title('Simple LLM-App 🤖')
- st.title('Simple LLM-App 🤖') sets the title of the Streamlit application.
# Creating a sidebar input widget for the OpenAI API key, input type is password for security
openai_api_key = st.sidebar.text_input('OpenAI API Key', type="password")
- openai_api_key = st.sidebar.text_input('OpenAI API Key', type="password") creates a text input widget in the sidebar for the user to enter their OpenAI API key. The input type is set to 'password' to hide the entered text for security.
# Defining a function to generate a response using the OpenAI language model
def generate_response(input_text):
    # Initializing the OpenAI language model with a specified temperature and API key
    llm = OpenAI(temperature=0.7, openai_api_key=openai_api_key)
    # Displaying the generated response as an informational message in the Streamlit app
    st.info(llm(input_text))
- def generate_response(input_text) defines a function called generate_response that takes input_text as an argument.
- llm = OpenAI(temperature=0.7, openai_api_key=openai_api_key) initializes the OpenAI class with a temperature setting of 0.7 and the provided API key.
Temperature is a parameter used to control the randomness, or creativity, of the text generated by a language model. It determines how much variability the model introduces into its predictions; you can compare settings yourself with the short sketch after this list.
- Low temperature (0.0 – 0.5): This makes the model more deterministic and focused.
- Medium temperature (0.5 – 1.0): Provides a balance between randomness and determinism.
- High temperature (1.0 and above): Increases the randomness of the output. Higher values make the model more creative and diverse in its responses, but this can also lead to less consistency and more nonsensical or off-topic results.
- st.info(llm(input_text)) calls the language model with the provided input_text and displays the generated response as an informational message in the Streamlit application.
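To see the effect of temperature for yourself, here is a minimal standalone sketch (separate from the app) that calls the same OpenAI wrapper from langchain.llms at a few different temperature values. It assumes your API key is available in the OPENAI_API_KEY environment variable, and the prompt string is only an illustrative example.
# Minimal sketch: compare model outputs at different temperature settings
from langchain.llms import OpenAI

prompt = "Suggest a name for a personal assistant app."  # illustrative prompt

for temperature in (0.0, 0.7, 1.2):
    # The wrapper reads the API key from the OPENAI_API_KEY environment variable when it is not passed explicitly
    llm = OpenAI(temperature=temperature)
    print(f"temperature={temperature}: {llm(prompt)}")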
# Creating a form in the Streamlit app for user input
with st.form('my_form'):
    # Adding a text area for user input
    text = st.text_area('Enter text:', '')
    # Adding a submit button for the form
    submitted = st.form_submit_button('Submit')
    # Displaying a warning if the entered API key does not start with 'sk-'
    if not openai_api_key.startswith('sk-'):
        st.warning('Please enter your OpenAI API key!', icon='⚠')
    # If the form is submitted and the API key is valid, generate a response
    if submitted and openai_api_key.startswith('sk-'):
        generate_response(text)
- with st.form('my_form') creates a form container called my_form.
- text = st.text_area('Enter text:', '') adds a text area input widget inside the form for the user to enter text.
- submitted = st.form_submit_button('Submit') adds a submit button to the form.
- if not openai_api_key.startswith('sk-') checks whether the entered API key does not start with sk-.
- st.warning('Please enter your OpenAI API key!', icon='⚠') displays a warning message if the API key is missing or invalid.
- if submitted and openai_api_key.startswith('sk-') checks if the form is submitted and the API key is valid.
- generate_response(text) calls the generate_response function with the entered text to generate and display the response.
Putting it all together, here is what you have:
# Importing the necessary modules from the Streamlit and LangChain packages
import streamlit as st
from langchain.llms import OpenAI
# Setting the title of the Streamlit application
st.title('Simple LLM-App 🤖')
# Creating a sidebar input widget for the OpenAI API key, input type is password for security
openai_api_key = st.sidebar.text_input('OpenAI API Key', type="password")
# Defining a function to generate a response using the OpenAI model
def generate_response(input_text):
    # Initializing the OpenAI model with a specified temperature and API key
    llm = OpenAI(temperature=0.7, openai_api_key=openai_api_key)
    # Displaying the generated response as an informational message in the Streamlit app
    st.info(llm(input_text))
# Creating a form in the Streamlit app for user input
with st.form('my_form'):
    # Adding a text area for user input
    text = st.text_area('Enter text:', '')
    # Adding a submit button for the form
    submitted = st.form_submit_button('Submit')
    # Displaying a warning if the entered API key does not start with 'sk-'
    if not openai_api_key.startswith('sk-'):
        st.warning('Please enter your OpenAI API key!', icon='⚠')
    # If the form is submitted and the API key is valid, generate a response
    if submitted and openai_api_key.startswith('sk-'):
        generate_response(text)
Running the application
The application is ready; you now need to run the application script using the appropriate command for the framework you are using.
By running this code using streamlit run app.py, you create an interactive web application where users can enter messages and receive text responses generated by LLM.
When you run streamlit run app.py, the following happens:
- Streamlit server starts: Streamlit starts a local web server on your machine, usually accessible at `http://localhost:8501` by default.
- Code execution: Streamlit reads and executes the code in `app.py`, rendering the application as defined in the script.
- Web interface: Your web browser automatically opens (or you can navigate manually) to the URL provided by Streamlit (usually http://localhost:8501), where you can interact with your LLM application.
Deploying your LLM application
Deploying an LLM application means making it accessible over the Internet so that others can use and test it without needing access to your local computer. This is important for collaboration, user feedback, and real-world testing, ensuring the app works well in various environments.
To deploy your application to Streamlit Cloud, follow these steps:
- Create a GitHub repository for your app. Make sure your repository includes two files: app.py and requirements.txt.
- Go to Streamlit Community Cloud, click the "New app" button from your workspace, and specify the repository, branch, and main file path.
- Click the "Deploy" button, and your LLM application will be deployed to Streamlit Community Cloud and accessible globally.
Conclusion
Congratulations! You have taken your first steps in creating and deploying an LLM application with Python. By understanding the prerequisites, installing the necessary libraries, and writing the core code of the application, you have now created a functional AI personal assistant. By using Streamlit, you made your application interactive and easy to use, and by deploying it to the Streamlit Community Cloud, you made it accessible to users around the world.
With the skills you've learned in this guide, you can delve deeper into LLMs and ai, explore more advanced features, and create even more sophisticated applications. Keep experimenting, learning and sharing your knowledge with the community. The possibilities with LLMs are vast and your journey is just beginning. Happy coding!
Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to create compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter at twitter.com/Shittu_Olumide_.