, I’ll present you step-by-step construct and deploy a chat powered with LLM — Gemini — in Streamlit and monitor the API utilization on Google Cloud Console. Streamlit is a Python framework that makes it tremendous simple to show your Python scripts into interactive net apps, with virtually no front-end work.
Lately, I constructed a undertaking, bordAI — a chat assistant powered by LLM built-in with instruments I developed to assist embroidery initiatives. After that, I made a decision to begin this collection of posts to share suggestions I’ve discovered alongside the best way.
Right here’s a fast abstract of the publish:
1 to six — Challenge Setup
7 to 13 — Constructing the Chat
14 to fifteen— Deploy and Monitor the app
1. Create a New GitHub repository
Go to GitHub and create a brand new repository.
2. Clone the repository domestically
→ Execute this command in your terminal to clone it:
git clone <your-repository-url>
3. Set Up a Digital Atmosphere (non-compulsory)
A Digital Atmosphere is sort of a separate area in your pc the place you possibly can set up a particular model of Python and libraries with out affecting the remainder of your system. That is helpful as a result of completely different initiatives would possibly want completely different variations of the identical libraries.
→ To create a digital surroundings:
pyenv virtualenv 3.9.14 chat-streamlit-tutorial
→ To activate it:
pyenv activate chat-streamlit-tutorial
4. Challenge Construction
A undertaking construction is only a method to arrange all of the recordsdata and folders on your undertaking. Ours will appear to be this:
chat-streamlit-tutorial/
│
├── .env
├── .gitignore
├── app.py
├── features.py
├── necessities.txt
└── README.md
.env
→ file the place you retailer your API key (not pushed to GitHub).gitignore
→ file the place you checklist the recordsdata or folders for git to disregardapp.py
→ principal streamlit appfeatures.py
→ customized features to higher arrange the codenecessities.txt
→ checklist of libraries your undertaking wantsREADME.md
→ file that explains what your undertaking is about
→ Execute this inside your undertaking folder to create these recordsdata:
contact .env .gitignore app.py features.py necessities.txt
→ Contained in the file .gitignore
, add:
.env
__pycache__/
→ Add this to the necessities.txt
:
streamlit
google-generativeai
python-dotenv
→ Set up dependencies:
pip set up -r necessities.txt
5. Get API Key
An API Key is sort of a password that tells a service you could have permission to make use of it. On this undertaking, we’ll use the Gemini API as a result of they’ve a free tier, so you possibly can mess around with it with out spending cash.
Don’t arrange billing when you simply need to use the free tier. It ought to say “Free” beneath “Plan”, similar to right here:
We’ll use gemini-2.0-flash on this undertaking. It gives a free tier, as you possibly can see within the desk beneath:

- 15 RPM = 15 Requests per minute
- 1,000,000 TPM = 1 Million Tokens Per Minute
- 1,500 RPD = 1,500 Requests Per Day
Observe: These limits are correct as of April 2025 and should change over time.
Only a heads up: in case you are utilizing the free tier, Google might use your prompts to enhance their merchandise, together with human critiques, so it’s not advisable to ship delicate data. If you wish to learn extra about this, test this link.
6. Retailer your API Key
We’ll retailer our API Key inside a .env
file. A .env
file is an easy textual content file the place you retailer secret data, so that you don’t write it instantly in your code. We don’t need it going to GitHub, so now we have so as to add it to our .gitignore
file. This file determines which recordsdata git ought to actually ignore if you push your adjustments to the repository. I’ve already talked about this partially 4, “Challenge Construction”, however simply in case you missed it, I’m repeating it right here.
This step is basically necessary, don’t neglect it!
→ Add this to .gitignore
:
.env
__pycache__/
→ Add the API Key to .env
:
API_KEY= "your-api-key"
In the event you’re working domestically, .env
works advantageous. Nevertheless, when you’re deploying in Streamlit later, you’ll have to use st.secrets and techniques
. Right here I’ve included a code that may work in each situations.
→Add this perform to your features.py
:
import streamlit as st
import os
from dotenv import load_dotenv
def get_secret(key):
"""
Get a secret from Streamlit or fallback to .env for native growth.
This enables the app to run each on Streamlit Cloud and domestically.
"""
attempt:
return st.secrets and techniques[key]
besides Exception:
load_dotenv()
return os.getenv(key)
→ Add this to your app.py
:
import streamlit as st
import google.generativeai as genai
from features import get_secret
api_key = get_secret("API_KEY")
7. Select the mannequin
I selected gemini-2.0-flash for this undertaking as a result of I feel it’s an awesome mannequin with a beneficiant free tier. Nevertheless, you possibly can discover different mannequin choices that additionally provide free tiers and select your most popular one.

- Professional: fashions designed for excessive–high quality outputs, together with reasoning and creativity. Typically used for complicated duties, problem-solving, and content material era. They’re multimodal — this implies they will course of textual content, picture, video, and audio for enter and output.
- Flash: fashions projected for velocity and value effectivity. Can have lower-quality solutions in comparison with the Professional for complicated duties. Typically used for chatbots, assistants, and real-time purposes like automated phrase completion. They’re multimodal for enter, and for output is at the moment simply textual content, different options are in growth.
- Lite: even sooner and cheaper than Flash, however with some diminished capabilities, reminiscent of it’s multimodal just for enter and text-only output. Its principal attribute is that it’s extra economical than the Flash, very best for producing massive quantities of textual content inside value restrictions.
This link has loads of particulars concerning the fashions and their variations.
Right here we’re establishing the mannequin. Simply exchange “gemini-2.0-flash” with the mannequin you’ve chosen.
→ Add this to your app.py
:
genai.configure(api_key=api_key)
mannequin = genai.GenerativeModel("gemini-2.0-flash")
8. Construct the chat
First, let’s focus on the important thing ideas we’ll use:
st.session_state
: this works like a reminiscence on your app. Streamlit reruns your script from prime to backside each time one thing adjustments — if you ship a message or click on a button — so usually, all of the variables could be reset. This enables Streamlit to recollect values between reruns. Nevertheless, when you refresh your net web page you’ll lose thesession_state
.st.chat_message(title, avatar)
: Creates a chat bubble for a message within the interface. The primary parameter is the title of the message writer, which could be “consumer”, “human”, “assistant”, “ai”, or str. In the event you use consumer/human and assistant/ai, it already has default avatars of consumer and bot icons. You may change this if you wish to. Take a look at the documentation for extra particulars.st.chat_input(placeholder)
: Shows an enter field on the backside for the consumer to sort messages. It has many parameters, so I like to recommend you take a look at the documentation.
First, I’ll clarify every a part of the code individually, and after I’ll present you the entire code collectively.
This preliminary step initializes your session_state
, the app’s “reminiscence”, to maintain all of the messages inside one session.
if "chat_history" not in st.session_state:
st.session_state.chat_history = []
Subsequent, we’ll set the primary default message. That is non-compulsory, however I like so as to add it. You could possibly add some preliminary directions if appropriate on your context. Each time Streamlit runs the web page and st.session_state.chat_history
is empty, it’ll append this message to the historical past with the position “assistant”.
if not st.session_state.chat_history:
st.session_state.chat_history.append(("assistant", "Hello! How can I provide help to?"))
In my app bordAI, I added this preliminary message giving context and directions for my app:

For the consumer half, the primary line creates the enter field. If user_message
incorporates content material, it writes it to the interface after which appends it to chat_history
.
user_message = st.chat_input("Sort your message...")
if user_message:
st.chat_message("consumer").write(user_message)
st.session_state.chat_history.append(("consumer", user_message))
Now let’s add the assistant half:
system_prompt
is the immediate despatched to the mannequin. You could possibly simply ship theuser_message
instead offull_input
(have a look at the code beneath). Nevertheless, the output won’t be exact. A immediate gives context and directions about how you need the mannequin to behave, not simply what you need it to reply. An excellent immediate makes the mannequin’s response extra correct, constant, and aligned together with your objectives. As well as, with out telling how our mannequin ought to behave, it’s susceptible to immediate injections.
Immediate injection is when somebody tries to govern the mannequin’s immediate with a view to alter its conduct. One method to mitigate that is to construction prompts clearly and delimit the consumer’s message inside triple quotes.
We’ll begin with a easy and unclear system_prompt
and within the subsequent session we’ll make it higher to match the distinction.
full_input
: right here, we’re organizing the enter, delimiting the consumer message with triple quotes (“””). This doesn’t stop all immediate injections, however it’s one method to create higher and extra dependable interactions.response
: sends a request to the API, storing the output in response.assistant_reply
: extracts the textual content from the response.
Lastly, we use st.chat_message()
mixed to write()
to show the assistant reply and append it to the st.session_state.chat_history
, similar to we did with the consumer.
if user_message:
st.chat_message("consumer").write(user_message)
st.session_state.chat_history.append(("consumer", user_message))
system_prompt = f"""
You're an assistant.
Be good and type in all of your responses.
"""
full_input = f"{system_prompt}nnUser message:n"""{user_message}""""
response = mannequin.generate_content(full_input)
assistant_reply = response.textual content
st.chat_message("assistant").write(assistant_reply)
st.session_state.chat_history.append(("assistant", assistant_reply))
Now let’s see every thing collectively!
→ Add this to your app.py
:
import streamlit as st
import google.generativeai as genai
from features import get_secret
api_key = get_secret("API_KEY")
genai.configure(api_key=api_key)
mannequin = genai.GenerativeModel("gemini-2.0-flash")
if "chat_history" not in st.session_state:
st.session_state.chat_history = []
if not st.session_state.chat_history:
st.session_state.chat_history.append(("assistant", "Hello! How can I provide help to?"))
user_message = st.chat_input("Sort your message...")
if user_message:
st.chat_message("consumer").write(user_message)
st.session_state.chat_history.append(("consumer", user_message))
system_prompt = f"""
You're an assistant.
Be good and type in all of your responses.
"""
full_input = f"{system_prompt}nnUser message:n"""{user_message}""""
response = mannequin.generate_content(full_input)
assistant_reply = response.textual content
st.chat_message("assistant").write(assistant_reply)
st.session_state.chat_history.append(("assistant", assistant_reply))
To run and check your app domestically, first navigate to the undertaking folder, then execute the next command.
→ Execute in your terminal:
cd chat-streamlit-tutorial
streamlit run app.py
Yay! You now have a chat working in Streamlit!
9. Immediate Engineering
Immediate Engineering is a strategy of writing directions to get the absolute best output from an AI mannequin.
There are many methods for immediate engineering. Listed below are 5 suggestions:
- Write clear and particular directions.
- Outline a task, anticipated conduct, and guidelines for the assistant.
- Give the correct amount of context.
- Use the delimiters to point consumer enter (as I defined partially 8).
- Ask for the output in a specified format.
The following pointers could be utilized to the system_prompt
or if you’re writing a immediate to work together with the chat assistant.
Our present system immediate is:
system_prompt = f"""
You're an assistant.
Be good and type in all of your responses.
"""
It’s tremendous imprecise and gives no steering to the mannequin.
- No clear path for the assistant, what sort of assist it ought to present
- No specification of the position or what’s the subject of the help
- No tips for structuring the output
- No context on whether or not it must be technical or informal
- Lack of boundaries
We will enhance our immediate based mostly on the guidelines above. Right here’s an instance.
→ Change the system_prompt
within the app.py
:
system_prompt = f"""
You're a pleasant and a programming tutor.
All the time clarify ideas in a easy and clear means, utilizing examples when doable.
If the consumer asks one thing unrelated to programming, politely deliver the dialog again to programming matters.
"""
full_input = f"{system_prompt}nnUser message:n"""{user_message}""""
If we ask “What’s python?” to the outdated immediate, it simply offers a generic brief reply:

With the brand new immediate, it gives a extra detailed response with examples:


Strive altering the system_prompt
your self to see the distinction within the mannequin outputs and craft the best immediate on your context!
10. Select Generate Content material Parameters
There are lots of parameters you possibly can configure when producing content material. Right here I’ll exhibit how temperature
and maxOutputTokens
work. Verify the documentation for extra particulars.
temperature
: controls the randomness of the output, starting from 0 to 2. The default is 1. Decrease values produce extra deterministic outputs, whereas increased values produce extra artistic ones.maxOutputTokens
: the utmost variety of tokens that may be generated within the output. A token is roughly 4 characters.
To vary the temperature dynamically and check it, you possibly can create a sidebar slider to manage this parameter.
→ Add this to app.py
:
temperature = st.sidebar.slider(
label="Choose the temperature",
min_value=0.0,
max_value=2.0,
worth=1.0
)
→ Change the response
variable to:
response = mannequin.generate_content(
full_input,
generation_config={
"temperature": temperature,
"max_output_tokens": 1000
}
)
The sidebar will appear to be this:

Strive adjusting the temperature to see how the output adjustments!
11. Show chat historical past
This step ensures that you simply maintain observe of all of the exchanged messages within the chat, so you possibly can see the chat historical past. With out this, you’d solely see the most recent messages from the assistant and consumer every time you ship one thing.
This code accesses every thing appended to chat_history
and shows it within the interface.
→ Add this earlier than the if user_message
in app.py
:
for position, message in st.session_state.chat_history:
st.chat_message(position).write(message)
Now, all of the messages inside one session are saved seen within the interface:

Obs: I attempted to ask a non-programming query, and the assistant tried to alter the topic again to programming. Our immediate is working!
12. Chat with reminiscence
Apart from having messages saved in chat_history
, our mannequin isn’t conscious of the context of our dialog. It’s stateless, every transaction is impartial.

To resolve this, now we have to cross all this context inside our immediate so the mannequin can reference earlier messages exchanged.
Create context
which is an inventory containing all of the messages exchanged till that second. Including lastly the newest consumer message, so it doesn’t get misplaced within the context.
system_prompt = f"""
You're a pleasant and educated programming tutor.
All the time clarify ideas in a easy and clear means, utilizing examples when doable.
If the consumer asks one thing unrelated to programming, politely deliver the dialog again to programming matters.
"""
full_input = f"{system_prompt}nnUser message:n"""{user_message}""""
context = [
*[
{"role": role, "parts": [{"text": msg}]} for position, msg in st.session_state.chat_history
],
{"position": "consumer", "elements": [{"text": full_input}]}
]
response = mannequin.generate_content(
context,
generation_config={
"temperature": temperature,
"max_output_tokens": 1000
}
)
Now, I informed the assistant that I used to be engaged on a undertaking to investigate climate information. Then I requested what the theme of my undertaking was and it appropriately answered “climate information evaluation”, because it now has the context of the earlier messages.

In case your context will get too lengthy, you possibly can think about summarizing it to save lots of prices, for the reason that extra tokens you ship to the API, the extra you’ll pay.
13. Create a Reset Button (non-compulsory)
I like including a reset button in case one thing goes flawed or the consumer simply desires to clear the dialog.
You simply have to create a perform to set de chat_history
as an empty checklist. In the event you created different session states, it is best to set them right here as False or empty, too.
→ Add this to features.py
:
def reset_chat():
"""
Reset the Streamlit chat session state.
"""
st.session_state.chat_history = []
st.session_state.instance = False # Add others if wanted
→ And if you would like it within the sidebar, add this to app.py
:
from features import get_secret, reset_chat
if st.sidebar.button("Reset chat"):
reset_chat()
It should appear to be this:

Every part collectively:
import streamlit as st
import google.generativeai as genai
from features import get_secret, reset_chat
api_key = get_secret("API_KEY")
genai.configure(api_key=api_key)
mannequin = genai.GenerativeModel("gemini-2.0-flash")
temperature = st.sidebar.slider(
label="Choose the temperature",
min_value=0.0,
max_value=2.0,
worth=1.0
)
if st.sidebar.button("Reset chat"):
reset_chat()
if "chat_history" not in st.session_state:
st.session_state.chat_history = []
if not st.session_state.chat_history:
st.session_state.chat_history.append(("assistant", "Hello! How can I provide help to?"))
for position, message in st.session_state.chat_history:
st.chat_message(position).write(message)
user_message = st.chat_input("Sort your message...")
if user_message:
st.chat_message("consumer").write(user_message)
st.session_state.chat_history.append(("consumer", user_message))
system_prompt = f"""
You're a pleasant and a programming tutor.
All the time clarify ideas in a easy and clear means, utilizing examples when doable.
If the consumer asks one thing unrelated to programming, politely deliver the dialog again to programming matters.
"""
full_input = f"{system_prompt}nnUser message:n"""{user_message}""""
context = [
*[
{"role": role, "parts": [{"text": msg}]} for position, msg in st.session_state.chat_history
],
{"position": "consumer", "elements": [{"text": full_input}]}
]
response = mannequin.generate_content(
context,
generation_config={
"temperature": temperature,
"max_output_tokens": 1000
}
)
assistant_reply = response.textual content
st.chat_message("assistant").write(assistant_reply)
st.session_state.chat_history.append(("assistant", assistant_reply))
14. Deploy
In case your repository is public, you possibly can deploy with Streamlit without spending a dime.
MAKE SURE YOU DO NOT HAVE API KEYS ON YOUR PUBLIC REPOSITORY.
First, save and push your code to the repository.
→ Execute in your terminal:
git add .
git commit -m "tutorial chat streamlit"
git push origin principal
Pushing instantly into the principal
isn’t a finest follow, however because it’s only a easy tutorial, we’ll do it for comfort.
- Go to your streamlit app that’s working domestically.
- Click on on “Deploy” on the prime proper.
- In Streamlit Group Cloud, click on “Deploy now”.
- Fill out the data.

5. Click on on “Superior settings” and write API_KEY="your-api-key"
, similar to you probably did with the .env
file.
6. Click on “Deploy”.
All accomplished! In the event you’d like, take a look at my app here! 🎉
15. Monitor API utilization on Google Console
The final a part of this publish exhibits you monitor API utilization on the Google Cloud Console. That is necessary when you deploy your app publicly, so that you don’t have any surprises.
- Entry Google Cloud Console.
- Go to “APIs and companies”.
- Click on on “Generative Language API”.

- Requests: what number of occasions your API was known as. In our case, the API known as every time we run
mannequin.generate_content(context)
. - Error (%): the share of requests that failed. Errors can have the code 4xx which is often the consumer’s/requester’s fault — for example, 400 for unhealthy enter, and 429 means you’re hitting the API too incessantly. As well as, errors with the code 5xx are often the system’s/server’s fault and are much less widespread. Google usually retries internally or recommends retrying after a couple of seconds — e.g. 500 for Inner Server Error and 503 for Service Unavailable.
- Latency, median (ms): This exhibits how lengthy (in milliseconds) it takes on your service to reply, on the fiftieth percentile — that means half the requests are sooner and half are slower. It’s an excellent common measure of your service’s velocity, answering the query, “How briskly is it usually?”.
- Latency, 95% (ms): This exhibits the response time on the ninety fifth percentile — that means 95% of requests are sooner than this time, and solely 5% slower. It helps to establish how your system behaves beneath heavy load or with slower circumstances, answering the query, “How unhealthy is it getting for some customers?”.
A fast instance of the distinction between Latency median and Latency p95:
Think about your service often responds in 200ms:
- Median latency = 200ms (good!)
- p95 latency = 220ms (additionally good)
Now beneath heavy load:
- Median latency = 220ms (nonetheless appears to be like OK)
- p95 latency = 1200ms (not good)
The metric p95 exhibits that 5% of your customers are ready greater than 1.2 seconds — a a lot worse expertise. If we had regarded simply on the median, we’d assume every thing was advantageous, however p95 exhibits hidden issues.
Persevering with within the “Metrics” web page, you’ll discover graphs and, on the backside, the strategies known as by the API. Additionally, in “Quotas & System Limits”, you possibly can monitor the API utilization in comparison with the free tier restrict.

Click on “Present utilization chart” to match utilization daily.

I hope you loved this tutorial.
You’ll find all of the code for this undertaking on my GitHub.
I’d love to listen to your ideas! Let me know within the feedback what you suppose.
Observe me on: