Building Your First AI Chatbot with Python in Under 1 Hour

📅 2026-04-25 · AI Quick Start Guide · ~ 32 min read

Ever built something that felt like magic the first time it worked? That’s exactly the feeling when your Python script actually talks back to you. Building an AI chatbot used to require a team of engineers and months of training. Today, you can do it in under an hour with a few lines of Python and an API key.

This tutorial walks you through creating a functional AI chatbot from scratch. You’ll learn how to set up your environment, connect to a large language model, and build a simple command-line interface. By the end, you’ll have a working AI chatbot that you can extend into a full-fledged Python chatbot for your own projects. And if you’re serious about building AI apps, this is the perfect starting point.

Let’s get our hands dirty.

---

What You’ll Need Before You Start

Before we write a single line of code, let’s gather the ingredients. Think of this like prepping your kitchen before cooking — it saves time and prevents frustration.

Prerequisites

- Python 3.8 or newer (check with python --version)
- An OpenAI API key from your OpenAI account
- A terminal and any text editor

Setting Up Your Environment

Open your terminal (Command Prompt on Windows, Terminal on macOS/Linux) and create a new project folder:

mkdir my-first-chatbot
cd my-first-chatbot

Now create a virtual environment to keep dependencies isolated:

python -m venv venv

Activate it:

# macOS/Linux
source venv/bin/activate

# Windows
venv\Scripts\activate

You should see (venv) appear in your terminal prompt.

Install the only external package we need:

pip install openai

That’s it. One package. We’ll keep things lean so you understand every moving part.

---

Building the Core Chatbot Logic

Now comes the fun part — writing the code that makes your AI chatbot actually think and respond. We’ll start with the simplest possible version, then add polish.

Step 1: The Minimal Chatbot

Create a file called chatbot.py in your project folder. Open it in your favorite text editor and paste this:

from openai import OpenAI

# Replace with your actual API key (better: load it from an environment variable)
client = OpenAI(api_key="sk-your-api-key-here")

def get_response(user_input):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input}
        ]
    )
    return response.choices[0].message.content

print("Chatbot ready! Type 'quit' to exit.")
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break
    reply = get_response(user_input)
    print(f"Bot: {reply}")

Replace "sk-your-api-key-here" with your actual API key. Run the script:

python chatbot.py

Type something like “Tell me a joke” and watch your Python chatbot come to life.

What’s happening under the hood?

1. We send the model a list of messages: a system message that sets the bot’s behavior, plus the user’s input.
2. The API returns a response object; the generated text lives in response.choices[0].message.content.
3. The while loop repeats this until you type “quit”.

This is the core pattern for any AI chatbot — send a prompt, get a reply, repeat.
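The loop itself doesn’t depend on any particular provider. Here’s a minimal sketch with a stand-in model function (fake_model and run_chat are illustrative names, not part of the OpenAI library), which makes the pattern easy to test without an API key:

```python
def fake_model(user_input):
    # Stand-in for the API call; a real model would generate text here.
    return f"(echo) {user_input}"

def run_chat(model, inputs):
    """Core chatbot loop: read input, get a reply, repeat until 'quit'."""
    replies = []
    for user_input in inputs:
        if user_input.lower() == "quit":
            break
        replies.append(model(user_input))
    return replies

print(run_chat(fake_model, ["hello", "quit"]))  # → ['(echo) hello']
```

Swapping fake_model for a function that calls the API gives you back the real chatbot, which is exactly why this structure is easy to extend later.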

Step 2: Adding Conversation Memory

Right now, your bot has amnesia. It forgets everything you said before. Real chatbots need context. Let’s fix that by keeping a conversation history.

Update your chatbot.py:

from openai import OpenAI

client = OpenAI(api_key="sk-your-api-key-here")

# This list stores the entire conversation
conversation = [
    {"role": "system", "content": "You are a friendly and knowledgeable assistant."}
]

def get_response(user_input):
    conversation.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=conversation
    )
    reply = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": reply})
    return reply

print("Chatbot with memory ready! Type 'quit' to exit.")
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break
    reply = get_response(user_input)
    print(f"Bot: {reply}")

Now your Python chatbot remembers previous messages. Try saying “My name is Alex” and then “What’s my name?” — it’ll answer correctly.

Why this matters: Every time you append to the conversation list, you’re building context. This is how production AI apps handle state — by maintaining a history of interactions.
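One consequence of an ever-growing list: long chats eventually exceed the model’s context window. A common remedy is pruning old turns while keeping the system prompt. A minimal sketch, where trim_history and max_messages are illustrative names rather than API features:

```python
def trim_history(conversation, max_messages=10):
    """Keep the system prompt plus only the most recent messages."""
    system = [m for m in conversation if m["role"] == "system"]
    rest = [m for m in conversation if m["role"] != "system"]
    return system + rest[-max_messages:]

# Example: a system prompt plus 15 user messages shrinks to 11 messages
convo = [{"role": "system", "content": "Be helpful."}]
convo += [{"role": "user", "content": f"msg {i}"} for i in range(15)]
print(len(trim_history(convo)))  # → 11
```

Production bots often summarize the pruned turns instead of discarding them, but a simple cap like this is enough for a hobby project.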

Step 3: Handling Errors Gracefully

APIs fail. Networks drop. Keys expire. A robust AI chatbot should handle these gracefully. Add error handling around the API call:

from openai import OpenAI, RateLimitError, AuthenticationError
import time

client = OpenAI(api_key="sk-your-api-key-here")

conversation = [
    {"role": "system", "content": "You are a helpful assistant."}
]

def get_response(user_input):
    conversation.append({"role": "user", "content": user_input})
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=conversation,
            max_tokens=150,
            temperature=0.7
        )
        reply = response.choices[0].message.content
    except RateLimitError:
        reply = "I'm getting too many requests. Please wait a moment."
        time.sleep(2)
    except AuthenticationError:
        reply = "API key error. Check your key."
    except Exception as e:
        reply = f"An error occurred: {str(e)}"
    conversation.append({"role": "assistant", "content": reply})
    return reply

print("Robust chatbot ready! Type 'quit' to exit.")
while True:
    user_input = input("You: ")
    if user_input.lower() == "quit":
        break
    reply = get_response(user_input)
    print(f"Bot: {reply}")

We added max_tokens to limit response length and temperature to control creativity (values near 0 are nearly deterministic; higher values produce more varied output). Try different values to see how your AI chatbot changes personality.
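To build intuition for what temperature does, here’s an illustrative sketch of temperature-scaled sampling over a toy score list. This is not the library’s internals — the real model applies the same idea over its entire vocabulary at every generated token:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample an index from softmax(logits / temperature)."""
    rng = rng or random.Random(0)
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Near-zero temperature almost always picks the highest-scoring option
print(sample_with_temperature([1.0, 5.0, 2.0], 0.01))  # → 1
```

Raise the temperature and the lower-scoring options start winning sometimes, which is exactly the “more creative” behavior you see in the chatbot’s replies.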

---

Taking It Further: Adding a Web Interface

A command-line bot is fun, but a real AI app needs a user interface. Let’s add a simple web UI using Flask. This turns your Python chatbot into something you can share with others.

Step 4: Install Flask and Build a Web App

Install Flask:

pip install flask

Create a new file called web_chatbot.py:

from flask import Flask, request, jsonify, render_template_string
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(api_key="sk-your-api-key-here")

conversations = {}  # Store conversations per session

HTML_TEMPLATE = """
<!DOCTYPE html>
<html>
<head>
    <title>My AI Chatbot</title>
    <style>
        body { font-family: Arial; max-width: 600px; margin: auto; padding: 20px; }
        #chat { height: 400px; overflow-y: scroll; border: 1px solid #ccc; padding: 10px; }
        input { width: 80%; padding: 8px; }
        button { padding: 8px; }
    </style>
</head>
<body>
    <h2>AI Chatbot</h2>
    <div id="chat"></div>
    <input id="userInput" placeholder="Type your message..." />
    <button onclick="sendMessage()">Send</button>
    <script>
        function sendMessage() {
            var input = document.getElementById('userInput');
            var message = input.value;
            if (!message) return;
            var chat = document.getElementById('chat');
            chat.innerHTML += '<p><b>You:</b> ' + message + '</p>';
            input.value = '';
            fetch('/chat', {
                method: 'POST',
                headers: {'Content-Type': 'application/json'},
                body: JSON.stringify({message: message})
            })
            .then(response => response.json())
            .then(data => {
                chat.innerHTML += '<p><b>Bot:</b> ' + data.reply + '</p>';
                chat.scrollTop = chat.scrollHeight;
            });
        }
    </script>
</body>
</html>
"""

@app.route('/')
def index():
    return render_template_string(HTML_TEMPLATE)

@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json()
    user_message = data.get('message', '')
    # Use a simple session ID (in production, use proper sessions)
    session_id = request.remote_addr
    if session_id not in conversations:
        conversations[session_id] = [{"role": "system", "content": "You are a helpful assistant."}]
    conversations[session_id].append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=conversations[session_id]
    )
    reply = response.choices[0].message.content
    conversations[session_id].append({"role": "assistant", "content": reply})
    return jsonify({"reply": reply})

if __name__ == '__main__':
    app.run(debug=True, port=5000)

Run it with:

python web_chatbot.py

Open your browser to http://localhost:5000. You now have a web-based AI chatbot with a persistent conversation per user session. This is the foundation you’d use to build AI apps for real users.
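You can also talk to the /chat endpoint from other programs. Here’s a sketch using only the standard library; it assumes the Flask server above is running on port 5000, and send_message and build_payload are illustrative helpers, not part of Flask:

```python
import json
import urllib.request

def build_payload(message):
    """Encode a chat message as the JSON body /chat expects."""
    return json.dumps({"message": message}).encode("utf-8")

def send_message(message, url="http://localhost:5000/chat"):
    """POST a message to the chatbot server and return the bot's reply."""
    req = urllib.request.Request(
        url,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["reply"]

# With the server running, you could try: print(send_message("Tell me a joke"))
```

The same request shape works from any language or tool that can send JSON over HTTP, which is what makes a small web API like this easy to reuse.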

Step 5: What You Can Do Next

Your Python chatbot is functional, but here’s how to level it up:

- Load the API key from an environment variable instead of hardcoding it
- Stream responses token by token so replies appear as they’re generated
- Experiment with different system prompts to change the bot’s persona
- Save conversation history to a file or database so it survives restarts

For more advanced patterns — like handling multi-turn dialogues, integrating with external APIs, or building a full AI app pipeline — check out the resources at aiflowyou.com. The site’s Learning Path section has structured guides that take you from this basic chatbot to production-ready AI applications.

Also, if you prefer learning on your phone, scan the QR code for the WeChat Mini Program “AI快速入门手册” — it’s packed with bite-sized tutorials and cheat sheets that complement this tutorial perfectly.

---

Summary and Next Steps

You just built a complete AI chatbot in Python — from a bare-bones script to a web application — all in under an hour. Let’s recap what you accomplished:

- Set up an isolated Python environment and installed the OpenAI package
- Built a minimal command-line chatbot that sends prompts and prints replies
- Added conversation memory so the bot keeps context across turns
- Wrapped the API call in error handling for rate limits and bad keys
- Turned the script into a Flask web app with per-session conversations

The code you wrote today is the same pattern used by companies building customer support bots, personal assistants, and educational tools. You now have a reusable template for building AI app prototypes around any idea.

Your action steps:

1. Run both versions end to end with your own API key
2. Tweak the system prompt and temperature to shape the bot’s personality
3. Add one new feature this week, like saving chat history to a file

Remember: every expert was once a beginner who wrote their first “Hello, World” bot. You’ve already gone beyond that. Now go build something awesome.

More AI learning resources at aiflowyou.com →

[QR code: scan to open the “AI快速入门手册” Mini Program]

[QR code: scan to add on WeChat]