Build a Personal AI Knowledge Base with Notion AI and RAG

📅 2026-05-09 · AI Quick Start Guide · ~ 32 min read

Imagine you’ve spent months curating bookmarks, saving PDFs, and jotting down notes across different apps. When you need to retrieve a specific insight, you find yourself scrolling through endless folders, frustrated by the friction. This is exactly the problem a personal AI knowledge base solves — and with the combination of Notion AI and a custom RAG (Retrieval-Augmented Generation) pipeline, you can build one that feels like a second brain.

In this guide, I’ll walk you through a practical, step-by-step approach to creating your own AI-powered knowledge base. You’ll learn how to structure your Notion workspace for optimal retrieval, integrate a lightweight RAG system using Python, and query your personal data with natural language. By the end, you’ll have a working prototype that turns your scattered notes into an intelligent, searchable assistant.

Why Notion AI Alone Isn’t Enough

Notion AI is fantastic for summarization, rewriting, and quick Q&A within a page. But it has a fundamental limitation: it can only “see” the content of the page you are currently viewing. It does not perform cross-page, semantic retrieval across your entire workspace. If you have a database of hundreds of notes, Notion AI cannot answer a question that requires synthesizing information from multiple pages.

This is where RAG comes in. RAG (Retrieval-Augmented Generation) combines a retrieval step — finding relevant documents from your knowledge base — with a generation step — feeding those documents to a large language model (LLM) to produce a grounded answer. By building a personal RAG system on top of your Notion database, you effectively give your AI assistant the ability to “remember” everything you’ve ever written.

The Architecture at a Glance

The system we'll build has three layers:

1. Ingestion: pull pages from your Notion database through the official API and split them into chunks.
2. Indexing: embed each chunk with sentence-transformers and store the vectors in a local Chroma database.
3. Querying: embed an incoming question, retrieve the most relevant chunks, and feed them to an LLM for a grounded answer.

All of this can run on your own machine, giving you full control over your data.

Building the Personal RAG Pipeline with Notion and Python

We’ll use the official Notion API to fetch your database content, sentence-transformers to create embeddings, and Chroma as our local vector store. For generation, we’ll use OpenAI’s API, but you can swap in any local model (e.g., Llama 2 via Ollama) for complete privacy.

Step 1: Setting Up Notion API Access

First, create a Notion integration and get your API key:

1. Go to https://www.notion.so/my-integrations and create a new integration for your workspace.
2. Copy the integration's secret token; this will be your NOTION_API_KEY.
3. Open the database you want to index and connect it to the integration (via the page's ··· menu → Connections) so the API can read its pages.

Now install the required Python packages:

pip install notion-client chromadb openai sentence-transformers python-dotenv

Create a .env file to store your keys:

NOTION_API_KEY=your_notion_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
NOTION_DATABASE_ID=your_database_id_here

You can find the database ID in the URL of your Notion database page: it's the 32-character string between the page name and the ?v= query parameter.
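For example, in a hypothetical URL like https://www.notion.so/myworkspace/Notes-1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d?v=..., the database ID is 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d.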

Step 2: Fetching and Chunking Notion Content

Write a Python script to pull all pages from your database. We'll chunk each page's text into segments of up to 512 words each to improve retrieval granularity.

import os
from notion_client import Client
from dotenv import load_dotenv

load_dotenv()

notion = Client(auth=os.getenv("NOTION_API_KEY"))
database_id = os.getenv("NOTION_DATABASE_ID")

def get_all_pages(database_id):
    # Fetch every page in the database, following pagination cursors.
    results = []
    start_cursor = None
    while True:
        # Only pass start_cursor once we have one; an explicit null cursor
        # on the first request can be rejected.
        kwargs = {"database_id": database_id}
        if start_cursor:
            kwargs["start_cursor"] = start_cursor
        response = notion.databases.query(**kwargs)
        results.extend(response["results"])
        if not response["has_more"]:
            break
        start_cursor = response["next_cursor"]
    return results

def extract_text(page):
    # Simplified: extracts plain text from a page's title and rich_text
    # properties only. Page body content lives in blocks and requires the
    # blocks endpoint (see "Handling Different Content Types" below).
    text_parts = []
    props = page.get("properties", {})
    for prop_name, prop_value in props.items():
        if prop_value["type"] == "title":
            for t in prop_value["title"]:
                text_parts.append(t["plain_text"])
        elif prop_value["type"] == "rich_text":
            for t in prop_value["rich_text"]:
                text_parts.append(t["plain_text"])
    return " ".join(text_parts)

def chunk_text(text, chunk_size=512):
    words = text.split()
    chunks = [" ".join(words[i:i+chunk_size]) for i in range(0, len(words), chunk_size)]
    return chunks

pages = get_all_pages(database_id)
all_chunks = []
for page in pages:
    text = extract_text(page)
    if text.strip():
        chunks = chunk_text(text)
        for chunk in chunks:
            all_chunks.append({
                "page_id": page["id"],
                "text": chunk
            })

print(f"Extracted {len(all_chunks)} chunks from {len(pages)} pages.")

This script queries your Notion database, extracts text from title and rich text fields, and splits long pages into manageable chunks. Each chunk retains a reference to its source page.

Step 3: Creating Embeddings and Storing in Chroma

Now we’ll convert each text chunk into a vector embedding using sentence-transformers and store them in Chroma, a lightweight vector database that runs locally.

from sentence_transformers import SentenceTransformer
import chromadb

# Initialize embedding model
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # Fast, good quality

# Initialize Chroma client (persistent storage)
chroma_client = chromadb.PersistentClient(path="./notion_kb_chroma")
collection = chroma_client.get_or_create_collection(name="notion_knowledge_base")

# Create embeddings and add to collection
texts = [chunk["text"] for chunk in all_chunks]
embeddings = embedder.encode(texts, show_progress_bar=True).tolist()
ids = [str(i) for i in range(len(all_chunks))]
metadatas = [{"page_id": chunk["page_id"]} for chunk in all_chunks]

collection.add(
    embeddings=embeddings,
    documents=texts,
    metadatas=metadatas,
    ids=ids
)

print(f"Added {len(texts)} chunks to Chroma vector store.")

That’s it. Your Notion content is now indexed in a local vector database. The all-MiniLM-L6-v2 model is a good balance between speed and accuracy for personal projects.

Step 4: Querying with RAG

When a user asks a question, we:

1. Embed the question with the same sentence-transformers model.
2. Retrieve the top-k most similar chunks from Chroma.
3. Pack those chunks into a prompt as context.
4. Ask the LLM to answer using only that context.

from openai import OpenAI

openai_client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def query_knowledge_base(question, k=5):
    # Embed the question
    question_embedding = embedder.encode([question]).tolist()[0]

    # Retrieve top-k chunks
    results = collection.query(
        query_embeddings=[question_embedding],
        n_results=k
    )

    retrieved_docs = results["documents"][0]
    context = "\n\n".join(retrieved_docs)

    # Build prompt
    prompt = f"""You are a helpful assistant with access to a personal knowledge base.
Use the following context to answer the question. If the context doesn't contain the answer, say so.

Context:
{context}

Question: {question}

Answer:"""

    # Generate answer (openai>=1.0 client interface)
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a knowledgeable assistant."},
            {"role": "user", "content": prompt}
        ],
        temperature=0.3
    )

    return response.choices[0].message.content

# Example usage
answer = query_knowledge_base("What are the key principles of effective note-taking?")
print(answer)

This function returns an answer grounded in your actual Notion notes. Because we’ve limited the context to 5 chunks, responses are both relevant and token-efficient.
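Because each chunk's metadata carries its source page_id (stored in Step 3), you can also surface where an answer came from. A small sketch, reusing the embedder and collection defined above:

def retrieve_sources(question, k=5):
    # Return the distinct Notion page IDs behind the top-k retrieved chunks.
    q_emb = embedder.encode([question]).tolist()[0]
    results = collection.query(query_embeddings=[q_emb], n_results=k)
    return sorted({m["page_id"] for m in results["metadatas"][0]})

print(retrieve_sources("What are the key principles of effective note-taking?"))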

Enhancing Your AI Knowledge Base

The basic pipeline above works, but you can make it significantly smarter with a few tweaks.

Handling Different Content Types

Notion pages often contain images, toggles, and code blocks. For a richer retrieval experience, you can extract text from code blocks using the Notion API's code block type; page body content is exposed through the blocks endpoint rather than page properties. Similarly, you can include page titles in the metadata for better source attribution.
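Here's a minimal sketch of that, reusing the notion client from Step 2 (nested blocks and block pagination are omitted for brevity):

def extract_body_text(page_id):
    # Walk the page's top-level blocks and collect plain text from
    # paragraph and code blocks.
    parts = []
    children = notion.blocks.children.list(block_id=page_id)
    for block in children["results"]:
        block_type = block["type"]
        if block_type in ("paragraph", "code"):
            for t in block[block_type].get("rich_text", []):
                parts.append(t["plain_text"])
    return " ".join(parts)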

Improving Retrieval with Metadata Filtering

Chroma supports metadata filtering. For example, you can tag chunks by category (e.g., “programming”, “philosophy”) and filter queries to specific categories. Add a category field to each chunk’s metadata, then use collection.query(where={"category": "programming"}) to narrow searches.
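In code, that looks roughly like this (the "category" tag is a hypothetical field you would attach during ingestion):

# During ingestion: attach a category to each chunk's metadata
metadatas = [{"page_id": c["page_id"], "category": "programming"} for c in all_chunks]

# At query time: restrict retrieval to that category
results = collection.query(
    query_embeddings=[question_embedding],
    n_results=5,
    where={"category": "programming"}
)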

Using a Local LLM for Full Privacy

If you prefer not to send your notes to OpenAI, you can run a local model like Llama 2 or Mistral via Ollama. Install Ollama, pull a model (e.g., ollama pull llama2), and replace the OpenAI call with:

import requests

def query_local_llm(prompt):
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": prompt, "stream": False}
    )
    return response.json()["response"]

This keeps everything on your machine — perfect for sensitive information.

Automating Updates

Your Notion database changes over time. To keep your vector store in sync, you can run the ingestion script periodically (e.g., via cron job) or use Notion’s webhook capabilities to trigger updates when a page is modified. For a simpler approach, just re-run the script weekly.
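One caveat when re-running the ingestion: depending on the Chroma version, collection.add with IDs that already exist is either rejected or silently skipped, so changed chunks won't be refreshed. A sketch of a more re-run-friendly approach using upsert with stable, page-derived IDs (chunks from deleted pages still need a separate cleanup pass):

from collections import defaultdict

# Build stable IDs from the page ID plus the chunk's position within
# that page, so re-ingesting a page overwrites its old chunks.
counters = defaultdict(int)
ids = []
for chunk in all_chunks:
    pid = chunk["page_id"]
    ids.append(f"{pid}-{counters[pid]}")
    counters[pid] += 1

collection.upsert(
    embeddings=embeddings,
    documents=texts,
    metadatas=metadatas,
    ids=ids
)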

Practical Use Cases and Real-World Value

Once your personal AI knowledge base is running, you'll find yourself using it in unexpected ways.

This setup effectively turns your Notion workspace into a conversational partner. You stop relying on folder structures and start asking questions in plain English.

Where to Go from Here

Building a personal RAG system is one of the most rewarding AI projects you can tackle — it directly improves your daily workflow. The code above gives you a solid foundation, but the real magic comes from tailoring it to your specific needs. Experiment with different chunk sizes, try other embedding models, or add a simple web interface using Streamlit.
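For the web interface, a few lines of Streamlit go a long way. A minimal sketch, assuming the pipeline above lives in a module you can import (the name kb is hypothetical):

# app.py: run with `streamlit run app.py`
import streamlit as st

from kb import query_knowledge_base  # hypothetical module holding the pipeline

st.title("Personal Notion Knowledge Base")
question = st.text_input("Ask your notes a question:")
if question:
    with st.spinner("Thinking..."):
        st.write(query_knowledge_base(question))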

If you’re looking for more hands-on guides and original projects like this one, visit www.aiflowyou.com. You’ll find a growing library of tutorials covering AI agents, RAG systems, and practical Python automation. Also, don’t forget to check out the WeChat Mini Program "AI快速入门手册" — it’s a handy mobile companion for learning AI concepts on the go.

Your notes are a goldmine. Now you have the tool to mine them intelligently.
