What if you could build your own private GPT and connect it to your own knowledge base: technical solution description documents, design documents, technical manuals, RFC documents, configuration files, source code, scripts, MOPs (Methods of Procedure), reports, notes, journals, log files, technical specification documents, technical guides, Root Cause Analysis (RCA) documents, and more? Well, it’s possible, and I am about to show you how.
In this article, we are going to build a private GPT using a popular, free and open-source AI model called Llama2. We shall then connect Llama2 to a dockerized open-source graphical user interface (GUI) called Open WebUI to allow us to interact with the AI model via a professional-looking web interface. Using Open WebUI with RAG (Retrieval-Augmented Generation), we shall create a “Knowledge Base” and upload the documents we want Llama2 to consult when answering our questions. And all this will be hosted locally on your PC or server, running on CPU or GPU. Please note that having a GPU is an added advantage because AI models perform much better on GPUs than on CPUs. To successfully accomplish this task, here is what I believe are the basic requirements. Surprisingly simple, but it works!
- A server or PC; with a CPU/GPU (GPU preferred) @ 1.30 to 1.50 GHz, 16 GB RAM, and 500 GB storage
- An internet connection; needed only during installation, setup, and upgrades. No internet connection is needed to run and use your private AI.
- OS; Linux or macOS is preferred, but the setup can also run on Windows when combined with WSL (Windows Subsystem for Linux). I will be running this demo on Ubuntu Linux 22.04.
- Ollama API; an open-source platform that simplifies the process of installing and running large language models (LLMs) locally on consumer hardware. It acts as a bridge between the user’s hardware and the complex requirements of running LLMs, making the process more accessible.
- Docker; a platform that packages software into lightweight, standalone containers that include everything needed to run it: code, runtime, system tools, system libraries, and settings. Docker containers simplify deployment and ensure consistency across different environments.
- Llama2; a large language model (LLM) developed by Meta and released in partnership with Microsoft. It is open source and available for commercial use, allowing developers to build upon it and create custom applications.
- Open WebUI; a user-friendly interface for Llama2; this will be deployed via Docker.
- A collection of your documents; PDFs, Word documents, text files, etc. These will form your knowledge base.
But before we dive deep into the details of this demo, why build a private AI server when you can access public AI like ChatGPT, Copilot, or Gemini?
- Data privacy and security; a local, self-hosted AI gives you unparalleled privacy and control. When you are working with private, confidential, or proprietary information, a private AI puts you in control of your data.
- Private AI is customizable and adaptable; using a process known as fine-tuning, you can adapt a pre-trained AI model like Llama2 to accomplish specific tasks and explore endless possibilities. This happens by training the model on new, specialized data. Also, through a technique known as RAG (Retrieval-Augmented Generation), you can connect the LLM to your personalized and private knowledge sources, like documents or databases, to enhance the model’s responses and provide more accurate information.
- Empowering individuals and businesses; by leveraging local LLMs and RAG, individuals and businesses can create private, customized knowledge bases and question-answering chatbots based on personalized documents and company data, without relying on external services. This is truly an empowering experience.
- No internet connection needed; yes, you can run a private AI offline without the need to connect to external servers. This enables enterprises to run and manage their own private AI infrastructure, including the deployment and fine-tuning of LLMs on their on-premises hardware.
Let’s also take some time to understand the most commonly used terms in AI and the different building blocks of private AI before we get on with the demo.
What is an AI Model?
An AI model is a computer program trained on a large dataset to recognize patterns and make predictions or decisions. These models can perform tasks like text generation, image generation, image recognition, language translation, games, chat, etc. Just as a sniffer dog is trained to detect specific scents, an AI model is trained to identify patterns in data. Understanding what AI models are is fundamental to understanding how AI works and what its limitations are.
What is RAG?
RAG (Retrieval-Augmented Generation) lets your AI access your internal data for accurate and relevant answers. It’s like having a research assistant built into your AI. RAG gives you real-time accuracy with your personal, private and confidential data by connecting the AI to your knowledge base. RAG largely eliminates the need for model retraining because it allows the pre-trained model to consult your private data before answering, which maintains accuracy and enhances responses.
What is Hugging Face?
“Hugging Face” is a platform and community for sharing and collaborating on machine learning models, datasets, and related resources. It hosts a vast collection of pre-trained models (over one million and counting at the time of this article) that can be used for various tasks (e.g. Llama2, which we shall be using in this demo). Think of Hugging Face as a GitHub for AI: a place where developers can share and collaborate, but on models instead of code. Hugging Face makes it easy to access and use existing AI models, lowering the barrier to entry for developing AI applications. Hugging Face is the home of AI models.
With this background information, we are now ready to set up and run Llama2 with Open WebUI. Follow the detailed steps below (you will need an active internet connection during this setup to access the online software repos, after which you can run your private AI without internet):
A) Install a Compatible Operating System (Linux or macOS preferred)
Install a compatible OS (macOS or Linux); Windows is supported when combined with WSL (Windows Subsystem for Linux). For this demo, we shall use Ubuntu Linux 22.04, which belongs to the Debian distro family. Also remember to update/upgrade your OS; this ensures that your OS has the latest software updates and patches before adding applications.
These commands are used to update and upgrade your OS to the latest software patches:
# apt update
# apt upgrade -y
B) Download and install Ollama API Service
Ollama from ollama.com is a tool for managing and running large language models locally. It facilitates the download and execution of models like Llama2, ensuring they are readily available for use within Open WebUI.
The following command will download, install and setup ollama on your server.
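# curl -fsSL https://ollama.com/install.sh | sh
This is the official one-line installer from ollama.com; it sets up Ollama as a service on your machine.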
The following command will test and verify your Ollama installation.
# curl http://127.0.0.1:11434
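If the Ollama service is up and running, this command returns the message “Ollama is running”.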
C) Download and install Llama2 AI model
Llama2 will be the base for our private AI project, and installing Llama2 on Linux is surprisingly easy with Ollama. Just one command and you’re ready to start interacting with the Llama2 AI model.
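The following command will pull the Llama2 model and drop you into an interactive chat session in the terminal (the first run downloads several gigabytes of model weights, so it may take a while):
# ollama run llama2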
If the above runs successfully, you are now interacting with the Llama2 AI model, but only via the CLI. The next step is to install Open WebUI and interact with the model using a graphical interface.
D) Download and install Docker Engine
Because we want to dockerize Open WebUI (run it in a Docker container), we shall first install and set up the Docker environment before installing Open WebUI. For detailed information on how to install Docker Engine on Linux, please refer to the official Docker docs. For this project, I am using Ubuntu, and here is a detailed guide on how to install Docker Engine on Ubuntu using the “apt” repo. The commands below mirror the official Docker documentation at the time of writing; check docs.docker.com for the current versions.
i) Set up Docker’s “apt” repo
# Add Docker’s official GPG key:
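# apt-get update
# apt-get install ca-certificates curl
# install -m 0755 -d /etc/apt/keyrings
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
# chmod a+r /etc/apt/keyrings/docker.asc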
# Add the repository to Apt sources:
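# echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null
# apt-get update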
ii) Install Docker Engine and its goodies:
The following command will download and install Docker Engine and its dependencies.
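# apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin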
iii) Test Docker Engine installation by running the hello-world image. When the container runs, it prints a confirmation message and exits.
The following command downloads a test image and runs it in a container.
# docker run hello-world
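If the installation is working correctly, you will see a “Hello from Docker!” confirmation message.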
E) Download and install Open WebUI Container
Open WebUI will be your private AI control panel, and it provides a beautiful GUI for Llama2 in this project. It is an extensible, feature-rich, easy-to-navigate, self-hosted WebUI designed to operate entirely offline. It supports various LLM runners, including Ollama and OpenAI-compatible APIs. You can chat, upload files, and manage settings using this GUI. For detailed information, be sure to check out the Open WebUI documentation on their official website or GitHub. We shall install Open WebUI with bundled Ollama support, i.e. the deployment uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command.
The following command will spin up an Open WebUI container instance, connect it to the Ollama API service, and make the GUI accessible on your server’s IP address on port 8080.
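A minimal sketch, assuming the bundled image from the Open WebUI docs and mapping host port 8080 to the container’s port 8080 (add --gpus=all if you have an NVIDIA GPU with the NVIDIA Container Toolkit installed):
# docker run -d -p 8080:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama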
The following command will verify that the Open WebUI container is running:
# docker ps
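You should see the open-webui container listed with an “Up” status and port 8080 mapped to the host.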
F) Access the Open WebUI Portal, Configure Admin Settings, Upload Documents and Enable RAG (if needed)
Open a web browser and navigate to your server’s IP address on port 8080, e.g. http://a.b.c.d:8080. If you have a firewall enabled, make sure you have created firewall policies to allow access from external IPs. Explore the Open WebUI features, including model selection, chat, file uploads, and admin settings. Begin interacting with Llama2 through the Open WebUI, ask questions, and verify that the model retrieves and generates responses based on your documents.
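For example, if you are running the ufw firewall on Ubuntu, a rule like the following opens the port (a sketch; adjust it to your environment):
# ufw allow 8080/tcp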
To create an account, click on “Sign up”; the first account created will automatically become the admin account. Once logged in, you can change the model in the top left corner from the default “Arena Model” to “Llama2”:
Click on the account icon in the top right corner to access the portal settings.
To create your first knowledge base, click the three-line menu in the top left corner and select “Workspace”. In the workspace menu, select “Knowledge” and click the “+” in the top right corner.
Give your knowledge base a name and a good description, then click “Create Knowledge”.
Select the knowledge base you created, and click on the “+” to the right of the “Search Collection” menu. In the pop-up menu, select “Upload files”, browse to the file you want, and click upload. To interact with the files you uploaded, click on “New Chat” in the top left corner; this will open a new chat with Llama2. Begin your chat with a “#” sign to pop up your knowledge bases, select yours to add it to the chat box, and begin asking questions. You can add more files to the chat and ask questions about your own documents, personal notes, journals, company documents, technical guides, and more, empowering personal knowledge management. This is the true power of private AI when combined with Open WebUI and RAG. Private AI is the future, putting you in control of your data and your AI.
You can also connect your private AI to Obsidian for enhanced note-taking. The Obsidian plugin (BMO Chatbot) for AI-powered note-taking is another game-changer for productivity; it can provide contextual help and generate content within your notes. Another very interesting and powerful application of private AI is local image generation with Stable Diffusion: you can run Stable Diffusion with the Automatic1111 UI, or you can integrate it with Open WebUI for seamless image generation within the chat interface. It’s incredibly fast and easy to use, but that will be a demo for another day. I hope this article has inspired you to build your own private AI; let me know in the comments section by sharing and suggesting new project ideas. Let’s explore the world of AI together.