Build and Deploy Your Personal Jarvis ChatGPT Bot in Python with the ChatGPT API on macOS


The Model’s high-level overview | Designed in Canva by Anish Singh Walia

Supercharge Your Productivity with an AI Terminal Assistant!

If you love shell scripting and spend most of your time in a terminal/shell like me, this post is definitely for you. You will have your own personal shell assistant that can answer almost any question and make you more productive.

INTRODUCTION

Are you ready to harness the limitless potential of ChatGPT? Imagine having an AI-powered terminal/Shell assistant that provides instant support, automates complex tasks and saves valuable time. Need a tricky awk or Linux/shell command? Your assistant has got you covered!

With the gpt-3.5-turbo model, your personal ChatGPT bot not only draws on the data you provide as context but also leverages the vast knowledge the model was trained on. The possibilities are endless!

NOTE: To add more value, and as a USP of my blog, I have designed and attached a cheat sheet/carousel of the model discussed here at the end of this blog post for you to use and share.

I’d love to share the high-quality PDF version of my blog’s cheatsheets with you — for free! Since Medium doesn’t allow uploading PDFs, I post them daily on my LinkedIn page. Let’s connect and stay updated @ https://www.linkedin.com/in/anish-singh-walia-924529103/

Also, I’ll share this month’s bonus tips: the best productivity tools that are cheap, effective, and game changers, which I personally use, prefer, and insist you all try. So do check them out and use them.

Here is the Bonus tip for you all:

1) NOTION:

Bonus Tip 1: One great AI all-in-one Productivity/Task management tool I recently started using is Notion. Over the past few months, Notion has become famous and my absolute favorite.

If you’re like me, juggling work, daily tasks, notes, and projects is tough. Multiple tabs for email, Slack, and Google Docs make it overwhelming. I personally use Notion AI, which streamlines everything in one place. It’s a game-changer, and you won’t regret using it.

I’ve been using its PRO version for a while now, and I must say, it’s been a complete game-changer for me. With almost every online co-working tool integration you can think of, it makes my daily work routine a breeze.

Plus, the pricing is unbeatable for the tonnes of features it provides compared to all other all-in-one AI productivity tools I have used. I have taken up the annual subscription for a mere $8/month. Another awesome tool that is literally dirt cheap.

I love its Web Clipper and use its Mac app, and I also use Notion on my phone. You can download the desktop app from here.

Do check out this cool post about Notion to know more about this brilliant platform.

Best all-in-one AI Productivity tool for this month

2) QUILLBOT:

Bonus Tip 2: One great AI productivity writing tool I recently started using for day-to-day writing and tasks such as plagiarism checking, grammar checking, Quillbot-Flow, paraphrasing, summarising, and translating is QuillBot.

I wanted to try something similar to, and cheaper than, Grammarly.

I took up its yearly premium for around $4/month (58% off). The price is literally dirt cheap compared to other writing and productivity AI tools I have used in the past.

Personally, its UI and UX are very simple and easy to use. So I just wanted to share this awesome, productive tool with you all. Do check it out and use it in your day-to-day writing tasks.

https://try.quillbot.com/

Best Productivity Writing tool for this month

In just a few lines of Python code, you can have an assistant who answers any question, provides solutions, and gives personalized advice from your terminal or shell. It’s like having your very own AI sidekick!

Boost your productivity, streamline your workflows, and elevate your AI experience. So follow along and keep reading till the end.

Shell/Terminal

OpenAI’s ChatGPT API, fueled by the cutting-edge gpt-3.5-turbo model, opens doors to this transformative technology. In this tutorial, I’ll guide you through the exhilarating process of building and deploying your personal ChatGPT bot on your MacBook/Linux machine. It uses the data you provide as context, along with the power of the immense amount of data gpt-3.5-turbo was trained on, all in just a few lines of Python code. It's like having your own assistant.

By the end of this post, you should be able to ask questions like the ones below, which could make your life a lot more productive and effective if you are someone who spends most of the day in a shell/terminal. So follow along till the end.

Your personal bash assistant to make you more productive

Table of Contents:

  1. Pre-requisites
  2. Setting up the Environment
  3. Obtaining OpenAI API access
  4. Create a /data Directory in the project’s current working directory
  5. Building the ChatGPT Bot
  6. Deploying the ChatGPT Bot
  7. Customizing and Enhancing Your ChatGPT Bot
  8. Conclusion

Model’s architecture and a high-level overview:

The Model’s high-level overview | Designed in Canva by Anish Singh Walia

Let’s dive in!

AI is the new revolution

PRE-REQUISITES

Before diving into the implementation, ensure you have the following:

  • A Mac/Linux system (this tutorial is specifically tailored for macOS users)
  • Python 3.7 or higher installed
  • Basic knowledge of Python programming
  • An OpenAI account with access to the ChatGPT API

Step 1: Setting Up the Environment

We need to set up a Python environment and install the necessary libraries. Open a terminal and create a new directory for your project.

$ mkdir chatgpt-bot
$ cd chatgpt-bot
$

Here, we will use three crucial libraries to implement this: openai, textract, and glob.

OpenAI is a leading artificial intelligence research organization that has developed the ChatGPT API, which allows us to interact with the powerful ChatGPT model. With the OpenAI API, we can send prompts and receive responses from the ChatGPT model, enabling us to create conversational chatbots.

You can learn more about OpenAI and its offerings here.
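To get a feel for the API before building the full bot, here is a minimal sketch of a single chat completion call. It assumes the pre-1.0 openai Python package (the same interface used by the bot later in this post) and that OPENAI_API_KEY is already exported in your environment; the prompt text is just an illustration.

import openai

# the openai package picks up OPENAI_API_KEY from the environment automatically
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful shell assistant."},
        {"role": "user", "content": "Show me an awk command that prints the second column of a CSV file."},
    ],
)
print(response.choices[0].message.content)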

The second library, textract, provides text extraction capabilities from various file formats. It supports a wide range of file formats, including but not limited to:

  1. Text-based formats: TXT, CSV, JSON, XML, HTML, Markdown, and LaTeX.
  2. Document formats: DOC, DOCX, XLS, XLSX, PPT, PPTX, ODT, and ODS.
  3. eBook formats: EPUB, MOBI, AZW, and FB2.
  4. Image formats with embedded text: JPG, PNG, BMP, GIF, TIFF, and PDF (both searchable and scanned).
  5. Programming source code files: Python, C, C++, Java, JavaScript, PHP, Ruby, and more.

The glob package in Python is a built-in module that provides a convenient way to search for files and directories using pattern matching. It allows you to find files matching a specified pattern, such as all files with a particular extension or specific naming patterns.
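To see how these two libraries fit together, here is a small sketch, assuming a ./data directory that mixes plain-text files with, say, a .docx file:

import glob
import os

import textract

# find every file (any extension) inside ./data
for path in glob.glob(os.path.join("./data", "*.*")):
    extension = os.path.splitext(path)[1].lower()
    if extension in (".pdf", ".docx", ".xlsx", ".xls"):
        # textract converts binary formats to plain text (returned as bytes), so decode it
        print(textract.process(path).decode("utf-8")[:200])
    else:
        # plain-text formats can be read directly
        with open(path, "r", encoding="utf-8") as f:
            print(f.read()[:200])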

Next, let’s install the required Python libraries:

$ pip install openai textract

(glob ships with Python's standard library, so it doesn't need to be installed separately.)

Step 2: Obtaining OpenAI API Access

To use the ChatGPT API, you’ll need an OpenAI API key. If you don’t have one, sign in to your OpenAI account and generate an API key from the dashboard.

Once you have the API key, save it securely as an environment variable in your terminal. An environment variable is a variable that is set on your operating system, rather than within your application.

Securing your API keys is paramount, and the post below explains various ways to secure your OpenAI API keys.

To make the OPENAI_API_KEY environment variable persistent on your Linux/macOS machine, use:

1. Run the following command in your terminal, replacing yourkey with your actual API key.

echo "export OPENAI_API_KEY='yourkey'" >> ~/.zshrc

2. Update the shell with the new variable:

source ~/.zshrc

3. Confirm that you have set your environment variable using the following command from the terminal.

echo $OPENAI_API_KEY
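
You can also confirm from Python that the key is visible to your scripts; a quick sanity check (assuming the variable was exported as above) looks like this:

import os

key = os.getenv("OPENAI_API_KEY")
print("API key found" if key else "OPENAI_API_KEY is not set")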

Step 3: Create a /data Directory in the project's current working directory

Add all your personal files there, containing anything from .txt and .csv to .docx files. The model will use the data inside this directory to answer your personal questions based on the data you store here. For example, I have created a .txt file inside this directory and added my resume:

The /data directory, which contains your data
My data inside the My_data.txt file
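
If you want to set this up from the terminal, something like the following works (the resume file path is just a placeholder; copy in whatever .txt, .csv, or .docx files you want the bot to know about):

$ mkdir data
$ cp ~/Documents/my_resume.txt data/My_data.txt
$ ls data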

Step 4: Building the ChatGPT Bot

Now, let’s write the code for our ChatGPT bot. Create a new Python file, such as chatGPTbot.py, in the current working directory of your project, and write the following code logic. I urge readers to understand the code and follow along; if you have any doubts, ask in the comments section.

import os
import glob
import openai
import textract


class Chatbot:
    def __init__(self):
        self.openai_api_key = os.getenv("OPENAI_API_KEY")
        openai.api_key = self.openai_api_key  # explicitly hand the key to the openai library
        self.chat_history = []  # chat_history list to keep the chat in the model's memory

    def append_to_chat_history(self, message):  # appends a message to the chat history stored in the chat_history list
        self.chat_history.append(message)

    def read_personal_file(self, file_path):
        try:
            text = textract.process(file_path).decode("utf-8")  # convert the content of the file to plain text
            return text
        except Exception as e:
            print(f"Error reading file {file_path}: {e}")
            return ""

    def collect_user_data(self):  # collect local personal data to feed the model
        data_directory = "./data"
        data_files = glob.glob(os.path.join(data_directory, "*.*"))
        # glob.glob(os.path.join(data_directory, "*.*")) retrieves a list of file paths that match
        # a specified pattern within the given directory; the pattern "*.*" matches all files with any extension.

        user_data = ""
        for file in data_files:
            file_extension = os.path.splitext(file)[1].lower()
            if file_extension in (".pdf", ".docx", ".xlsx", ".xls"):  # binary formats that need textract
                user_data += self.read_personal_file(file)  # convert the content of the file to plain text and append
            else:
                # the "with" statement simplifies exception handling and resource management when working with files;
                # open() is called with the file name and mode ('r' for read), the file object is assigned to f,
                # and its contents are read and appended to user_data
                with open(file, "r", encoding="utf-8") as f:
                    user_data += f.read() + "\n"
        return user_data

    def create_chat_response(self, message):
        self.append_to_chat_history(message)  # append the user's message to the chat history

        user_data = self.collect_user_data()
        messages = [
            {"role": "system", "content": "You are the most helpful assistant."},  # high-level instructions or context-setting message
            {"role": "user", "content": message},  # the "user" role represents the messages or queries from the user
            {"role": "assistant", "content": message},  # the "assistant" role represents the responses generated by the ChatGPT model
        ]

        if user_data:
            messages.append({"role": "user", "content": user_data})

        # the main call that runs the ChatGPT model with the given parameters
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            temperature=0.7,
            max_tokens=256,
            top_p=0.9,
            n=2,
            stop=None,
            frequency_penalty=0.9,
            presence_penalty=0.9
        )

        # add the model's response to the chat history to make the bot more interactive and intelligent
        self.append_to_chat_history(response.choices[0].message.content.strip())
        return response.choices[0].message.content.strip()

    def start_chatting(self):
        while True:
            user_input = input("User: ")
            if user_input.lower() == "exit":
                print("Jarvis: Goodbye boss, have a wonderful day ahead!")
                break
            bot_response = self.create_chat_response(user_input)
            print("Chatbot:", bot_response)


# Create an instance of the Chatbot class and start the conversation
chatbot = Chatbot()
chatbot.start_chatting()

The functions defined above do the following:

First, in a nutshell, the model’s parameters do the following:

Temperature: Controls the randomness of the responses. Higher values (e.g., 1.0) make the output more diverse, while lower values (e.g., 0.2) make it more focused and deterministic.

Max Tokens: Limits the length of the response generated by the model.

Top P: Specifies the cumulative probability threshold for choosing the next token. Lower values (e.g., 0.5) make the responses more focused, while higher values (e.g., 0.9) allow more diverse token choices.

N: Determines the number of different responses generated by the model, which helps in exploring different possibilities.

Stop: Allows us to specify a stopping phrase to indicate the end of the response.

Frequency Penalty: Penalizes tokens in proportion to how often they have already appeared, reducing the model’s tendency to repeat itself.

Presence Penalty: Penalizes tokens that have already appeared at all, nudging the model toward tokens that haven’t been mentioned yet in the conversation (i.e., new topics).

Find more about fine-tuning these parameters here.
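
For example, if you want terser, more deterministic answers (handy when you mostly ask for one-line shell commands), you might lower the temperature and cap the tokens. This is just an illustrative variation of the call used in the bot above, not a recommended default:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0.2,   # more focused, less random
    max_tokens=128,    # shorter answers
    top_p=1.0,
    n=1,               # a single response is enough for a terminal assistant
    stop=None,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)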

append_to_chat_history(message): This function appends the user's message to the chat history stored in the chat_history list.

read_personal_file(file_path): This function utilizes the textract library to extract text from personal files. It attempts to decode the extracted text using UTF-8 encoding. An error message is displayed if any errors occur during the extraction process.

collect_user_data(): This function collects the user's data stored in the /data directory, placed inside the current working directory. It iterates through the files in the directory, determines their file types, and uses the appropriate method to extract text. It returns the combined user data as a string. The call glob.glob(os.path.join(data_directory, "*.*")) retrieves a list of file paths that match a specified pattern within the given directory; in this case, the pattern *.* matches all files with any extension.

create_chat_response(message): This function constructs the chat response using the OpenAI ChatCompletion API. It appends the user's message and the collected user data (if any) to the message list. The API call is made with the provided messages, and the response is stored in the response variable. The function then appends the response to the chat history and returns it.

More details about the OpenAI chat completion API, the model’s parameters, and fine-tuning those parameters can be found here, in another blog post I wrote recently.

start_chatting(): This function initiates an interactive chat session with the user. It prompts the user for input, generates the bot's response using create_chat_response(), and prints the response. The conversation continues until the user enters "exit" to quit.

In the end, the while True loop continuously prompts the user for input. To exit the chatbot, type "exit".

More details about the ChatGPT-3.5-turbo model and API docs can be found below:

Step 5: Deploying the ChatGPT Bot

To run the program, you must open a terminal and execute the Python file. In your terminal, run the following command:

$ python chatGPTbot.py        

Or

$ python3 chatGPTbot.py
The model answers based on the data I stored inside the /data directory

Voila! Your personal ChatGPT bot is now ready to chat. Start interacting with it by entering messages; the bot will respond accordingly. When you’re finished, type “exit” to end the conversation.

Step 6: Customizing and Enhancing Your ChatGPT Bot

  1. Context and History: You can extend the chat history in the create_chat_response() function to include previous user-bot interactions. This allows the bot to have a better understanding of the conversation history (see the sketch below).
  2. System Prompts: Experiment with different system prompts to influence the bot's behavior. For example, you can try prompts like “Translate the following English text to French:” or “Help me with the following Python code:”.
  3. User Instructions: Refine your user instructions to be more explicit and precise. Specify the desired format or ask the bot to think step-by-step before answering.
  4. Experiment with different parameters, such as temperature and max tokens, to customize the behavior of your chatbot.

More details about the above parameters and what values to set can be found here in my other blog post.
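
As a concrete illustration of points 1 and 2 above, here is one possible sketch (not the only way to do it) of sending the stored chat_history back to the model and swapping in a task-specific system prompt inside create_chat_response(). It assumes chat_history alternates between user and assistant messages:

# inside create_chat_response(), build the messages list from the chat history
messages = [{"role": "system", "content": "Help me with the following Python code:"}]

# replay the earlier turns so the model remembers the conversation
# (everything stored before the current message, which was already appended)
for i, past_message in enumerate(self.chat_history[:-1]):
    role = "user" if i % 2 == 0 else "assistant"
    messages.append({"role": role, "content": past_message})

# finally, append the current user message
messages.append({"role": "user", "content": message})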

Conclusion:

Congratulations! You have learned how to create and deploy a powerful ChatGPT bot on your Mac/Linux machine using Python. The provided code allows your bot to consider and utilize personal user data from various file formats, enabling a more personalized user experience.

Feel free to customize further and enhance your bot’s capabilities. Now it’s time to unleash your creativity and build amazing chatbots. Do share in the comments section your use case and how you are using this personal ChatGPT bot.

NOTE:

Remember to handle personal data responsibly and comply with privacy regulations, and keep your OpenAI API key secure and use it wisely.

You can integrate it with other platforms, or build a web-based chatbot. With the versatility of ChatGPT and the simplicity of Python, the possibilities are endless.

Happy coding and happy chatting with your very own ChatGPT bot!

Please Subscribe and Follow to get Free access to my newsletter and keep yourself updated on the latest AI and ChatGPT trends and technologies to make your lives easier and more productive, save money, and be effective at whatever you do.

Your support motivates me to keep researching, designing cheatsheets, and writing about such topics.

You can find the GitHub repository for this project here on my GitHub profile. Follow me on Medium and GitHub to show some support and for some fantastic AI and ChatGPT-related content coming up.

Comment here if you have tried anything like this before and how you use ChatGPT to be more productive.

Connect with me on my social profiles like LinkedIn & Github. And let’s collaborate.

(Note: The code provided in this tutorial is just a starting point. Implementing appropriate error handling, security measures, and moderation systems is essential when deploying chatbots in real-world scenarios.)
