Build Your Own AI Chatbot with JavaScript

A Beginner’s Guide to Using OpenAI and Gemini Models

Introduction

In the previous session, we explored various prompting techniques. Now, let’s put those ideas into action by building a chatbot from scratch. This chatbot will draw on the prompting skills we picked up there, and you’ll see how easy it is to get started. Any programming language that supports REST API calls can power a chatbot; here, we’ll use JavaScript as our main tool.

To power the chatbot’s intelligence, we’ll call modern LLM APIs, such as OpenAI (paid, recommended) or Gemini (which often has a free tier). All sample code works with both providers, so pick whichever suits you best.

In JavaScript, we can make raw REST fetch requests or take the easier route with the official openai package. It isn’t limited to OpenAI: the same code can be adapted to other providers, such as Gemini and DeepSeek.


What You’ll Need

  • Basic knowledge of JavaScript.

  • An API key from OpenAI (you get $5 in credits to start) or a free Gemini API key.

  • Node.js installed on your machine.


Let’s Write the Code

First, install the OpenAI package we’ll use to communicate with the AI:

npm install openai

Let’s create an agent. The agent is your chatbot’s brain: an object that will send and receive messages.

Since later snippets use top-level await, we’ll load the library with an ES module import (set "type": "module" in your package.json, or use a .mjs file):

import { OpenAI } from "openai";

const openai = new OpenAI({
  // Replace with your actual API key stored in .env
  apiKey: process.env.OPENAI_API_KEY,
});

Now we have created an OpenAI agent, which will communicate with OpenAI models.

Next, let’s send messages to the agent. We’ll create a function that sends the user’s input to the chatbot and returns the response. We’ll use the gpt-4o-mini model, but you can try any model provided by OpenAI.

async function sendMessage(userMessage) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: userMessage }],
  });

  // Return the bot's reply
  return response.choices[0].message.content;
}

That’s it! You can call this function with a message, and it will return the model’s response.

const botReply = await sendMessage("Hello, my name is Rohit.");
console.log(`Bot reply: ${botReply}`);
// Bot reply: Hello, Rohit! How can I assist you today?

We’ll pause here so you can try this code.
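One practical note while you experiment: network calls to the API can fail transiently (rate limits, timeouts). A small retry wrapper is a common pattern; this is an illustrative sketch of our own, not a helper from the openai package:

```javascript
// Illustrative retry helper (not part of the openai SDK):
// retries an async function a few times before giving up.
async function withRetry(fn, attempts = 3, delayMs = 500) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait briefly before the next attempt (skip after the last one)
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// Usage with our chatbot function:
// const reply = await withRetry(() => sendMessage("Hello!"));
```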


Questions You Might Have…

  • Is this all there is to building a chatbot?

  • Why do we use role: "user" in the message?

  • Can the model remember what I said earlier?

  • And many more…

Let’s answer these as we move on.


Understanding Chat Message Roles

When chatting with an LLM, every message carries a role that tells the model who is speaking.

There are four main roles:

  • System
    This message gives instructions on how the model should behave, including tone, personality, or special rules. It should be the first message sent to the model.

  • Developer
    These are instructions from the app developer. They provide system rules or business logic, like function definitions.

  • User
    Messages from the end user. These are inputs or questions.

  • Assistant
    Replies generated by the model.
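Taken together, a conversation is just an array of these role-tagged messages, with the system message first. A minimal sketch (buildMessages is an illustrative helper of our own, not an SDK function):

```javascript
// Illustrative helper: assemble a messages array in the order the API expects.
// The system message comes first, followed by alternating user/assistant turns.
function buildMessages(systemPrompt, turns) {
  return [
    { role: "system", content: systemPrompt },
    ...turns, // e.g. [{ role: "user", ... }, { role: "assistant", ... }]
  ];
}

const messages = buildMessages("You are a friendly assistant named Alexa", [
  { role: "user", content: "Hi!" },
  { role: "assistant", content: "Hello! How can I help?" },
  { role: "user", content: "Tell me a joke." },
]);

console.log(messages[0].role); // "system"
console.log(messages.length);  // 4
```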

Let’s tell the AI to be friendly and give it a name:

async function sendMessage(userMessage) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { 
        role: "system", 
        content: "You are a friendly assistant named Alexa" 
      },
      { role: "user", content: userMessage },
    ],
  });

  return response.choices[0].message.content;
}

console.log(await sendMessage("Hi, I'm Rohit. What's yours?"));
// Hi Rohit! I’m Alexa. How can I assist you today?

We can’t simply put system-level instructions in a user message, because the system role has higher priority. The system prompt tells the model which instructions must never be overridden, even if the user tries. For example, if the system prompt says: “Your name is Alexa, and this cannot be changed,” then the model will follow that no matter what the user says.


Making Your Chatbot Remember

Right now, your chatbot can answer questions — but it has no memory of what was said before.

For example, try this:

console.log(await sendMessage("Hi, I'm Rohit. What's yours?"));
console.log(await sendMessage("Do you remember my name?"));

// Hello, Rohit! I'm Alexa. How can I assist you today?
// I don’t actually have the ability to remember names.

You’ll notice that the second response doesn’t remember your name.

That’s because LLMs are stateless — they don’t automatically keep track of previous messages.

So, to make your chatbot feel smarter, we need to give it memory by saving previous messages and sending them back to the model along with new ones.

// Store conversation history here
const chatHistory = [{ 
    role: "system", 
    content: "You are a friendly assistant named Alexa" 
}];

async function sendMessage(userMessage) {
  // Add user message to history
  chatHistory.push({ role: "user", content: userMessage });

  // Send full history to the model
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: chatHistory,
  });

  // Add assistant reply to history
  const botMessage = response.choices[0].message.content;
  chatHistory.push({ role: "assistant", content: botMessage });

  return botMessage;
}

Now, if you ask the bot if it remembers your name, it will, thanks to the chat history!
We’ll explore memory in more depth in the future.
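One thing to keep in mind: chatHistory grows with every turn, and models have a context-length limit. A simple mitigation is to keep the system prompt plus only the most recent messages. This is an illustrative sketch (trimHistory is our own helper name):

```javascript
// Illustrative sketch: keep the system message plus the last maxTurns messages
// so the history sent to the model stays bounded.
function trimHistory(history, maxTurns = 10) {
  const [systemMessage, ...rest] = history;
  return [systemMessage, ...rest.slice(-maxTurns)];
}

const trimmed = trimHistory(
  [
    { role: "system", content: "You are a friendly assistant named Alexa" },
    { role: "user", content: "one" },
    { role: "assistant", content: "two" },
    { role: "user", content: "three" },
  ],
  2
);

// System message survives; only the 2 most recent turns are kept.
console.log(trimmed.map((m) => m.content)); // [ 'You are a friendly assistant named Alexa', 'two', 'three' ]
```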


Using Gemini Models with the Same Code

If you want to try Google’s Gemini models instead of OpenAI, you only need to make a couple of changes – the rest of your chatbot code stays almost the same!

When creating your agent, swap in your Gemini API key and set Gemini’s OpenAI-compatible base URL:

const openai = new OpenAI({
  // Use gemini api key
  apiKey: process.env.GEMINI_API_KEY,
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

Now, whenever you call the openai.chat.completions.create function, simply use a Gemini model name, such as "gemini-2.5-flash":

async function sendMessage(userMessage) {
  const response = await openai.chat.completions.create({
    model: "gemini-2.5-flash", //  <<<< use gemini models
    messages: [{ role: "user", content: userMessage }],
  });

  // Return the bot's reply
  return response.choices[0].message.content;
}

That’s it! You can swap models as needed (OpenAI or Gemini) just by changing the API key, baseURL, and model name.
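Those three switches can be collected into one small helper. This is an illustrative sketch (the providerConfig function and its defaults are our own convention, not part of any SDK):

```javascript
// Illustrative sketch: pick client settings and a default model per provider.
function providerConfig(provider) {
  if (provider === "gemini") {
    return {
      apiKey: process.env.GEMINI_API_KEY,
      baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
      model: "gemini-2.5-flash",
    };
  }
  // Default to OpenAI: no baseURL override needed.
  return {
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4o-mini",
  };
}

// Usage:
// const { model, ...clientOptions } = providerConfig("gemini");
// const openai = new OpenAI(clientOptions);
```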


Wrapping Up Your Chatbot Journey

Awesome! ✨ You’ve just created your own AI-powered chatbot using JavaScript and modern language models like OpenAI. We covered the basics from setup to writing code that sends messages, handles conversation roles, and even remembers chat history.

Try writing your own chatbot code following the steps we covered, and experiment with different prompts and models.

Happy coding — your AI journey is just beginning!