Getting Started with OpenAI Chat Completions


Tags: OpenAI, Prompt, Chat, AI, API, Chatbot

2024-10-13

I’ve been exploring the world of AI-powered chat applications. If you’ve been curious about building a conversational chatbot using OpenAI’s Chat Completions API, this post is for you.

I’ll walk you through how to set up your environment, configure the API, and implement chat completions in a variety of ways. By the end, you’ll have a solid foundation to build more complex, responsive, and context-aware AI chatbots.


First things first, let’s initialize a new Node.js project and install the dependencies we’ll need.

```shell
npm init -y
npm install ai @ai-sdk/openai
```

This creates a package.json file and installs both the ai library (the Vercel AI SDK) and the @ai-sdk/openai provider package that connects it to OpenAI's models.
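One note on module format: the examples below use ES module import syntax, so your package.json should include "type": "module" (or your script files should use the .mjs extension). A minimal package.json might look like this — the name and version here are just placeholders, and npm fills in the dependencies section when you run the install command above:

```json
{
  "name": "openai-chat-demo",
  "version": "1.0.0",
  "type": "module"
}
```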

Next, you’ll need to provide your OpenAI API key so the SDK can communicate with OpenAI’s endpoint. Create a .env file in your project root directory and include the following:

```shell
OPENAI_API_KEY=your_api_key_here
```

Make sure to replace your_api_key_here with your actual key from the OpenAI dashboard. Keep this key secret—don’t commit it to version control!

Let’s start simple. Below is a script that prompts OpenAI with a question and returns the AI-generated response:

```javascript
import { generateText } from "ai"
import { openai } from "@ai-sdk/openai"
import dotenv from "dotenv"

dotenv.config()

async function chatCompletion() {
  try {
    const { text } = await generateText({
      model: openai("gpt-4o"),
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" }
      ]
    })
    console.log("AI response:", text)
  } catch (error) {
    console.error("Error:", error)
  }
}

chatCompletion()
```

When you run node chat.js (assuming you name the file chat.js), you should see the AI’s response in the console.

One of the most exciting features of Chat Completions is support for streaming results. This allows you to display the AI’s output as it’s generated:

```javascript
import { streamText } from "ai"
import { openai } from "@ai-sdk/openai"
import dotenv from "dotenv"

dotenv.config()

async function streamingChatCompletion() {
  try {
    const stream = streamText({
      model: openai("gpt-4o"),
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Tell me a short story about a ninja." }
      ],
      onChunk: (chunk) => {
        if (chunk.type === "text-delta") {
          process.stdout.write(chunk.text)
        }
      }
    })

    // Wait until the stream completes before exiting
    await stream.text
    console.log("\nStreaming completed.")
  } catch (error) {
    console.error("Error:", error)
  }
}

streamingChatCompletion()
```

Using streamText, chunks of text are sent in real-time, making for a more interactive conversation flow. This is particularly valuable if you’re building a live chat UI.
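Besides the onChunk callback, streamText also exposes the output as an async iterable (result.textStream), which you can consume with a for await loop. The incremental-consumption pattern itself is easy to study in isolation; here is a minimal sketch that swaps in a plain async generator for the real stream, so it runs with no API key or network access:

```javascript
// Stand-in for a real text stream: an async generator yielding text deltas.
// (In the actual SDK, result.textStream plays this role.)
async function* fakeTextStream(chunks) {
  for (const chunk of chunks) {
    yield chunk
  }
}

// Consume the stream chunk by chunk, printing each delta as it arrives
// and accumulating the full text.
async function printStream() {
  let full = ""
  for await (const delta of fakeTextStream(["Once ", "upon ", "a time."])) {
    process.stdout.write(delta)
    full += delta
  }
  return full
}
```

The same loop shape works unchanged against the SDK's real stream: replace fakeTextStream(...) with the textStream from a streamText result.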

If you’re building a real chatbot, you’ll want to preserve the chat history so the AI can build context from previous messages. Here’s a small script that demonstrates how to handle multiple turns in a conversation:

```javascript
import { generateText } from "ai"
import { openai } from "@ai-sdk/openai"
import dotenv from "dotenv"

dotenv.config()

async function conversationSimulation() {
  const messages = [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hi, I'm planning a trip to Paris." }
  ]

  // Canned user turns to simulate a back-and-forth conversation
  const userResponses = [
    "What are some must-visit attractions?",
    "How many days should I plan for my trip?",
    "Thank you for your help!"
  ]

  try {
    for (let i = 0; i < 3; i++) {
      const { text } = await generateText({
        model: openai("gpt-4o"),
        messages: messages
      })
      console.log("AI:", text)
      messages.push({ role: "assistant", content: text })

      // Simulate the user's next message
      console.log("User:", userResponses[i])
      messages.push({ role: "user", content: userResponses[i] })
    }
  } catch (error) {
    console.error("Error:", error)
  }
}

conversationSimulation()
```

Notice how I push the AI’s responses into the messages array as assistant role messages, so the conversation context is preserved across multiple interactions.
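One practical consequence: the messages array grows with every turn, and every message you send counts against your token budget. A simple way to cap this is to keep the system message and only the most recent turns. Here's a small, hypothetical helper to sketch the idea (it isn't part of the AI SDK, and the cutoff of six messages is arbitrary):

```javascript
// Keep the system message(s) plus the most recent `maxTurns` non-system
// messages, dropping older turns so the request stays small.
function trimHistory(messages, maxTurns = 6) {
  const system = messages.filter((m) => m.role === "system")
  const rest = messages.filter((m) => m.role !== "system")
  return [...system, ...rest.slice(-maxTurns)]
}
```

You'd call trimHistory(messages) just before each generateText call; the full, untrimmed history can still be kept elsewhere for display purposes.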


  • Protect Your API Key: Always use a .env file or a secure environment variable manager. Never commit your secrets to GitHub or any public repository.
  • Model Selection: Different models (e.g. gpt-3.5-turbo, gpt-4) have different capabilities and costs. Experiment to find the right fit for your application’s needs and budget.
  • Prompt Engineering: The system message is key to guiding the AI’s behavior. Experiment with different system messages to shape the personality, tone, or format of the responses.
  • Monitor Token Usage: Each request has a token cost. Make sure to monitor usage via your OpenAI dashboard to avoid unexpected charges.
  • Graceful Error Handling: Always wrap your calls in try/catch blocks and handle rate-limit or API errors gracefully. You may also want to implement retries with backoff for a more robust production setup.

There are also a few common pitfalls worth watching out for:

  • Insufficient System Context: If you only provide user messages, the AI may produce off-topic or inconsistent answers. A carefully crafted system message helps keep the conversation on track.
  • Forgetting to Append the Assistant’s Response: If you don’t add the AI’s latest response to the conversation history, it won’t have context for subsequent messages.
  • Mismatched Dependencies: Ensure your ai and @ai-sdk/openai versions are compatible. Updating one without the other can cause runtime errors.
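To make the retry-with-backoff suggestion above concrete, here's a small, hypothetical wrapper you could put around any SDK call such as generateText. It isn't part of the AI SDK, and the attempt count and delays are illustrative defaults, not recommendations:

```javascript
// Retry `fn` up to `attempts` times, doubling the wait between attempts
// (exponential backoff). Rethrows the last error if every attempt fails.
async function withRetries(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (error) {
      lastError = error
      const delay = baseDelayMs * 2 ** i // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
  throw lastError
}
```

Usage would look like `withRetries(() => generateText({ ... }))`. A production version might also inspect the error and only retry on rate-limit or transient network failures rather than on every error.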

Congratulations! You’ve taken your first steps into the world of AI chatbots with OpenAI Chat Completions. You now know how to set up your environment, configure your API key, generate basic responses, stream partial outputs, and maintain multi-turn context.

With these fundamentals, you can experiment further by adding user interfaces, integrating your chatbot into a web or mobile application, or applying more advanced features like role-based instructions. If you get stuck or want to dig deeper, be sure to check out the OpenAI documentation for more details.

Thanks for joining me on this journey, and I hope you have fun exploring what’s possible with Chat Completions. If you have any questions or suggestions, feel free to reach out. Happy coding!