Eliza-GPT: The Classic ELIZA Chatbot Running On OpenAI's Chat Completions API

ELIZA is possibly the first chatbot ever created, dating back to 1966. It was written by Joseph Weizenbaum as an early experiment in natural language processing (NLP). ELIZA is able to hold a conversation in English with a human, and is programmed through a set of pattern-matching rules to respond to the user much like a psychotherapist would.

The OpenAI Chat Completions API is a widely used API to chat with Large Language Models (LLMs) such as ChatGPT, and has become a sort of standard for turn-based conversational services.

Could 1960s ELIZA be adapted to work as a web service that any application designed as an OpenAI client can use? The answer is Yes! Why would you do that? Keep reading to find out...

What is ELIZA?

In case you have never seen the types of conversations that ELIZA can have with a person, here is a short excerpt from one:

Human: Men are all alike.
ELIZA: In what way?
Human: They are always bugging us for something or other.
ELIZA: Can you think of an example?
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?

In 1966 this was mind-blowing. But what's the point of reviving this ancient tech in the age of ChatGPT?

First of all, I thought it would be a fun exercise to put ELIZA behind a modern chat API that is compatible with OpenAI applications, and in the process become more familiar with this popular API.

This can be useful as well. Lots of people are writing applications that integrate with LLMs these days, so having a tiny and free service that can be a stand-in for one makes sense to me. I imagine there are a lot of situations in which you care more about making the integration with the chatbot work than the quality of the chatbot's responses, and in those cases you could run your own compatible service powered by ELIZA and get decent responses without having to pay OpenAI or another of the "Big-AI" companies. Automated testing comes to mind as something that could benefit from a "poor man's LLM" solution that is self-hosted and free.

Introducing Eliza-GPT

Let me show you how you can set up your own private ELIZA service with a little project I created called Eliza-GPT.

Start by installing it:

$ pip install eliza-gpt

(Note that I'm not showing here how to create a Python virtual environment, but I strongly recommend that you do that and install the package in it.)

Now you are ready to start the chatbot service:

$ eliza-gpt
Eliza-GPT is running!
Set base_url="http://127.0.0.1:5005/v1" in your OpenAI client to connect.

Eliza-GPT exports an API that is fairly similar to the OpenAI Chat Completions endpoint. To use it, configure any application designed for OpenAI with the URL shown by the eliza-gpt command as its base URL, so that all requests are sent there instead of to the OpenAI servers. And then you'll have a completely free and local chatbot that consumes negligible resources and yet returns moderately amusing responses.

If you need an example chat application to get you started, here is a very simple one that uses the official Python client from OpenAI. Copy the following code to a file named chat.py.

from openai import OpenAI

# Point the client at the local Eliza-GPT service instead of OpenAI
openai_client = OpenAI(base_url='http://127.0.0.1:5005/v1', api_key='x')
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]

while True:
    # Read the user's input and add it to the conversation history
    user_message = input("You: ")
    messages.append({"role": "user", "content": user_message})

    # Request a completion for the full conversation so far
    completion = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    response = completion.choices[0].message.content
    print('Eliza:', response)

    # Add the response to the history so the conversation keeps its context
    messages.append({"role": "assistant", "content": response})

To run this application, first install the OpenAI client for Python:

$ pip install openai

Now run the chat client with:

$ python chat.py

You can now chat with ELIZA. And if you remove the base_url argument and set an OpenAI API key, the same application works with OpenAI.
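
For example, if your OpenAI API key is stored in the OPENAI_API_KEY environment variable, which the official client picks up by default, the client line can be reduced to something like this:

# The client reads OPENAI_API_KEY from the environment and talks to OpenAI
openai_client = OpenAI()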

A few things to note:

  • Eliza-GPT runs without authentication by default, but the OpenAI Python client always requires an API key to be set. The example above uses 'x' as the API key just to keep the OpenAI client library happy. The eliza-gpt command has an option to add an API key, if you prefer to use authentication.
  • The model argument is required by OpenAI. Eliza-GPT does not care what model is requested, so you can use any model, or actually any string that you like.
  • Eliza-GPT is extremely fast when compared to a real LLM. With default options, a random delay is introduced in each request to make it look more like a real interaction with a model. There is an option to change the delay, or to remove it altogether, which can be useful if you want to use this service in unit tests.
  • The seed argument supported by OpenAI can be used to make Eliza-GPT deterministic in its responses. This is another little helper to make unit tests more robust and reliable (the sketch after this list shows it in use).
  • The chat.py example shown above retrieves the chatbot's response as a single unit. OpenAI (and Eliza-GPT) also support streaming the response, which is nice because it allows your application to show chunks of text as they are produced. The GitHub repository has a more elaborate client example that shows how to work with streamed responses, and there is a short sketch after this list as well.
  • Do you work with Langchain? The GitHub repository has an example for it as well, and a rough sketch also appears below.
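
Here is a minimal sketch of streaming, based on the official Python client, in case you want to experiment without digging into the repository's example. The message and the seed value of 42 are just placeholders I made up for illustration:

from openai import OpenAI

openai_client = OpenAI(base_url='http://127.0.0.1:5005/v1', api_key='x')

# Ask for a streamed response; the seed makes repeated runs deterministic
stream = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "I feel sad today."}],
    seed=42,
    stream=True,
)

print('Eliza: ', end='')
for chunk in stream:
    # Each chunk carries a small piece of the response in choices[0].delta
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end='', flush=True)
print()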
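
And if Langchain is your thing, pointing its OpenAI integration at the local service should look roughly like the sketch below. Note that this is my own guess based on the langchain-openai package rather than the repository's example, which remains the authoritative reference:

from langchain_openai import ChatOpenAI

# Point LangChain's OpenAI chat model at the local Eliza-GPT service
llm = ChatOpenAI(
    base_url='http://127.0.0.1:5005/v1',
    api_key='x',
    model='gpt-3.5-turbo',
)
print(llm.invoke('My boyfriend made me come here.').content)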

I have tried to make Eliza-GPT's endpoint as close as possible to the real chat endpoint from OpenAI. The GitHub project has some information about the internals of the implementation and what parts of the OpenAI API are and are not supported. If you give Eliza-GPT a try and find that anything does not work, feel free to open an issue on the GitHub repository and I'll look into it.
