Fastest way to integrate GPT with Slack using GCP Cloud Run

Masato Naka
7 min read · Dec 30, 2023


Overview

Although this topic isn’t new, I believe it’s still valuable to share how to integrate GPT with Slack with a step-by-step guide.

Architecture

Cloud Run + Slack App + OpenAI API

Steps

1. Create a Slack app

Go to https://api.slack.com/apps/ and create a Slack app.

We can choose From scratch and decide the app name.

The next thing we need to do is grant the necessary permissions to the bot. Click OAuth & Permissions in the left sidebar.

Then go to the Scopes section and add the necessary scopes:

  1. app_mentions:read: to receive app_mention events when someone mentions the app. We’ll use this event as the trigger.
  2. chat:write: to send messages to Slack.
  3. reactions:write: to add a reaction to the message that mentions the Slack app.

You can add more scopes based on your requirements.

Now we’ll set up event subscriptions. Click Event Subscriptions in the left sidebar and enable events.

Here we need to provide a request URL. We’ll come back to this later, after deploying our simple bot application in the next section.

In this post, we’re using event subscriptions with a request URL, but if you use Socket Mode, you don’t need a request endpoint at all. Socket Mode is more convenient for local development because you can check the behavior by running the app on your own machine, while event subscriptions with an HTTP endpoint require deploying the app somewhere reachable via a request URL.

The reason for using an HTTP endpoint here is that a Socket Mode app needs to be running all the time, which incurs server costs constantly. A Socket Mode app is more appropriate and cost-effective for a large-scale organization where the app is used very frequently and needs to be always responsive.

With an HTTP endpoint, on the other hand, we can minimize server costs by using Cloud Run’s autoscaling, which only charges you while the application is actually handling requests.

2. Create an app

Let’s create a simple Python app with the Flask framework that receives Slack events, calls the OpenAI API, and returns the response.

You can get the whole code here: https://github.com/nakamasato/slack-gpt

First of all, we need to install the required libraries.

pip install slack-sdk gunicorn flask openai langchain

If you clone the app, you can install them with either poetry install or pip install -r requirements.txt.
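
For reference, a typical clone-and-install flow might look like the sketch below (it assumes requirements.txt sits at the root of the linked repository):

git clone https://github.com/nakamasato/slack-gpt.git
cd slack-gpt
pip install -r requirements.txt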

Now let’s take a look at main.py

import os

from flask import Flask, jsonify, request
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI
from langchain.globals import set_llm_cache
from langchain.schema import HumanMessage
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
from slack_sdk.signature import SignatureVerifier

# Read from environment variables
SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
SIGNING_SECRET = os.environ["SIGNING_SECRET"]
DEDICATED_CHANNELS = os.getenv("DEDICATED_CHANNELS", "").split(
    ","
)  # Also send the reply to the channel itself in these channels
VERIFIER = SignatureVerifier(SIGNING_SECRET)
GPT_MODEL = os.getenv("GPT_MODEL", "gpt-3.5-turbo")

# Clients
app = Flask(__name__)
set_llm_cache(InMemoryCache())
client = WebClient(token=SLACK_BOT_TOKEN)
chat = ChatOpenAI(
    model=GPT_MODEL,
    streaming=False,
    verbose=True,
)


@app.route("/slack/events", methods=["POST"])
def slack_events():
    # Check if the request is from Slack
    if not VERIFIER.is_valid_request(request.data, request.headers):
        return "Invalid request", 403

    # Skip Slack retries
    if request.headers.get("x-slack-retry-num"):
        print("retry called")
        return {"statusCode": 200}

    # Get event data
    data = request.get_json()
    event = data.get("event", {})

    # For url_verification event
    if data["type"] == "url_verification":
        return jsonify({"challenge": data["challenge"]})

    channel_id = event.get("channel")

    # For app_mention event
    if event.get("type") == "app_mention":
        user_id = event.get("user")
        text = event.get("text")
        print(f"{channel_id=}, {user_id=}")

        # Reply message
        try:
            response = client.reactions_add(
                channel=channel_id,
                timestamp=event["ts"],
                name="speech_balloon",
            )
            print(response)
            answer = ask_ai(text)
            client.chat_postMessage(
                channel=channel_id,
                text=f"{answer}",
                thread_ts=event["ts"],  # Reply in thread
                reply_broadcast=channel_id in DEDICATED_CHANNELS,
            )
            response = client.reactions_add(
                channel=channel_id,
                timestamp=event["ts"],
                name="white_check_mark",
            )
        except SlackApiError:
            response = client.reactions_add(
                channel=channel_id,
                timestamp=event["ts"],
                name="man-bowing",
            )
            client.chat_postMessage(
                channel=channel_id,
                text="Sorry. Something's wrong.",
                thread_ts=event["ts"],  # Reply in thread
                reply_broadcast=channel_id in DEDICATED_CHANNELS,
            )

    return jsonify({"success": True})


def ask_ai(text):
    result = chat([HumanMessage(content=text)])
    return result.content


if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

We read the following from environment variables (you can export the same variables for a quick local run, as sketched after the list):

  1. SLACK_BOT_TOKEN (required): the bot token of the Slack app created above.
  2. SIGNING_SECRET (required): the signing secret of the Slack app created above.
  3. OPENAI_ORGANIZATION (required): OpenAI organization.
  4. OPENAI_API_KEY (required): OpenAI API key.
  5. GPT_MODEL (optional): GPT model name (default: gpt-3.5-turbo).
  6. DEDICATED_CHANNELS (optional): Slack channels in which the app also sends the reply to the channel itself (not only in the thread).
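
Before deploying, you can run the app locally to check that it starts up. Here’s a minimal sketch (the values below are placeholders; gunicorn is already included in the dependencies installed earlier):

# Placeholders: replace with your real values
export SLACK_BOT_TOKEN=xoxb-xxx
export SIGNING_SECRET=xxx
export OPENAI_ORGANIZATION=org-xxx
export OPENAI_API_KEY=sk-xxx

# Either run the Flask dev server ...
python main.py

# ... or serve the same app with gunicorn
gunicorn --bind :8080 main:app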

The main logic comes after if event.get("type") == "app_mention"::

  1. Add a 💬 reaction to the mentioning message so the user can quickly see that it’s being processed.
  2. Send the text to GPT in the ask_ai function, to which we can add more logic later depending on the use case.
  3. Return the answer in the thread (and also post it to the channel for dedicated channels), then add a ✅ reaction to the original message.
  4. If the process fails, send an error message in the thread and add a 🙇 reaction to the original message.
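
If you want to exercise the endpoint before deploying, you can forge a signed request yourself. Slack signs every request with HMAC-SHA256 over v0:<timestamp>:<body> using the signing secret, so the sketch below (assuming the app is running locally on port 8080 with the same SIGNING_SECRET as in the local-run sketch above) should get the challenge value back:

# Simulate Slack's url_verification handshake against the locally running app
BODY='{"type":"url_verification","challenge":"test-challenge"}'
TS=$(date +%s)
SIG="v0=$(printf 'v0:%s:%s' "$TS" "$BODY" | openssl dgst -sha256 -hmac "$SIGNING_SECRET" | awk '{print $NF}')"
curl -s -X POST http://localhost:8080/slack/events \
  -H "Content-Type: application/json" \
  -H "X-Slack-Request-Timestamp: $TS" \
  -H "X-Slack-Signature: $SIG" \
  -d "$BODY"
# Expected response: {"challenge":"test-challenge"}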

3. Deploy the app to GCP Cloud Run

Cloud Run is a fully managed compute platform provided by Google Cloud. It allows developers to deploy and run containerized applications in a serverless environment.

If you don’t have a GCP account, please sign up and create a GCP project first. You also need the gcloud command to deploy the app. Please follow the instructions for installing the gcloud CLI.

We set local environment variables to specify the GCP project, region, and service account name:

PROJECT=xxxx
REGION=asia-northeast1
SA_NAME=slack-gpt
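
If you’re working in a fresh project, the required Google Cloud APIs may need to be enabled once. A sketch, assuming the services used in this post (Cloud Run, Cloud Build, Artifact Registry for the built image, and Secret Manager):

gcloud services enable \
run.googleapis.com \
cloudbuild.googleapis.com \
artifactregistry.googleapis.com \
secretmanager.googleapis.com \
--project $PROJECT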

First, we create a service account with the following command:

gcloud iam service-accounts create $SA_NAME --project $PROJECT

Next, we store all the required credentials in Secret Manager and grant the necessary permissions to the service account created above.

# slack bot token
gcloud secrets create slack-bot-token --replication-policy automatic --project $PROJECT
echo -n "xoxb-xxx" | gcloud secrets versions add slack-bot-token --data-file=- --project $PROJECT
gcloud secrets add-iam-policy-binding slack-bot-token \
--member="serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor" --project ${PROJECT}

# slack signing secret
gcloud secrets create slack-signing-secret --replication-policy automatic --project $PROJECT
echo -n "xxx" | gcloud secrets versions add slack-signing-secret --data-file=- --project $PROJECT
gcloud secrets add-iam-policy-binding slack-signing-secret \
--member="serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor" --project ${PROJECT}

# openai organization
gcloud secrets create openai-organization --replication-policy automatic --project $PROJECT
echo -n "xxx" | gcloud secrets versions add openai-organization --data-file=- --project $PROJECT
gcloud secrets add-iam-policy-binding openai-organization \
--member="serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor" --project ${PROJECT}

# openai api key
gcloud secrets create openai-api-key --replication-policy automatic --project $PROJECT
echo -n "xxx" | gcloud secrets versions add openai-api-key --data-file=- --project $PROJECT
gcloud secrets add-iam-policy-binding openai-api-key \
--member="serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor" --project ${PROJECT}

You can get the Slack bot token (xoxb-...) from OAuth & Permissions after installing the app to your workspace.

You can get the signing secret from App Credentials in Basic Information.
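
To double-check that a secret was stored as expected, you can read its latest version back, e.g. for the bot token (a sketch):

gcloud secrets versions access latest --secret slack-bot-token --project $PROJECT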

Now we’re ready to build and deploy the app to Cloud Run with the following command:

gcloud run deploy slack-gpt \
--source . \
--platform managed \
--region $REGION \
--allow-unauthenticated \
--service-account ${SA_NAME}@${PROJECT}.iam.gserviceaccount.com \
--set-secrets=SLACK_BOT_TOKEN=slack-bot-token:latest \
--set-secrets=SIGNING_SECRET=slack-signing-secret:latest \
--set-secrets=OPENAI_ORGANIZATION=openai-organization:latest \
--set-secrets=OPENAI_API_KEY=openai-api-key:latest \
--project ${PROJECT}

Behind the scenes, this triggers a Docker image build with Cloud Build and deploys a Cloud Run service with the built image.
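
If the deployment fails during the build step, you can list recent Cloud Build builds and open the log of the failed one (a sketch; if your build runs regionally you may also need to pass --region):

# List recent builds, then inspect one (replace <BUILD_ID> with an ID from the list)
gcloud builds list --project $PROJECT --limit 5
gcloud builds log <BUILD_ID> --project $PROJECT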

Once the deployment finishes, we can get the service endpoint with the following command:

URL=$(gcloud run services describe slack-gpt --project $PROJECT --region ${REGION} --format json | jq -r .status.url)
echo $URL
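
As a quick reachability check, you can POST to the endpoint directly. The request isn’t signed by Slack, so the signature verifier should reject it with a 403 and the body Invalid request, which at least confirms the service is up and the handler is wired correctly (a sketch):

curl -i -X POST ${URL}/slack/events -H "Content-Type: application/json" -d '{}'
# Expected: HTTP 403 with body "Invalid request"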

4. Configure Slack event subscriptions

Let’s go back to the Slack app configuration page and fill in the Request URL with the one obtained above.

Note that our application’s endpoint is /slack/events, so we need to set ${URL}/slack/events as the Request URL.

Once you enter the URL, Slack verifies it automatically.

5. Everything’s ready!

Everything is ready! Now it’s time to mention the app on Slack!

Have a happy GPT life on Slack!


Masato Naka

An SRE engineer, mainly working on Kubernetes. CKA (Feb 2021). His interests include cloud-native application development and machine learning.