Building My TwitchGenAI Bot
April 19th, 2024 was a company holiday. Instead of taking it easy, I decided to embark on a Twitch streaming adventure: building a Minimum Viable Product (MVP) Twitch bot powered by an LLM. The result? My Twitch GenAI Bot, born in a single stream using Golang, Langchaingo, TwitchIRC, and Llamacpp. Let’s dive into the journey of how this bot came to life.
Connecting to Twitch Chat IRC
The initial hurdle was establishing a reliable connection between my application and Twitch chat. Twitch chat uses the IRC protocol for all of its messaging. I used a small Go library I had worked with before to handle both the authentication to Twitch and the messages themselves. I always struggle when setting up OAuth, but I managed to connect. Once connected, I needed a way to manage incoming messages. Because the library handles messages in its own goroutine, I chose a crude but workable approach: a Go channel serving as the interface between the Twitch bot and the GenAI model.
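The handoff described above can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not the bot's actual code: the `chatMessage` type and `enqueue` helper are hypothetical names, and in the real bot `enqueue` would be called from the IRC library's message callback rather than directly from `main`.

```go
package main

import "fmt"

// chatMessage mirrors the fields we care about from an incoming
// Twitch IRC message (the field names here are illustrative).
type chatMessage struct {
	User string
	Text string
}

// enqueue hands a message from the IRC library's goroutine to the
// model-facing consumer. The non-blocking send means a slow model
// can never stall the IRC read loop; excess messages are dropped.
func enqueue(inbox chan<- chatMessage, m chatMessage) {
	select {
	case inbox <- m:
	default:
		// Buffer full: drop the message rather than block.
	}
}

func main() {
	// Buffered channel decouples the IRC goroutine from inference.
	inbox := make(chan chatMessage, 64)

	// In the real bot this call happens inside the Twitch client's
	// message callback; here we simulate one incoming message.
	enqueue(inbox, chatMessage{User: "viewer1", Text: "!ask what is Go?"})
	close(inbox)

	// Consumer side: drain the channel and hand each message off
	// to the GenAI model (here we just print it).
	for m := range inbox {
		fmt.Printf("%s asked: %s\n", m.User, m.Text)
	}
}
```

The buffered, non-blocking send is the crude part I mentioned: under heavy chat load messages are silently dropped, which is an acceptable trade-off for an MVP.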
Incorporating the LLM Model with Llamacpp
With the Twitch connection set up, I needed to spin up the model and a server to manage access and inference on it. Per the suggestion of Chris Brousseau, I acquired the Mistral 7B LLM model from Hugging Face and deployed it as a server using Llamacpp. The `serve` mode of Llamacpp follows the OpenAI API specification, letting us leverage a standard, well-supported protocol for bot interactions…