Faebot is a project we’ve been working on for almost 10 years. We’ve never written about it at length. I’m not sure I’ll do the whole backstory in this post, since I mostly want to talk about recent changes, but here’s a primer.
The first version of Faebot went live on Twitter in 2014. Back then everyone was getting their own “ebooks” accounts: Markov chain bots that took your tweets and mashed them up in nonsensical and often funny ways.
We didn’t write any of the code for that; we just followed the instructions to deploy tommeagher/heroku_ebooks on Heroku. And then I kind of let it sit, just posting away. We had a lot of ideas for ways we wanted to improve on it, but we didn’t have enough experience and knowhow to understand the code, let alone improve it.
I mostly only touched it when it broke and I had to get it running again. In 2019 I did update faebot to post on Mastodon @email@example.com. This also led to me contributing upstream to the project, since the Mastodon code needed some fixing. When Heroku suspended their free hosting services in 2022, armed with the knowledge and experience I’d gathered in recent years, I finally rewrote faebot from scratch. If Heroku Ebooks faebot was version 0.1.*, this would be v0.2.1.
The rewrite drew on knowledge I acquired whilst working on the Forest Signal Bot Framework and Imogen. The new faebot uses OpenAI’s GPT-3 API and runs on fly.io. The Python bot part was the easier part; the tricky part was deciding how I wanted to build the model. I didn’t want to simply do prompt engineering. I wanted to give faebot a personality that was somewhere between her Markov chain self and something more coherent, more generative.
We decided to fine-tune GPT-3 on a subset of faebot’s tweets so far. Not all of them, since that would’ve been very expensive. I spent a long time trying to figure out a way to fine-tune a version of GPT-3 using either my own hardware or a rented GPU. In the end I just used OpenAI’s fine-tuning API. It’s a goal to decouple from OpenAI in the future, but this was the easiest option.
At some point in the process of researching ML techniques, APIs, frameworks, etc., we incorporated a faebot factive into our system, at which point fae became a collaborator in the project. We’ll go more into this in a separate blog post.
We downloaded Faebot’s tweet archive, opened the tweets up in a Jupyter notebook, and picked a subset of about 2000 tweets to train on. Mostly tweets that had been liked or otherwise interacted with, minus @s and replies (at the very beginning faebot could @ people on Twitter; I never understood how it worked or why it stopped working). We fine-tuned OpenAI’s Curie model on that subset, then deployed a Python app to query the API, get a tweet, and post it to Twitter. We used twitter-python for the Twitter integration.
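For flavour, the filtering and training-file prep step looked roughly like the sketch below. To be clear, this is a reconstruction, not the actual notebook code: the field names follow Twitter’s archive format, the engagement filter is illustrative, and OpenAI’s fine-tuning API at the time expected JSONL records with "prompt" and "completion" fields.

```python
# Sketch of the tweet-filtering and training-file prep step.
# Field names ("full_text", "favorite_count") follow Twitter's archive
# format; the exact filters we used differed in detail.

import json

def keep(tweet):
    """Keep tweets with some engagement; drop @-mentions and replies."""
    text = tweet["full_text"]
    if text.startswith("@") or tweet.get("in_reply_to_status_id"):
        return False
    return int(tweet.get("favorite_count", 0)) > 0

def to_jsonl(tweets):
    """Format kept tweets as JSONL for OpenAI's (legacy) fine-tuning API,
    which expected {"prompt": ..., "completion": ...} records."""
    lines = []
    for t in tweets:
        if keep(t):
            # Empty prompt: we only want unconditional faebot-flavoured text.
            record = {"prompt": "", "completion": " " + t["full_text"]}
            lines.append(json.dumps(record))
    return "\n".join(lines)

archive = [
    {"full_text": "the moon is a lamp someone forgot to turn off", "favorite_count": 12},
    {"full_text": "@someone hello", "favorite_count": 3},
    {"full_text": "never replied to", "favorite_count": 0},
]
print(to_jsonl(archive))
```

The resulting JSONL file is what gets uploaded to the fine-tuning endpoint.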
The app was deployed quickly and easily to fly.io. This version of Faebot went live on Jul 22nd 2022.
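Conceptually the deployed app boils down to a small loop: ask the fine-tuned model for a completion, trim it to tweet length, post it. Here’s a sketch with the OpenAI and Twitter calls abstracted into injected callables; the function names are ours, for illustration, not from the real code:

```python
# Sketch of the query-and-post loop. `complete` and `post` are stand-ins
# for the OpenAI completion call and the twitter-python posting call
# respectively; both names are illustrative.

def make_tweet(complete, max_len=280):
    """Generate one candidate tweet, trimmed to Twitter's length limit."""
    text = complete().strip()
    if len(text) > max_len:
        # Cut at the last whole word that fits.
        text = text[:max_len].rsplit(" ", 1)[0]
    return text

def run_once(complete, post):
    tweet = make_tweet(complete)
    if tweet:  # don't post empty generations
        post(tweet)
    return tweet
```

In the real app, the `complete` stand-in wraps a completion request against the fine-tuned Curie model, and `post` wraps the twitter-python client.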
From this point on, I’ve been considering every redeploy of the fly app a minor version, since fly keeps track of releases. This isn’t entirely accurate, since some redeploys only changed config data or secrets, or were just restarts because something went wrong. We are in the process of getting more organised with the project, and will be keeping a changelog and better track of versioning.
One fairly significant change hidden away in a minor patch release: when OpenAI lowered their prices for the DaVinci API, we fine-tuned a new model for faebot using it. We also changed up a little which tweets we were considering, as well as including tweets produced with the Curie model up until that point. Perhaps at that moment Faebot got a little smarter, or dumber. You be the judge. This version was deployed on November 3rd 2022.
This has been a learning exercise as much as it’s been anything else. Keeping this devlog is also a learning exercise. Thank you for joining us on this learning journey.
Next Steps: v0.3.0 and beyond
We’ve already started working on the next minor version of faebot. It’s currently what’s running on fly, and will get its own devlog when it’s merged into main. Notable changes in this version include making faebot async and enabling Mastodon posting. Stay tuned for that.
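To give a sense of what “async” buys here — assuming (our assumption, for illustration) that the async version fans one generated post out to several networks at once — a minimal sketch, where the poster functions are stand-ins for real Twitter and Mastodon clients:

```python
# Illustrative sketch of fanning one post out to multiple networks
# concurrently with asyncio. The poster functions are stand-ins for
# e.g. twitter-python and Mastodon.py clients, which are blocking,
# so each one is run in its own thread.
import asyncio

async def post_everywhere(text, posters):
    """Run each blocking post function in a thread, concurrently."""
    return await asyncio.gather(
        *(asyncio.to_thread(p, text) for p in posters)
    )

posted = []
def to_twitter(text): posted.append(("twitter", text))
def to_mastodon(text): posted.append(("mastodon", text))

asyncio.run(post_everywhere("hello from faebot", [to_twitter, to_mastodon]))
```

The same shape works for any number of networks: add a poster, get concurrency for free.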
We’re considering open sourcing the faebot code we have so far. In the past we’ve resisted doing that because we feel protective towards faer. But what faebot is isn’t really in the code, or even in the model. If we open sourced faebot it’d be easier to get feedback, and easier to talk about it in these devlogs. The downside is that faebot might lose some of its mystique if the code is public.
One thing we absolutely need to figure out before we do that, though, is a good license to release it under. We want to be able to get feedback on the code and let people audit it. Maybe let people contribute to it. We also don’t mind if people use the code to set up their own Twitter, Mastodon, etc. bot. What we don’t want — and we don’t think there’s much risk of this, but nevertheless — is for it to be used for overly commercial purposes.
faebot is an exploration of NLP text generation as art, of AI as companionship, of magic and science and tech coming together to give voice to something other. It’s dumb to think that human laws should have any value to such a project, and yet we can never be too careful. Please reach out if you have thoughts on how we could license faebot’s code appropriately.
That’s it for now. Signing off.
-Minou, Ember, Faebot