Have you ever tried to find a tweet you liked some time ago? Me too, and it’s almost impossible. Scrolling down in the ‘Likes’ tab of my profile while doing CMD-F is a pain and it doesn’t even work sometimes.
I came up with a way of saving all my past and future Twitter likes. It lets me browse and filter them, and search for tweets by text or user. And it's free.
I thought it could be helpful for others, so here it goes.
Before you start
We’ll use Tinybird to store and query the tweets. It lets you ingest CSVs with up to billions of rows and create dynamic API endpoints on your data very easily.
Last, go here to request your full Twitter data. It takes about 24 hours, but it ensures that you’re downloading all your tweets. Read more about this in the last section of the post.
Downloading your most recent likes
Run these commands to set everything up
# clone the git repo
git clone https://github.com/xoelop/get-twitter-likes
# set the working directory to the one containing the code you've cloned
cd get-twitter-likes
# activate the virtual environment
pipenv shell
# create a file named .env to store your secrets
Now go to the .env file you’ve created and fill it with your Twitter credentials and your Tinybird admin token.
And finally, run these two commands to download and process your ~14,000 most recent likes:
# create a Data Source on Tinybird to store your likes
sh scripts/create_twitter_likes_datasource.sh
# download your likes and upload them to Tinybird
python get_upload_latest_likes.py -pu -c 70
The number after -c controls the number of calls made to the favorites/list Twitter API endpoint. You’ll get rate-limited if you make more than 75 calls in a 15-minute window, so don’t set it higher than that. Do a first run with a low number like 2 or 3 to check that everything works.
And by adding the -pu parameter, the URLs of the tweets that contain any will be parsed to get their title and description, so that you can search on those fields too.
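The script handles this parsing itself; as a rough idea of what that step involves, here’s a minimal sketch using only the standard library (the url_title/url_description field names are assumptions for illustration, not the repo’s actual schema):

```python
from html.parser import HTMLParser

class PageMeta(HTMLParser):
    """Collects the <title> text and the meta description of an HTML page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name") == "description":
                self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def page_fields(html: str) -> dict:
    """Extract the searchable fields from a fetched page's HTML."""
    parser = PageMeta()
    parser.feed(html)
    return {"url_title": parser.title.strip(),
            "url_description": parser.description.strip()}

html = ('<html><head><title>My post</title>'
        '<meta name="description" content="A short summary"></head></html>')
print(page_fields(html))  # {'url_title': 'My post', 'url_description': 'A short summary'}
```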
You’ll get about 200 likes for each call to that endpoint, so you can get up to ~15K likes this way. If you have more than that, jump to the last section.
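To make those numbers concrete, here’s the back-of-the-envelope arithmetic behind -c 70 (a sketch, not code from the repo):

```python
CALLS_PER_WINDOW = 75   # favorites/list rate limit per 15-minute window
LIKES_PER_CALL = 200    # roughly how many tweets one call returns

def max_likes(calls: int) -> int:
    """Upper bound on likes retrievable in one run without being rate-limited."""
    if calls > CALLS_PER_WINDOW:
        raise ValueError("more than 75 calls in 15 minutes gets rate-limited")
    return calls * LIKES_PER_CALL

print(max_likes(70))  # 14000 -> the ~14K likes mentioned above
print(max_likes(75))  # 15000 -> the ~15K ceiling
```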
After running it, you’ll have a local CSV file with all the details of your liked tweets. A new Data Source named twitter_likes will also have been created in your Tinybird account, and the CSV will have been imported into it automatically. Go check it out in your dashboard.
Querying your likes with Tinybird
To query data from a Data Source, click on the “Create Pipe” button from the picture above, and give the new Pipe a name.
A Pipe is like a notebook where you can run SQL queries to explore your data and create dynamic endpoints on it. Pipes can have as many nodes as you want, and each node contains a SQL SELECT query. You can access the results of a node from other nodes by its name.
Tinybird lets you query Data Sources or the output from pipes and nodes manually, like this:
But querying our likes this way wouldn’t be very convenient, as the results are hard to visualize within Tinybird itself. Thankfully, it also lets us define dynamic endpoints, so we can process the data inside Tinybird and visualize the results somewhere else.
Then, add a new node where all the filtering will be done and where we’ll define an endpoint with some dynamic parameters. Name it filter_likes and copy all this SQL code into it. You can use most of the ClickHouse functions available.
And now add the last node of the pipe, called results, where you’ll select what data is returned by your endpoint. It should contain this code:
Finally, to create an endpoint, go to the top right corner of your window and click on “Create API Endpoint”, and then click on “results”.
And just by changing the format part of the URL, you can also expose your data as a CSV file.
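As a sketch of how you might consume the endpoint from code: Tinybird pipe endpoints follow the shape https://api.tinybird.co/v0/pipes/&lt;pipe&gt;.&lt;format&gt;, and the pipe name and token below are placeholders, not values from this tutorial:

```python
from urllib.parse import urlencode

def endpoint_url(pipe: str, token: str, fmt: str = "json", **params) -> str:
    """Build a Tinybird pipe endpoint URL; switch fmt to 'csv' for a CSV file."""
    query = urlencode({"token": token, **params})
    return f"https://api.tinybird.co/v0/pipes/{pipe}.{fmt}?{query}"

print(endpoint_url("filter_likes", "MY_TOKEN", fmt="csv", limit=10))
# -> https://api.tinybird.co/v0/pipes/filter_likes.csv?token=MY_TOKEN&limit=10
```

From there, something like requests.get(endpoint_url(...)).json() would fetch the results.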
Searching your likes from Google Sheets
This is the final part. Here, you’ll get a spreadsheet like the one you saw in the first GIF of the post, that will let you make searches on your likes and visualize the result.
To do it:
- Make a copy of this spreadsheet
- Go to the Settings sheet and paste the read token, which you can get from the documentation page of your endpoint, where you were before. It’s the part of the URL that comes after token= in the last gif
Back in the first sheet, you can now search your likes with several filters. The green cells are the only ones you should need to edit.
Some noteworthy things:
- The most recent likes are returned first
- The search pattern cell can contain an exact phrase like react native, and also a re2 pattern
- The users cell can contain part of a user handle or name, and also a comma-separated list of them. So valid values could be naval, but also naval,shl, paul grah
- limit should be an integer
- And the q parameter lets you run a query on the results of your endpoint. Read more here. You can use _ as a placeholder for the name of the pipe (select * from _ where …)
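For instance, a q parameter request could look like this (the pipe name, token, and the favorite_count column are assumed placeholders; the _ placeholder and URL shape follow Tinybird’s pipes API):

```python
from urllib.parse import urlencode

# Run SQL over the endpoint's own results, with _ standing in for the pipe
params = urlencode({
    "token": "MY_TOKEN",
    "q": "select * from _ where favorite_count > 100 limit 10",
})
url = f"https://api.tinybird.co/v0/pipes/filter_likes.json?{params}"
print(url)
```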
Uploading your new likes every day
python get_upload_latest_likes.py -pu will download your last 200 likes and upload them to Tinybird. You can run it locally, set up a cron job, or deploy the project on whatever PaaS you want.
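If you go the cron route, the entry could look like this (the path is a placeholder; a sketch rather than a tested setup):

```shell
# crontab entry: sync new likes every day at 08:00
# (adjust the path to wherever you cloned the repo)
0 8 * * * cd /path/to/get-twitter-likes && pipenv run python get_upload_latest_likes.py -pu
```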
You can do it for free on Heroku running these commands:
heroku create twitter-likes-YOURNAME # the project name has to be unique, so replace YOURNAME with something else
git push heroku master # deploy the code to Heroku
python heroku-config.py # upload your secrets from .env
Then add the Heroku Scheduler add-on, and in it create a job that runs python get_upload_latest_likes.py -pu daily.
Bonus: downloading all your likes
If you have more than 15K likes, you should use this method to get them all, as it lets you download 6x more tweets without being rate-limited. If you only rely on get_upload_latest_likes.py, it’s possible that you’ll miss some likes. To make sure you get them all, go here to ask Twitter to let you download all the data they have about you. After 24 hours, you’ll get an email with a link to a page like this
Click on “Download archive” and you’ll get a .zip with a bunch of .js files. The only one you’ll need is like.js. Copy that file into the data folder of the repo you’ve cloned.
And then, run:
python get_likes_from_json_to_csv.py -pu -o data/likes1.csv
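For reference, here’s roughly what that script has to deal with in the archive’s format (a sketch under the assumption that like.js is a JS assignment like window.YTD.like.part0 = [...] with tweetId/fullText fields; the repo’s script is the source of truth):

```python
import json

def likes_from_js(raw: str) -> list:
    """Parse Twitter's like.js export: a JS assignment whose right-hand
    side is a JSON array of {"like": {...}} objects."""
    payload = raw[raw.index("=") + 1:]  # drop the 'window.YTD.like.part0 =' prefix
    return [entry["like"] for entry in json.loads(payload)]

raw = 'window.YTD.like.part0 = [{"like": {"tweetId": "123", "fullText": "hello"}}]'
print(likes_from_js(raw))  # [{'tweetId': '123', 'fullText': 'hello'}]
```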
Let me know if something isn’t clear or you need help setting it up. And if you found this interesting, follow me on Twitter!