Scweet on Apify is a scalable tweet-scraping Actor built on the open-source Scweet library. Just specify dates, keywords, hashtags, and a tweet count, and the Actor automatically scales to fetch data at up to 1,000 tweets per minute for only $0.30 per 1,000 tweets. All results are delivered in JSON/CSV format.
Scweet on Apify builds upon the original Scweet library to enable large-scale tweet scraping from X (formerly Twitter) in a cloud environment. With minimal setup and flexible configuration, you can easily collect vast amounts of tweet data for research, analytics, journalism, and more.
This Actor is intended for lawful and ethical use only. Please ensure you comply with X's terms of service when using this tool.
Customize your tweet search using the following parameters. All fields are optional, and defaults will apply if omitted.
| Field | Type | Default | Description |
|---|---|---|---|
| `words_and` | list[string] | `[]` (empty) | Terms that must appear in the tweet. |
| `words_or` | list[string] | `[]` (empty) | At least one term must appear in the tweet. |
| `hashtag` | list[string] | `[]` (empty) | One or more hashtags to search for. |
| `from_user` | string | None | Scrape tweets from a specific user. |
| `to_user` | string | None | Scrape tweets replying to a specific user. |
| `min_likes` | string | None | Minimum likes required for a tweet. |
| `min_replies` | string | None | Minimum replies required for a tweet. |
| `min_retweets` | string | None | Minimum retweets required for a tweet. |
| `lang` | string | None | Restrict tweets to a specific language (e.g., "en"). |
| `since` | string (YYYY-MM-DD) | 2 years ago | Start date of the search window. |
| `until` | string (YYYY-MM-DD) | Today's date | End date of the search window. |
| `type` | string | "Top" | Choose "Top" (popular tweets) or "Latest" (real-time tweets). |
| `maxItems` | number | 1000 | Maximum number of tweets to scrape. |
| `geocode` | string | None | Geolocation search (e.g., "39.8283,-98.5795,2500km"). |
| `place` | string | None | Twitter Place ID for more precise location-based search. |
| `near` | string | None | Name of a city or location to narrow the search. Use with `within` for accuracy. |
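As a concrete starting point, here is a minimal sketch of a run started with the Apify API client for Python (`apify-client`). The API token and Actor ID are placeholders; the input fields map directly onto the table above.

```python
from apify_client import ApifyClient

# Placeholders: substitute your own Apify API token and this Actor's ID or name.
client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "words_and": ["climate"],          # every term must appear
    "words_or": ["policy", "energy"],  # at least one of these must appear
    "lang": "en",
    "since": "2024-01-01",
    "until": "2024-06-30",
    "type": "Latest",
    "maxItems": 500,
}

# Start the Actor and wait for the run to finish.
run = client.actor("<ACTOR_ID>").call(run_input=run_input)
print("Results stored in dataset:", run["defaultDatasetId"])
```

The same field structure can also be pasted into the Actor's JSON input editor in the Apify console if you prefer not to use the API client.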
Location Data Limitations: Only about 1–2% of all X tweets include geolocation data. Many users also provide fictional or playful locations (e.g., "Laugh Tale"). Therefore, location-based searches might yield incomplete results.
Improving Accuracy: If you need better location accuracy, use the `place` parameter (Twitter Place ID). This will yield far more precise results than `geocode`.
Using the `near` Parameter: If you use the `near` field, we recommend adding a `within` radius (e.g., "within:10km") to increase search accuracy.
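A short sketch of the three location options as alternative run inputs. Treating `within` as its own input field (rather than part of the `near` value) is an assumption based on the note above, and the Place ID is a placeholder, not a real ID.

```python
# Broad radius around a point: "lat,lon,radius".
geocode_input = {
    "words_or": ["wildfire"],
    "geocode": "39.8283,-98.5795,2500km",
    "maxItems": 200,
}

# Most precise option: a Twitter Place ID (placeholder value shown).
place_input = {
    "words_or": ["wildfire"],
    "place": "<TWITTER_PLACE_ID>",
    "maxItems": 200,
}

# City name, narrowed with a radius as recommended above.
near_input = {
    "words_or": ["wildfire"],
    "near": "Los Angeles",
    "within": "within:10km",
    "maxItems": 200,
}
```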
Scraping for Specific Users: If you want to scrape tweets from a specific user or tweets replying to a particular user, use the `from_user` and `to_user` parameters. `from_user: "exampleuser"` will filter tweets sent by this user, while `to_user: "exampleuser"` will capture tweets replying to this user. Note: Scraping a specific profile (e.g., https://x.com/handle) is equivalent to using the `from_user` parameter with the profile's handle.
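To make the two filters concrete, here is a small sketch using the same run-input format as the earlier example; the handle, date, and item counts are illustrative only.

```python
# Tweets posted by a specific account
# (equivalent to scraping the profile https://x.com/exampleuser).
from_user_input = {
    "from_user": "exampleuser",
    "since": "2024-06-01",
    "maxItems": 300,
}

# Tweets replying to that account.
to_user_input = {
    "to_user": "exampleuser",
    "type": "Latest",
    "maxItems": 300,
}
```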
To protect internal resources from abuse and ensure fair usage, the Actor implements rate limiting:
Free Plan: Users are limited to initiating a new run only every few seconds. Each account session has a daily request cap (typically 30 requests).
Run Data: The Actor saves minimal user-run data (such as timestamps for rate limiting) to enforce usage limits. This data is stored internally and is not shared with third parties.
Standard Speed: Under typical conditions, Scweet on Apify can scrape over 1,000 tweets per minute.
Enhanced Performance: Paying users benefit from higher resource allocation, allowing for faster scraping and larger tweet volumes. The performance boost depends on the `maxItems` setting and date range.
The Actor stores the results in Apify’s dataset. You can download your results in JSON, CSV, or XLSX format.
```json
[
  {
    "id": "tweet-1877796743036743891",
    "user_is_blue_verified": true,
    "user_created_at": "Tue Jun 02 20:12:29 +0000 2009",
    "user_description": "",
    "user_urls": [],
    "user_favourites_count": 113767,
    "user_followers_count": 212302178,
    "user_friends_count": 931,
    "user_location": "",
    "user_media_count": 3086,
    "user_handle": "elonmusk",
    "user_profile_image_url_https": "...",
    "tweet_source": "<a href=\"http://twitter.com/download/iphone\" ...>",
    "tweet_created_at": "Fri Jan 10 19:16:45 +0000 2025",
    "tweet_mentions": [],
    "tweet_url": "https://x.com/elonmusk/status/1877796743036743891",
    "tweet_view_count": "28738465",
    "tweet_text": "Tyrannical behavior",
    "tweet_hashtags": [],
    "tweet_favorite_count": 218062,
    "tweet_quote_count": 1518,
    "tweet_reply_count": 10558,
    "tweet_retweet_count": 51030,
    "tweet_lang": "en",
    "tweet_media_urls": [],
    "tweet_media_expanded_urls": []
  }
]
```
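If you prefer to pull the results programmatically rather than downloading them from the console, a sketch with the Python Apify client looks like this; the token and Actor ID are placeholders, and the printed field names follow the sample item above.

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Start a small run, as in the earlier sketch.
run = client.actor("<ACTOR_ID>").call(run_input={"words_and": ["apify"], "maxItems": 50})

# Stream the dataset items and pick out a few fields from each tweet.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["tweet_created_at"], item["user_handle"], item["tweet_text"])
```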
Scweet on Apify is constantly evolving. We welcome feedback from researchers, data scientists, journalists, and casual users. Let us know how you use this tool and any improvements you'd like to see.
Scweet on Apify only stores minimal run-related user data for the sole purpose of rate limiting and preventing abuse. This data is used internally and is not shared with third parties.
Yes, if you're scraping publicly available data for personal or internal use. Always review X's Terms of Service before large-scale use or redistribution.
No. This is a no-code tool: just enter your keywords, hashtags, dates, and other filters, then run the Actor directly from your dashboard or the Apify Actor page.
It extracts tweet text, URLs, timestamps, language, hashtags, engagement counts (likes, replies, retweets, quotes, views), and author details such as handle and follower counts. You can export all of it to JSON, CSV, or XLSX.
Yes, you can scrape large volumes of tweets and refine by keyword, hashtag, user, date range, language, or location, depending on the input settings you use.
You can use the Try Now button on this page to go to the scraper. You’ll be guided to input a search term and get structured results. No setup needed!