Scweet

Scweet is a scalable tweet-scraping tool built on the open-source Scweet library. Just specify dates, keywords, hashtags, and tweet count, and the Actor automatically scales to fetch data at up to 1,000 tweets per minute for only $0.30 per 1,000 tweets. All results are delivered in JSON or CSV format.


Scweet on Apify 🌐📊

Scweet on Apify builds upon the original Scweet library to enable large-scale tweet scraping from X (formerly Twitter) in a cloud environment. With minimal setup and flexible configuration, you can easily collect vast amounts of tweet data for research, analytics, journalism, and more.

🚨 Responsible Usage

This Actor is intended for lawful and ethical use only. Please ensure you comply with X's terms of service when using this tool.


🛠️ Quick Guide

  1. Open the Actor on Apify – Start by opening the Actor on your Apify console.
  2. Set Input Parameters – Define your parameters, such as keywords, hashtags, date range, and optionally location or user filters.
  3. Run the Actor – Initiate the scraping process.
  4. Monitor Progress – Keep an eye on high-level messages during the run.
  5. Retrieve Data – Once the run completes, access the tweet data from the Apify dataset.
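The steps above can also be scripted end-to-end with the Apify API client. A minimal sketch in Python, assuming a placeholder Actor ID (`"username/scweet-actor"` is illustrative; replace it with the actual ID shown in your Apify console) and an API token:

```python
def build_run_input(keywords, since, until, max_items=1000):
    """Assemble the Actor input; field names follow the parameter table below."""
    return {
        "words_and": keywords,
        "since": since,
        "until": until,
        "maxItems": max_items,
        "type": "Latest",
    }

def run_scrape(token, actor_id, run_input):
    """Start a run, wait for it to finish, and fetch the dataset items."""
    from apify_client import ApifyClient  # pip install apify-client
    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=run_input)
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())

# Usage (requires a valid Apify API token):
# items = run_scrape("MY_APIFY_TOKEN", "username/scweet-actor",
#                    build_run_input(["climate"], "2024-01-01", "2024-06-30"))
```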

⚙️ Detailed Usage

3.1 Configuration & Input Parameters

Customize your tweet search using the following parameters. All fields are optional, and defaults will apply if omitted.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `words_and` | list[string] | `[]` (empty) | Terms that must all appear in the tweet. |
| `words_or` | list[string] | `[]` (empty) | At least one term must appear in the tweet. |
| `hashtag` | list[string] | `[]` (empty) | One or more hashtags to search for. |
| `from_user` | string | None | Scrape tweets from a specific user. |
| `to_user` | string | None | Scrape tweets replying to a specific user. |
| `min_likes` | string | None | Minimum likes required for a tweet. |
| `min_replies` | string | None | Minimum replies required for a tweet. |
| `min_retweets` | string | None | Minimum retweets required for a tweet. |
| `lang` | string | None | Restrict tweets to a specific language (e.g., `"en"`). |
| `since` | string (YYYY-MM-DD) | 2 years ago | Start date of the search window. |
| `until` | string (YYYY-MM-DD) | Today's date | End date of the search window. |
| `type` | string | `"Top"` | Choose `"Top"` (popular tweets) or `"Latest"` (real-time tweets). |
| `maxItems` | number | 1000 | Maximum number of tweets to scrape. |
| `geocode` | string | None | Geolocation search (e.g., `"39.8283,-98.5795,2500km"`). |
| `place` | string | None | Twitter Place ID for more precise location-based search. |
| `near` | string | None | Name of a city or location to narrow the search. Use with `within` for accuracy. |
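For example, a run input combining several of these fields might look like this (all values are illustrative):

```json
{
  "words_and": ["climate"],
  "hashtag": ["energy"],
  "lang": "en",
  "since": "2024-01-01",
  "until": "2024-06-30",
  "type": "Latest",
  "maxItems": 500
}
```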

3.2 Location Considerations 🌍

  • Location Data Limitations: Only about 1–2% of all X tweets include geolocation data. Many users also provide fictional or playful locations (e.g., "Laugh Tale"). Therefore, location-based searches might yield incomplete results.

  • Improving Accuracy: If you need better location accuracy, use the place parameter (Twitter Place ID). This will yield far more precise results than geocode.

  • Using the near Parameter: If you use the near field, we recommend adding a within radius (e.g., "within:10km") to increase search accuracy.
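The precedence suggested above (prefer `place`, then `geocode`, then `near` with a radius) can be captured in a small helper. This is a sketch only; treating `within` as a separate input field alongside `near` is an assumption based on the guidance above:

```python
def location_input(geocode=None, place=None, near=None, within=None):
    """Build the location-related part of the Actor input.

    Preference order follows the guidance above: place is most precise,
    then geocode, then near (ideally combined with a within radius).
    """
    loc = {}
    if place:
        loc["place"] = place  # Twitter Place ID, most precise option
    elif geocode:
        loc["geocode"] = geocode  # e.g. "39.8283,-98.5795,2500km"
    elif near:
        loc["near"] = near
        if within:
            loc["within"] = within  # e.g. "10km"
    return loc
```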

3.3 User Filters 🧑‍💻

  • Scraping for Specific Users: If you want to scrape tweets from a specific user or tweets replying to a particular user, use the from_user and to_user parameters.

    • Example: from_user: "exampleuser" will filter tweets sent by this user.
    • Similarly, to_user: "exampleuser" will capture tweets replying to this user.

    Note: Scraping a specific profile (e.g., https://x.com/handle) is equivalent to using the from_user parameter with the profile’s handle.

3.4 Usage Limits & Rate Limiting ⏱️

To protect internal resources from abuse and ensure fair usage, the Actor implements rate limiting:

  • Free Plan: Users are limited to initiating a new run only every few seconds. Each account session has a daily request cap (typically 30 requests).

  • Run Data: The Actor saves minimal user-run data (such as timestamps for rate limiting) to enforce usage limits. This data is stored internally and is not shared with third parties.
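If you orchestrate several runs from your own scripts, you can space them out client-side to stay within these limits. A minimal sketch; the 5-second interval is an assumption, so adjust it to your plan's actual limit:

```python
import time

def spaced_runs(run_fn, inputs, min_interval_s=5.0):
    """Call run_fn once per input, leaving at least min_interval_s
    between successive starts to respect the run-spacing limit."""
    results = []
    last_start = None
    for item in inputs:
        if last_start is not None:
            wait = min_interval_s - (time.monotonic() - last_start)
            if wait > 0:
                time.sleep(wait)
        last_start = time.monotonic()
        results.append(run_fn(item))
    return results
```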

3.5 Speed & Performance ⚡

  • Standard Speed: Under typical conditions, Scweet on Apify can scrape over 1,000 tweets per minute.

  • Enhanced Performance: Paying users benefit from higher resource allocation, allowing for faster scraping and larger tweet volumes. The performance boost depends on the maxItems setting and date range.


📥 Output Format

The Actor stores the results in Apify’s dataset. You can download your results in JSON, CSV, or XLSX format.

Example JSON output:

```json
[
  {
    "id": "tweet-1877796743036743891",
    "user_is_blue_verified": true,
    "user_created_at": "Tue Jun 02 20:12:29 +0000 2009",
    "user_description": "",
    "user_urls": [],
    "user_favourites_count": 113767,
    "user_followers_count": 212302178,
    "user_friends_count": 931,
    "user_location": "",
    "user_media_count": 3086,
    "user_handle": "elonmusk",
    "user_profile_image_url_https": "...",
    "tweet_source": "<a href=\"http://twitter.com/download/iphone\" ...>",
    "tweet_created_at": "Fri Jan 10 19:16:45 +0000 2025",
    "tweet_mentions": [],
    "tweet_url": "https://x.com/elonmusk/status/1877796743036743891",
    "tweet_view_count": "28738465",
    "tweet_text": "Tyrannical behavior",
    "tweet_hashtags": [],
    "tweet_favorite_count": 218062,
    "tweet_quote_count": 1518,
    "tweet_reply_count": 10558,
    "tweet_retweet_count": 51030,
    "tweet_lang": "en",
    "tweet_media_urls": [],
    "tweet_media_expanded_urls": []
  }
]
```
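Once downloaded, records in this shape are easy to post-process locally. A small sketch that filters items by engagement and language, using the field names from the example above:

```python
def filter_tweets(items, min_likes=0, lang=None):
    """Keep only dataset items meeting a like threshold and optional language."""
    out = []
    for t in items:
        if t.get("tweet_favorite_count", 0) < min_likes:
            continue
        if lang and t.get("tweet_lang") != lang:
            continue
        out.append(t)
    return out

# Example with a record shaped like the output above:
sample = [{"tweet_favorite_count": 218062, "tweet_lang": "en", "tweet_text": "..."}]
print(len(filter_tweets(sample, min_likes=1000, lang="en")))  # prints 1
```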

🛠️ Support & Future Growth

Scweet on Apify is constantly evolving. We welcome feedback from researchers, data scientists, journalists, and casual users. Let us know how you use this tool and any improvements you'd like to see.


⚠️ Disclaimer

Scweet on Apify only stores minimal run-related user data for the sole purpose of rate limiting and preventing abuse. This data is used internally and is not shared with third parties.

Frequently Asked Questions

Is it legal to scrape public data?

Yes, if you're scraping publicly available data for personal or internal use. Always review X's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just set your keywords, dates, and other input parameters, then run the scraper directly from your Apify console.

What data does it extract?

It extracts full tweet records: text, URL, creation date, language, hashtags, mentions, media URLs, and engagement counts (likes, replies, retweets, quotes, views), plus public profile details of the author. You can export all of it to JSON, CSV, or XLSX.

Can I filter results by user or location?

Yes. You can refine results by keyword, hashtag, date range, language, user (from_user, to_user), or location (geocode, place, near), depending on the input settings you use.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You’ll be guided to input a search term and get structured results. No setup needed!