Reddit Trends Scraper

Extract trending posts and discussions from Reddit with advanced filtering capabilities. Scrapes post details, engagement metrics, and user information while handling dynamic content loading.

Overview

Reddit Trends Scraper is a powerful tool that helps you extract trending content and discussions from Reddit. It automatically scrolls through Reddit pages and captures comprehensive data about posts, including engagement metrics, author information, and community details.

Features

  • Handles dynamic content loading through infinite scrolling
  • Captures detailed post metrics (upvotes, comments)
  • Extracts user and community information
  • Supports proxy configuration for reliable scraping
  • Customizable maximum items limit
  • Includes post timestamp and engagement data
  • Provides direct links to posts, subreddits, and user profiles

Output Data Structure

The actor extracts the following data for each Reddit post:

  • title - Post title
  • postUrl - Direct link to the post
  • upvotes - Number of upvotes
  • comments - Number of comments
  • subreddit - Community name
  • subredditUrl - Link to the subreddit
  • subredditType - Type of post (link, image, text, etc.)
  • author - Username of the poster
  • authorProfile - Link to author's profile
  • postTime - Timestamp of the post
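
The record shape above can be sketched as a Python TypedDict built from the field names in the list. The `is_complete` helper is illustrative only, not part of the actor:

```python
from typing import TypedDict

class RedditPost(TypedDict):
    """One scraped Reddit post, mirroring the output fields above."""
    title: str
    postUrl: str
    upvotes: int
    comments: int
    subreddit: str
    subredditUrl: str
    subredditType: str
    author: str
    authorProfile: str
    postTime: str  # e.g. "2025-03-05 02:38:36"

# Every key declared on RedditPost.
REQUIRED_FIELDS = set(RedditPost.__annotations__)

def is_complete(record: dict) -> bool:
    """Return True if a scraped record contains every expected field."""
    return REQUIRED_FIELDS <= record.keys()
```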

Use Cases

  • Track trending topics and discussions
  • Monitor specific subreddits for content
  • Analyze engagement patterns
  • Research community interests
  • Social media monitoring
  • Content inspiration and curation

Benefits

  • Automated data collection from Reddit
  • Real-time trend monitoring
  • Comprehensive post analytics
  • Clean, structured data output
  • Reliable performance with proxy support

โš™๏ธ Setup & Usage

  1. Set your desired maximum number of items to scrape
  2. Configure proxy settings (optional)
  3. Run the actor and collect your data
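
The steps above correspond to a minimal input object. A sketch of such an input, using the field names from the Input Parameters section and Apify's standard proxy configuration shape:

```json
{
    "maxItems": 100,
    "proxyConfiguration": {
        "useApifyProxy": true
    }
}
```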

๐Ÿ“ Input Parameters

  • maxItems - Maximum number of posts to scrape (default: 100)
  • proxyConfiguration - Optional proxy settings for reliable scraping
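
The actor can also be run programmatically. Below is a sketch using the Apify Python client (`pip install apify-client`); the actor ID is a placeholder, so substitute the one shown on this actor's page:

```python
import os

def build_run_input(max_items: int = 100, use_proxy: bool = False) -> dict:
    """Assemble the run input described in the Input Parameters section."""
    run_input = {"maxItems": max_items}
    if use_proxy:
        # Standard Apify proxy configuration shape.
        run_input["proxyConfiguration"] = {"useApifyProxy": True}
    return run_input

if __name__ == "__main__" and os.environ.get("APIFY_TOKEN"):
    # Placeholder actor ID -- replace with the ID from this actor's page.
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor("<username>/reddit-trends-scraper").call(
        run_input=build_run_input(max_items=50, use_proxy=True)
    )
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item["title"], item["upvotes"])
```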

Output sample

The results are saved to a dataset, which you can always find in the Storage tab. Below is a sample of the data in JSON; you can choose which format to download it in: JSON, JSONL, Excel spreadsheet, HTML table, CSV, or XML.

```json
[
    {
        "title": "Democratic Rep. Al Green removed after disrupting Trump's speech",
        "postUrl": "https://www.reddit.com/r/politics/comments/1j3sv5s/democratic_rep_al_green_removed_after_disrupting/",
        "upvotes": 20799,
        "comments": 1270,
        "subreddit": "r/politics",
        "subredditUrl": "https://www.reddit.com/r/politics/",
        "subredditType": "link",
        "author": "nbcnews",
        "authorProfile": "https://www.reddit.com/user/nbcnews",
        "postTime": "2025-03-05 02:38:36"
    },
    ...
]
```
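
Once downloaded as JSON, the records are easy to post-process with the standard library alone. A small sketch that ranks posts by comments per thousand upvotes, using a one-record excerpt copied from the sample above:

```python
import json

# One record from the sample dataset above, reproduced verbatim.
sample = """
[
    {
        "title": "Democratic Rep. Al Green removed after disrupting Trump's speech",
        "postUrl": "https://www.reddit.com/r/politics/comments/1j3sv5s/democratic_rep_al_green_removed_after_disrupting/",
        "upvotes": 20799,
        "comments": 1270,
        "subreddit": "r/politics",
        "subredditUrl": "https://www.reddit.com/r/politics/",
        "subredditType": "link",
        "author": "nbcnews",
        "authorProfile": "https://www.reddit.com/user/nbcnews",
        "postTime": "2025-03-05 02:38:36"
    }
]
"""

posts = json.loads(sample)

# Comments per 1000 upvotes: a rough proxy for how discussion-heavy a post is.
ranked = sorted(
    posts,
    key=lambda p: 1000 * p["comments"] / max(p["upvotes"], 1),
    reverse=True,
)
for post in ranked:
    score = 1000 * post["comments"] / max(post["upvotes"], 1)
    print(f'{post["subreddit"]}: {score:.0f} comments per 1k upvotes')
```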

Frequently Asked Questions

Is it legal to scrape public data from Reddit?

Yes, if you're scraping publicly available data for personal or internal use. Always review Reddit's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just set the maximum number of items, optionally configure a proxy, and run the scraper directly from your dashboard or the Apify actor page.

What data does it extract?

It extracts post titles, URLs, upvote and comment counts, subreddit details, author information, and post timestamps. You can export all of it to Excel or JSON.

Can I control how much data is scraped?

Yes. The scraper handles Reddit's infinite scrolling automatically and stops once it reaches the maxItems limit you set in the input.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You'll be guided through setting the input and will get structured results. No setup needed!