Medium Following Scraper 👥

Extract detailed information about Medium users' following lists. Get comprehensive data including usernames, bios, membership status, and more. Perfect for influencer research, network analysis, and understanding content creator communities. 🔍


🤖 What does Medium Following Scraper do?

This actor allows you to scrape following lists from Medium users. For any given Medium username, it extracts detailed information about the people they follow, including names, bios, profile URLs, and membership status.
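
You can also run the actor programmatically instead of from the Apify Console. Here is a minimal sketch using the Apify Python client; the actor ID and API token are placeholders, so replace them with your own values:

from apify_client import ApifyClient

# Authenticate with your Apify API token (placeholder value)
client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Input mirrors the Input Example further down this page
run_input = {
    "usernames": ["mariaspantidi"],
    "maxItems": 30,
}

# "<ACTOR_ID>" is a placeholder for this actor's ID in the Apify Store
run = client.actor("<ACTOR_ID>").call(run_input=run_input)

# Each dataset item describes one followed account (see the output sample below)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["username"], "-", item.get("membershipTier"))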

✨ Features

  • 🔍 Scrape following lists from any public Medium profile
  • 👤 Get detailed user information for each followed account (see the sketch after this list), including:
    • User ID
    • Name
    • Username
    • Bio
    • Profile URL
    • Profile Image
    • Membership Tier
    • Book Author Status
  • ⚡ High-performance scraping with built-in request management
  • ⚙️ Configurable maximum items limit
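
Each of these fields appears as a key on every record in the output dataset (compare the output sample below). As a reference for post-processing, here is a minimal sketch of that record shape as a Python TypedDict; the type name is an assumption and not part of the actor itself:

from typing import TypedDict

# One followed account, as it appears in the output dataset
class FollowedUser(TypedDict):
    id: str              # User ID
    name: str            # Display name
    username: str        # Medium handle
    bio: str             # Profile bio
    profileUrl: str      # Link to the Medium profile
    imageUrl: str        # Profile image URL
    membershipTier: str  # e.g. "MEMBER" or "FRIEND" in the sample below
    isBookAuthor: bool   # Book author status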

🔧 Input Configuration

The actor accepts the following input parameters:

  • usernames (Array): List of Medium usernames whose following lists should be scraped
  • maxItems (Number): Maximum number of following entries to scrape

📝 Use Cases

  • 🎯 Influencer research and analysis
  • 📊 Content creator network mapping
  • 🔍 Audience analysis
  • 💡 Finding potential collaborators
  • 📈 Community growth analysis

💡 Tips

  • Start with a small maxItems value for testing
  • Use multiple usernames to batch process multiple profiles

Input Example

Here's an example of the actor's input in JSON:

{
    "usernames": [
        "mariaspantidi"
    ],
    "maxItems": 30
}

Output sample

The results will be wrapped into a dataset, which you can always find in the Storage tab. You can choose the format in which to download your data: JSON, JSONL, Excel spreadsheet, HTML table, CSV, or XML. Here's an excerpt, in JSON, of the data you'd get if you apply the input parameters above:

[
    {
        "id": "6356e70393da",
        "name": "CarolF",
        "username": "carol.finch1",
        "bio": "I write diverse stuff in British English. I use the S over the Z and keep the Oxford comma for special occasions. Editor of The Parenting Portal.",
        "profileUrl": "https://medium.com/@carol.finch1",
        "imageUrl": "https://miro.medium.com/v2/resize:fill:64:64/1*Ffq1D1HG8aa3MDQB6JhjnQ.jpeg",
        "membershipTier": "FRIEND",
        "isBookAuthor": false
    },
    {
        "id": "cc2192bf0518",
        "name": "Emily J. Smith",
        "username": "emjsmith",
        "bio": "Writer and tech professional. My debut novel, NOTHING SERIOUS, is out Feb '25 from William Morrow / HarperCollins (more at emjsmith.com).",
        "profileUrl": "https://medium.com/@emjsmith",
        "imageUrl": "https://miro.medium.com/v2/resize:fill:64:64/1*N-9MfC5BB-lPPU197Yye8g.jpeg",
        "membershipTier": "MEMBER",
        "isBookAuthor": false
    },
    ...
]
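
Once you have the JSON export, post-processing it locally is straightforward. A minimal sketch, assuming the dataset has been downloaded to a local file named dataset.json (the filename is an assumption):

import json
from collections import Counter

# "dataset.json" is an assumed local JSON export of the actor's dataset
with open("dataset.json", encoding="utf-8") as f:
    following = json.load(f)

# Break the followed accounts down by membership tier
tier_counts = Counter(user.get("membershipTier") or "NONE" for user in following)
print(tier_counts)

# List the followed accounts flagged as book authors
book_authors = [user["username"] for user in following if user.get("isBookAuthor")]
print(book_authors)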

Frequently Asked Questions

Is it legal to scrape public data?

Yes, if you're scraping publicly available data for personal or internal use. Always review Medium's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just enter one or more Medium usernames and run the scraper directly from your dashboard or the Apify actor page.

What data does it extract?

It extracts user IDs, names, usernames, bios, profile URLs, profile images, membership tiers, and book author status for each followed account. You can export all of it to Excel or JSON.

Can I scrape multiple profiles or limit the number of results?

Yes, you can pass multiple usernames in a single run and cap how many entries are returned with the maxItems setting.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You'll be guided to enter the Medium usernames you want to analyze and will get structured results back. No setup needed!