Medium Posts Search Scraper

A powerful scraper that extracts comprehensive article data from Medium's search results. Get detailed information about articles, authors, and engagement metrics. Perfect for content research, trend analysis, and tracking popular writers and publications. 🔍📊

Medium Posts Search Scraper 🔍

🤖 What does Medium Posts Search Scraper do?

This actor allows you to scrape Medium articles based on search keywords. It extracts comprehensive information about articles, including titles, authors, engagement metrics, and publication details.

✨ Features

  • 🔎 Search Medium articles by keywords
  • 📊 Extract detailed article metadata
  • 👥 Get author information
  • 📈 Collect engagement metrics (claps, responses)
  • 📚 Publication/Collection details
  • ⏱️ Reading time estimation

🗃️ Output Dataset

The actor stores results in a structured dataset that includes:

  • Article details (ID, title, subtitle, URL)
  • Publication dates
  • Engagement metrics (claps, responses)
  • Author information (name, username, bio)
  • Collection/Publication details
  • Preview image URLs
  • Content accessibility status

💡 Use Cases

  • Content research and analysis
  • Writer/Publication tracking
  • Engagement metrics collection
  • Topic trend analysis
  • Content curation
  • SEO research
  • Competitor analysis

🛠️ Input Parameters

  • keywords (Array): List of search keywords to scrape
  • maxItems (Number): Maximum number of articles to scrape

📝 Tips for Optimal Results

  • Use specific keywords for more targeted results
  • Adjust maxItems based on your needs

Input Example

Here is an example of the actor input in JSON:

{
    "keywords": [
        "write"
    ],
    "maxItems": 30
}
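
You can also run the actor programmatically. Below is a minimal sketch using the Apify Python client; the actor ID shown is a placeholder, so substitute the real ID from the actor's Store page along with your own API token.

from apify_client import ApifyClient

# Authenticate with your Apify API token (placeholder value).
client = ApifyClient("MY_APIFY_TOKEN")

# Input mirroring the JSON example above.
run_input = {
    "keywords": ["write"],
    "maxItems": 30,
}

# "username/medium-posts-search-scraper" is a hypothetical actor ID;
# replace it with the actual ID shown on the actor's page.
run = client.actor("username/medium-posts-search-scraper").call(run_input=run_input)

# Print the title and clap count of each scraped article.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], item["clapCount"])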

Output sample

The results are stored in a dataset, which you can always find in the Storage tab. You can choose the format in which to download your data: JSON, JSONL, Excel spreadsheet, HTML table, CSV, or XML. Here's an excerpt, in JSON, of the data you'd get if you apply the input parameters above:

[
    {
        "id": "5c510f575964",
        "title": "What Does It Mean to Write Women’s Fiction?",
        "subtitle": "A female writer’s musings on the challenges of an imposed niche",
        "url": "https://medium.com/wilder-with-yael-wolfe/what-does-it-mean-to-write-womens-fiction-5c510f575964",
        "readingTime": 9,
        "isLocked": true,
        "responseCount": 49,
        "clapCount": 2515,
        "visibility": "LOCKED",
        "isSeries": false,
        "firstPublishedAt": "2024-11-03T16:44:42.880Z",
        "latestPublishedAt": "2024-11-03T16:44:42.880Z",
        "previewImage": "https://miro.medium.com/v2/resize:fill:320:214/1*X1YZ5fw1y51ytJvBArdhMA.jpeg",
        "isPublished": true,
        "isLimitedState": false,
        "creator": {
            "id": "d02ca71a13d6",
            "name": "Y.L. Wolfe",
            "username": "yaelwolfe",
            "bio": "Adventuring & nesting in middle age. Welcome to my second act. | Newsletter: http://eepurl.com/gleDcD | Email: hello@ylwolfe.com"
        },
        "collection": {
            "name": "Wilder",
            "subscriberCount": 675,
            "description": "We will not be tamed."
        }
    },
    ...
]
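
Once you've exported the dataset, you can post-process the records with any tooling you like. As a small sketch (assuming the items were downloaded to a local JSON file named medium_articles.json, a hypothetical name), this ranks the scraped articles by clap count:

import json

# Load items previously exported from the dataset (file name is hypothetical).
with open("medium_articles.json", "r", encoding="utf-8") as f:
    articles = json.load(f)

# Sort by engagement and print the five most-clapped articles.
top = sorted(articles, key=lambda a: a.get("clapCount", 0), reverse=True)[:5]
for article in top:
    print(f'{article["clapCount"]:>6} claps  {article["title"]} ({article["url"]})')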

Frequently Asked Questions

Is it legal to scrape Medium articles or other public data?

Yes, if you're scraping publicly available data for personal or internal use. Always review Medium's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just enter your search keywords and run the scraper directly from your dashboard or the Apify actor page.

What data does it extract?

It extracts article titles, subtitles, URLs, author details, engagement metrics (claps and responses), reading time, and publication dates. You can export all of it to Excel or JSON.
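
For example, a finished run's items can be downloaded in these formats from the Apify API's dataset items endpoint. The sketch below assumes a dataset ID and API token, both placeholders:

import requests

# Placeholders: use the dataset ID from your run and your own API token.
DATASET_ID = "YOUR_DATASET_ID"
TOKEN = "YOUR_APIFY_TOKEN"

# The format parameter accepts values such as json, jsonl, csv, xlsx, html, or xml.
url = f"https://api.apify.com/v2/datasets/{DATASET_ID}/items"
resp = requests.get(url, params={"format": "xlsx", "token": TOKEN})
resp.raise_for_status()

# Save the Excel export locally.
with open("medium_articles.xlsx", "wb") as f:
    f.write(resp.content)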

Can I scrape multiple keywords or limit the number of results?

Yes. You can pass several keywords in a single run and cap how many articles are returned with the maxItems setting.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You’ll be guided to input a search term and get structured results. No setup needed!