GitHub Repository Scraper

This actor scrapes detailed information from GitHub repositories using reliable HTTP requests and HTML parsing. It extracts repository metadata including star counts, fork counts, topics/tags, license information, primary programming language, and last updated timestamps.

GitHub Repository Scraper for Apify

A Python-based Apify actor that scrapes GitHub repository information using requests and BeautifulSoup.
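
Under the hood this is plain HTTP plus HTML parsing rather than browser automation. The sketch below illustrates the approach with requests and BeautifulSoup; the element ids and selectors used here are assumptions about GitHub's current markup, not necessarily what apify_actor.py does.

# Minimal sketch of the scraping approach: fetch the repository page over HTTP
# and parse a few fields out of the HTML. The selectors are assumptions about
# GitHub's markup and may need adjusting.
import requests
from bs4 import BeautifulSoup

def scrape_repo(url: str) -> dict:
    response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    owner, name = url.rstrip("/").split("/")[-2:]

    # The repository description is exposed in a standard meta tag.
    description_tag = soup.find("meta", attrs={"name": "description"})
    description = description_tag["content"] if description_tag else None

    # Star counter element id is an assumption about current GitHub markup.
    stars_tag = soup.find(id="repo-stars-counter-star")
    stars = stars_tag.get_text(strip=True) if stars_tag else None

    # Topic tags are typically rendered as links with a "topic-tag" class.
    topics = [a.get_text(strip=True) for a in soup.select("a.topic-tag")]

    return {
        "url": url,
        "owner": owner,
        "name": name,
        "fullName": f"{owner}/{name}",
        "description": description,
        "stats": {"stars": stars},
        "topics": topics,
    }

if __name__ == "__main__":
    print(scrape_repo("https://github.com/microsoft/playwright"))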

Features

  • Extracts repository information including:
    • Full name (owner/repo)
    • Star count
    • Description
    • Primary programming language
    • Topics/tags
    • Last updated time
    • License information
    • Fork count
  • Written in Python using requests and BeautifulSoup for reliable scraping
  • Built for the Apify platform

Files

  • apify_actor.py - The main actor code for Apify deployment
  • requests_github_scraper.py - Standalone GitHub scraper (for local testing)
  • INPUT_SCHEMA.json - Input schema for the Apify actor
  • requirements.txt - Python dependencies
  • package.json - Actor metadata for Apify

Local Testing

  1. Install dependencies: pip install -r requirements.txt
  2. Run the local version: python requests_github_scraper.py
  3. Check results in the apify_storage directory (see the loading sketch below)
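
If the standalone run follows the usual Apify local storage layout, each scraped repository is written as a separate JSON file under apify_storage/datasets/default. A minimal loading sketch, assuming that layout:

# Load scraped items from the local Apify dataset directory and print a summary.
# The datasets/default path is an assumption based on the default storage layout;
# adjust it if your run writes elsewhere.
import json
from pathlib import Path

dataset_dir = Path("apify_storage/datasets/default")
items = [json.loads(p.read_text()) for p in sorted(dataset_dir.glob("*.json"))]

for item in items:
    stars = item.get("stats", {}).get("stars")
    print(f"{item.get('fullName')}: {stars} stars")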

Deploying to Apify

Prerequisites

  1. Create an Apify account if you don't have one
  2. Install the Apify CLI: npm install -g apify-cli
  3. Log in to your Apify account: apify login

Deployment Steps

  1. Initialize your project folder (if you haven't already):

    apify init github-scraper
  2. Modify the Dockerfile to use Python:

    FROM apify/actor-python:3.9

    # Copy source code
    COPY . ./

    # Install dependencies
    RUN pip install --no-cache-dir -r requirements.txt

    # Define how to run the actor
    CMD ["python3", "apify_actor.py"]
  3. Push your actor to Apify:

    apify push
  4. After pushing, your actor will be available in the Apify Console.

Running on Apify

  1. Navigate to your actor in the Apify Console
  2. Click on "Run" in the top-right corner
  3. Enter the GitHub repository URLs you want to scrape in the Input form
  4. Click "Run" to start the actor
  5. Access the results in the "Dataset" tab once the run is complete

Input Options

  • repoUrls (required): Array of GitHub repository URLs to scrape
  • sleepBetweenRequests (optional): Delay between requests in seconds (default: 3)

Example Input

{
  "repoUrls": [
    "https://github.com/microsoft/playwright",
    "https://github.com/facebook/react",
    "https://github.com/tensorflow/tensorflow"
  ],
  "sleepBetweenRequests": 5
}
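
Given an input like the one above, the actor entry point only needs to read it, scrape each URL, and push one item per repository to the default dataset. Below is a minimal sketch using the Apify Python SDK; scrape_repo is a hypothetical stand-in for the actual scraping code in apify_actor.py.

# Sketch of an Apify actor entry point: read the input, scrape each repository,
# and push the results to the default dataset.
import asyncio
from apify import Actor

def scrape_repo(url: str) -> dict:
    # Hypothetical placeholder; the real actor does the requests + BeautifulSoup work here.
    return {"url": url}

async def main() -> None:
    async with Actor:
        actor_input = await Actor.get_input() or {}
        repo_urls = actor_input.get("repoUrls", [])
        sleep_seconds = actor_input.get("sleepBetweenRequests", 3)

        for url in repo_urls:
            await Actor.push_data(scrape_repo(url))
            await asyncio.sleep(sleep_seconds)  # pause between requests

if __name__ == "__main__":
    asyncio.run(main())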

Output Format

The actor provides clean, well-structured data for each GitHub repository in the following format:

{
  "url": "https://github.com/microsoft/playwright",
  "name": "playwright",
  "owner": "microsoft",
  "fullName": "microsoft/playwright",
  "description": "Playwright is a framework for Web Testing and Automation. It allows testing Chromium, Firefox and WebKit with a single API.",
  "stats": {
    "stars": "71.2k",
    "forks": "4k"
  },
  "language": "TypeScript",
  "topics": [
    "electron",
    "javascript",
    "testing",
    "firefox",
    "chrome",
    "automation",
    "web",
    "test",
    "chromium",
    "test-automation",
    "testing-tools",
    "webkit",
    "end-to-end-testing",
    "e2e-testing",
    "playwright"
  ],
  "lastUpdated": "2025-03-17T17:00:47Z",
  "license": "Apache-2.0 license"
}

Output Fields:

  • url (String): The full URL of the GitHub repository
  • name (String): Repository name (without owner)
  • owner (String): Username or organization that owns the repository
  • fullName (String): Complete repository identifier (owner/name)
  • description (String): Repository description
  • stats.stars (String): Number of stars the repository has
  • stats.forks (String): Number of forks the repository has
  • language (String): Primary programming language
  • topics (Array): List of topics/tags associated with the repository
  • lastUpdated (String): ISO timestamp of the last update
  • license (String): Repository license information

This structured output format makes it easy to:

  • Display repository cards in your applications
  • Create data visualizations
  • Filter and sort repositories by various attributes (see the sketch below)
  • Export to other data formats
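
For example, the abbreviated counts GitHub shows ("71.2k", "4k") can be normalized to integers before sorting or filtering. A small post-processing sketch, assuming items shaped like the output above; the second sample item is purely illustrative:

# Convert abbreviated GitHub counts such as "71.2k" or "4k" to integers,
# then sort items by star count.
def parse_count(value: str) -> int:
    value = value.strip().lower().replace(",", "")
    multipliers = {"k": 1_000, "m": 1_000_000}
    if value and value[-1] in multipliers:
        return int(float(value[:-1]) * multipliers[value[-1]])
    return int(float(value))

items = [
    {"fullName": "microsoft/playwright", "stats": {"stars": "71.2k"}},
    {"fullName": "example/other-repo", "stats": {"stars": "950"}},
]

by_stars = sorted(items, key=lambda i: parse_count(i["stats"]["stars"]), reverse=True)
for item in by_stars:
    print(item["fullName"], parse_count(item["stats"]["stars"]))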

Frequently Asked Questions

Is it legal to scrape public GitHub data?

Yes, if you're scraping publicly available data for personal or internal use. Always review GitHub's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just enter the repository URLs in the input form and run the actor directly from the Apify Console.

What data does it extract?

It extracts the repository name, owner, description, star and fork counts, primary language, topics, license, and last updated timestamp. You can export all of it from the dataset as JSON, CSV, or Excel.

Can I scrape multiple repositories at once?

Yes. Pass as many repository URLs as you need in the repoUrls array, and use the optional sleepBetweenRequests setting to control the delay between requests.

How do I get started?

Use the Try Now button on this page to open the actor. You'll be guided to enter repository URLs and get structured results. No setup needed.