Instagram Reel Downloader

This actor scrapes Instagram Reels from provided URLs and retrieves metadata such as captions, likes, comments, and video URLs. The scraped data is saved in a dataset, making it easy to export the results as JSON, CSV, or HTML.

Steps to Add and Push the README.md File

  1. Create the README File:

    • In your actor’s directory (e.g., C:\Users\shubh\apify-actor-instaloader), create a new file named README.md.
  2. Paste the Content:

    • Open the file in your code editor (like VSCode or Notepad++).
    • Copy the content below and paste it into the file.

# Instagram Reel Downloader

## Overview

The **Instagram Reel Downloader** actor allows users to scrape Instagram reels using their URLs. It extracts useful information like:

- Captions
- Likes
- Comments
- Owner's Username
- Video URLs

The scraped data is saved to a dataset, which can be exported in formats like JSON, CSV, or Excel.

---

## Features

- **Scrape Instagram Reels**: Fetch metadata and video URLs from public Instagram reel links.
- **Exportable Data**: The output can be exported to JSON, CSV, or other formats.
- **User-Friendly Input/Output**: Accepts URLs in JSON format and saves results to Apify datasets.

---

## Input

The actor requires a JSON input in the following format:

```json
{
  "reelLinks": [
    "https://www.instagram.com/reel/XXXXXXX/",
    "https://www.instagram.com/reel/YYYYYYY/"
  ]
}
```

- `reelLinks`: Array of Instagram reel URLs to scrape (the reels must be public).
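
For orientation, reading that input inside an actor built on the Apify Python SDK looks roughly like the sketch below. Only the `reelLinks` key comes from the schema above; the function name and log messages are illustrative and may differ from this actor's actual `main.py`.

```python
from apify import Actor

async def main() -> None:
    async with Actor:
        # Read the JSON input supplied in the Apify Console
        # (or the local INPUT.json when run with `apify run`).
        actor_input = await Actor.get_input() or {}
        reel_links = actor_input.get("reelLinks", [])

        if not reel_links:
            Actor.log.warning("No reel links provided in 'reelLinks'; nothing to scrape.")
            return

        for url in reel_links:
            Actor.log.info(f"Will scrape reel: {url}")
```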

Output

The output is saved in the Apify default dataset and includes the following fields:

| Field | Description |
| --- | --- |
| caption | Caption text of the reel. |
| likes | Number of likes on the reel. |
| comments | Number of comments on the reel. |
| owner_username | Username of the reel owner. |
| video_url | Direct URL to the reel video file. |
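
The actor's directory name (apify-actor-instaloader) suggests it is built on Instaloader. Under that assumption, a record with these fields could be assembled and pushed to the dataset roughly as follows; treat this as a sketch rather than the actor's actual source.

```python
import instaloader
from apify import Actor

async def scrape_reel(shortcode: str) -> dict:
    # Instaloader exposes reel metadata through its Post class; the shortcode
    # is the "XXXXXXX" part of https://www.instagram.com/reel/XXXXXXX/.
    loader = instaloader.Instaloader()
    post = instaloader.Post.from_shortcode(loader.context, shortcode)

    record = {
        "caption": post.caption,
        "likes": post.likes,
        "comments": post.comments,          # comment count
        "owner_username": post.owner_username,
        "video_url": post.video_url,
    }

    # Save the record to the run's default dataset.
    await Actor.push_data(record)
    return record
```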

How to Use

  1. Input Data:

    • Provide a JSON input containing the Instagram reel links.
    • Example:
      {
        "reelLinks": [
          "https://www.instagram.com/reel/XXXXX/",
          "https://www.instagram.com/reel/YYYYY/"
        ]
      }
  2. Run the Actor:

    • Navigate to the Input tab.
    • Paste the input JSON.
    • Click Run.
  3. View and Export Results:

    • Go to the Dataset tab after the run.
    • Export the results in your desired format (JSON, CSV, or Excel).
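
You can also trigger runs programmatically. The snippet below uses the Apify API client for Python; the actor identifier `username/instagram-reel-downloader` is a placeholder, so substitute the actual actor name shown in the Console.

```python
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

run_input = {
    "reelLinks": [
        "https://www.instagram.com/reel/XXXXX/",
    ],
}

# Start the actor and wait for the run to finish.
run = client.actor("username/instagram-reel-downloader").call(run_input=run_input)

# Read the scraped records from the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["owner_username"], item["video_url"])
```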

Limitations

  • Only works with public Instagram reels.
  • The actor may hit rate limits depending on Instagram's restrictions.
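
If a single run covers many links, one simple mitigation (not necessarily built into the actor) is to pause between requests. A minimal sketch, with an illustrative delay value:

```python
import time

def scrape_all(shortcodes, scrape_one, delay_seconds=10):
    # Hypothetical helper: spaces out requests to reduce the chance of
    # hitting Instagram's rate limits; tune delay_seconds to your needs.
    results = []
    for code in shortcodes:
        results.append(scrape_one(code))
        time.sleep(delay_seconds)
    return results
```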

Development Notes

Local Testing

To test the actor locally:

  1. Navigate to the actor's directory:
    cd path/to/your/actor
  2. Run the actor:
    apify run --purge

Updating the Actor

To push updates:

apify push

Contribution

Feel free to fork the repository, improve the actor, and submit a pull request. Contributions are welcome!


Support

For questions or issues, contact support through the Apify console or open an issue in the repository.

  3. Save the File:

    • Save the file as `README.md` in the same directory as your `main.py`.
  4. Push to Apify:

    • Navigate to the directory in your terminal and run:

      apify push

Frequently Asked Questions

Is it legal to scrape public Instagram data?

Yes, if you're scraping publicly available data for personal or internal use. Always review Instagram's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just paste the reel URLs into the input and run the actor directly from your dashboard or its Apify actor page.

What data does it extract?

It extracts captions, like and comment counts, the owner's username, and direct video URLs. You can export all of it to JSON, CSV, or Excel.

Can I scrape multiple reels in one run?

Yes. Add as many URLs as you need to the reelLinks array; the actor processes every public reel in the list.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You’ll be guided to input your reel links and get structured results. No setup needed!