Fnac Data Scraping

Scrape Fnac product data (name, price, availability, EAN, SKU, images, seller, categories, delivery info, ratings, and reviews) from both search and product pages, with seamless proxy support.

Fnac.com Scraper

A powerful Apify actor for scraping product data from Fnac.com, supporting both search result pages and product pages. The scraper retrieves product names, prices, availability, EAN, SKU, images, seller details, categories, delivery info, ratings, and reviews. It leverages HTTPX for fast requests and BeautifulSoup for parsing.

Features

  • Scrape product & search pages – Extracts product details from both search listings and individual product pages.
  • Apify SDK integration – Uses Apify's request queue and dataset storage for efficient crawling.
  • Proxy support – Use Apify's default proxies or provide a custom one.
  • Custom depth control – Define how deep to crawl from the start URLs (see the example input after this list).
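
The options above come together in the Actor input. The sketch below is illustrative only: apart from start_urls and url_type (described under "How it Works"), the field names are assumptions, so check the Actor's input schema in Apify Console for the authoritative names.

    # Illustrative Actor input. Only start_urls and url_type are documented
    # in this README; the remaining field names are assumptions.
    run_input = {
        "start_urls": ["https://www.fnac.com/Product-Page"],
        "url_type": "product",                          # "search" or "product"
        "max_depth": 1,                                 # assumed name for the depth control
        "proxyConfiguration": {"useApifyProxy": True},  # assumed name for the proxy settings
    }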

How it Works

  1. The script reads input, including start_urls and url_type (search or product).
  2. It enqueues URLs into Apify's request queue and processes them one by one.
  3. The scraper fetches and parses each page using HTTPX and BeautifulSoup (a minimal sketch of this step follows the list).
  4. Extracted data is structured and stored in Apify's dataset.
  5. Pagination is followed automatically when scraping search result pages.
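
A minimal, standalone sketch of step 3 is shown below. The CSS selectors ("h1", "[itemprop=price]") are placeholders for illustration, not the Actor's real selectors, and the Actor itself additionally routes requests through the configured proxy and pushes results to the dataset.

    # Minimal fetch-and-parse sketch using HTTPX and BeautifulSoup.
    # The selectors are illustrative placeholders, not the Actor's real ones.
    import httpx
    from bs4 import BeautifulSoup

    def fetch_product(url: str) -> dict:
        response = httpx.get(url, follow_redirects=True, timeout=30)
        response.raise_for_status()

        soup = BeautifulSoup(response.text, "html.parser")
        name = soup.select_one("h1")
        price = soup.select_one("[itemprop=price]")
        return {
            "url": url,
            "name": name.get_text(strip=True) if name else None,
            "price_product": price.get_text(strip=True) if price else None,
        }

    if __name__ == "__main__":
        print(fetch_product("https://www.fnac.com/Product-Page"))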

Example Output

{
    "url": "https://www.fnac.com/Product-Page",
    "name": "Laptop XYZ",
    "ean": "1234567890123",
    "sku": "ABC123",
    "price_product_discount": "799.99",
    "price_product": "899.99",
    "availability": "In Stock",
    "description": "Powerful laptop with Intel i7 processor...",
    "reconditionn": "No",
    "etat": "New",
    "images": ["image1.jpg", "image2.jpg"],
    "seller": "Fnac",
    "categories": ["Computers", "Laptops"],
    "deliveryInfo__price": "Free",
    "deliveryInfo__date": "2-3 days",
    "rating": "4.5",
    "review_count": "120"
}

Getting Started

To run the scraper on Apify:

  1. Build the Actor in Apify Console.
  2. Run the Actor with your desired start_urls.
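
You can also start a run programmatically with the Apify API client for Python. The sketch below is illustrative: <ActorId> and <YOUR_APIFY_TOKEN> are placeholders, and the run_input fields follow the same assumptions as the input example above.

    # Start a run with the Apify API client for Python and print the results.
    # <ActorId> and <YOUR_APIFY_TOKEN> are placeholders.
    from apify_client import ApifyClient

    client = ApifyClient("<YOUR_APIFY_TOKEN>")

    run = client.actor("<ActorId>").call(run_input={
        "start_urls": ["https://www.fnac.com/Product-Page"],
        "url_type": "product",
    })

    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item.get("name"), item.get("price_product"))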

Local Development

  1. Install Apify CLI:

    npm install -g apify-cli
  2. Pull the Actor for local testing:

    apify pull <ActorId>
  3. Run the Actor locally:

    apify run

Frequently Asked Questions

Is it legal to scrape public product data?

Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just add your start URLs and run the scraper directly from your dashboard or the Actor's page on Apify.

What data does it extract?

It extracts product names, prices, availability, EAN, SKU, images, seller details, categories, delivery info, ratings, and reviews. You can export all of it to Excel or JSON (see the example below).
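
For example, a run's dataset can be downloaded through the Apify API in several formats. The sketch below is an illustration only: <datasetId> and <YOUR_APIFY_TOKEN> are placeholders, and format=xlsx can be swapped for json, csv, or another supported format.

    # Download a run's dataset as an Excel file via the Apify API.
    # <datasetId> and <YOUR_APIFY_TOKEN> are placeholders.
    import httpx

    url = (
        "https://api.apify.com/v2/datasets/<datasetId>/items"
        "?format=xlsx&token=<YOUR_APIFY_TOKEN>"
    )
    response = httpx.get(url, timeout=60)
    response.raise_for_status()

    with open("fnac_products.xlsx", "wb") as f:
        f.write(response.content)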

Can I scrape multiple pages or control the crawl depth?

Yes. Search result pages are paginated automatically, and you can control how deep the crawler goes from your start URLs through the input settings.

How do I get started?

You can use the Try Now button on this page to go to the scraper. You'll be guided to add your start URLs and get structured results. No setup needed!