Hey!

I created a Docker container with a Python app (with some help from DeepSeek), and it's currently on its first download run, working through the directory. It's slow because it has to scrape the listing and then download the files one by one. I also added some basic resume functionality: it checks the download folder for existing items and skips them.
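The resume check is nothing fancy, just an existence test before each download. Here's a simplified excerpt of that logic from the full script further down:

local_path = os.path.join(folder, item_name)
if os.path.exists(local_path):  # already downloaded on a previous run
    print(f"Skipping already downloaded file: {item_name}")
else:
    download_file(item_url, folder)  # otherwise fetch it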
Here's the basic code; I'll try to upload it to GitHub later for convenience:

1. Directory Structure

Create a directory and create all of these files in it, so your structure looks like this:
pcloud/
├── docker-compose.yml
├── Dockerfile
├── download_folder.py
└── downloaded_files/ (this will be created automatically)


2. Docker Compose File

Create a folder called pcloud and create a docker-compose.yml file inside it:
Run "nano docker-compose.yml", paste the following, and save.

services:
  pcloud-downloader:
    build: .
    container_name: pcloud-downloader
    volumes:
      - ./downloaded_files:/app/downloaded_files


3. Dockerfile

Ensure your Dockerfile is in the same directory and contains the following:


FROM python:3.9-slim

WORKDIR /app

COPY download_folder.py .

RUN pip install requests beautifulsoup4

RUN mkdir -p /app/downloaded_files

CMD ["python", "download_folder.py"]


4. Python Script

Create a file called download_folder.py:

Run "nano download_folder.py", paste the following contents into it, and save:

import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, unquote
import json
import time
import logging

base_url = "https://filedn.com/lgm4rog8XwDbvwRIvGBXqry/"
download_folder = "downloaded_files"
log_file = os.path.join(download_folder, "download_errors.log")
if not os.path.exists(download_folder):
    os.makedirs(download_folder)

logging.basicConfig(filename=log_file, level=logging.ERROR, format='%(asctime)s - %(message)s')

def download_file(url, folder, retries=10):
    """Download a file from the given URL and save it to the specified folder."""
    local_filename = os.path.join(folder, unquote(url.split('/')[-1]))
    for attempt in range(retries):
        try:
            with requests.get(url, stream=True) as r:
                r.raise_for_status()
                with open(local_filename, 'wb') as f:
                    for chunk in r.iter_content(chunk_size=8192):
                        f.write(chunk)
            print(f"Downloaded: {local_filename}")
            return local_filename
        except (requests.exceptions.ChunkedEncodingError, requests.exceptions.ConnectionError) as e:
            print(f"Attempt {attempt + 1} failed for {url}: {e}")
            if attempt < retries - 1:
                time.sleep(5)  # Wait 5 seconds before retrying
            else:
                logging.error(f"Failed to download {url} after {retries} attempts.")
                print(f"Failed to download {url} after {retries} attempts.")
                return None
        except Exception as e:
            logging.error(f"Error downloading {url}: {e}")
            print(f"Error downloading {url}: {e}")
            return None

def scrape_and_download(url, folder):
    """Scrape the folder and download all files and subfolders."""
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')

    # Extract the directLinkData JavaScript object
    script_tag = soup.find('script', text=lambda x: x and 'var directLinkData' in x)
    if not script_tag:
        print("Error: Could not find directLinkData in the HTML.")
        return

    # Extract the JSON part of the script
    script_text = script_tag.string
    json_data = script_text.split('var directLinkData=')[1].strip().rstrip(';')

    # Parse the JSON data
    try:
        data = json.loads(json_data)
    except json.JSONDecodeError as e:
        print(f"Error parsing JSON: {e}")
        return

    # Process each item in the folder
    for item in data['content']:
        item_name = unquote(item['urlencodedname'])
        item_url = urljoin(url, item['urlencodedname'])
        if 'size' in item:  # It's a file
            local_path = os.path.join(folder, item_name)
            if os.path.exists(local_path):  # Skip if file already exists
                print(f"Skipping already downloaded file: {item_name}")
                continue
            print(f"Downloading file: {item_name}")
            download_file(item_url, folder)
        else:  # It's a folder
            subfolder_path = os.path.join(folder, item_name)
            if not os.path.exists(subfolder_path):
                os.makedirs(subfolder_path)
            print(f"Entering folder: {item_name}")
            scrape_and_download(item_url + '/', subfolder_path)  # Recursively process subfolder

scrape_and_download(base_url, download_folder)
print("Download complete!")

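Optional: if you want to sanity-check what the scraper will see before letting the container run for hours, here's a rough one-off listing sketch. It reuses the same directLinkData parsing as the script above and just prints the top-level entries without downloading anything (run it locally with requests and beautifulsoup4 installed):

# list_folder.py - optional helper to preview the listing, not needed by the container
import json
import requests
from bs4 import BeautifulSoup
from urllib.parse import unquote

base_url = "https://filedn.com/lgm4rog8XwDbvwRIvGBXqry/"

response = requests.get(base_url)
soup = BeautifulSoup(response.text, 'html.parser')
script_tag = soup.find('script', text=lambda x: x and 'var directLinkData' in x)
if not script_tag:
    raise SystemExit("Could not find directLinkData in the HTML.")

data = json.loads(script_tag.string.split('var directLinkData=')[1].strip().rstrip(';'))

# Print each entry with a FILE/DIR marker so you can eyeball what the downloader will do
for item in data['content']:
    kind = "FILE" if 'size' in item else "DIR"
    print(kind, unquote(item['urlencodedname']))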

5. Run Docker Compose

Navigate to the pcloud directory and run the following command:

docker-compose up
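If you'd rather have it keep going in the background, you can start it detached and follow the logs instead:

docker-compose up -d
docker-compose logs -f pcloud-downloader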
