Python Program to Download File from URL
We can use the requests library to download a file from a URL by calling requests.get(url) and saving the response content to disk with open(filename, 'wb').
Examples
How to Think About It
Algorithm
1. Send a GET request to the URL with requests.get.
2. Call raise_for_status() to check for HTTP errors.
3. Open the destination file in write-binary ('wb') mode.
4. Write response.content to the file.
5. Print a success message.
Code
```python
import requests

def download_file(url, filename):
    response = requests.get(url)
    response.raise_for_status()  # Check for request errors
    with open(filename, 'wb') as f:
        f.write(response.content)
    print(f"File '{filename}' downloaded successfully.")

# Example usage
download_file('https://www.w3.org/TR/PNG/iso_8859-1.txt', 'textfile.txt')
```
Dry Run
Let's trace downloading 'https://www.w3.org/TR/PNG/iso_8859-1.txt' to 'textfile.txt' through the code.
Send GET request
Call requests.get with URL 'https://www.w3.org/TR/PNG/iso_8859-1.txt', response received with status 200.
Check for errors
response.raise_for_status() confirms no HTTP errors.
Open file
Open 'textfile.txt' in write-binary mode.
Write content
Write response.content (file data) into 'textfile.txt'.
Close file and print
File is closed automatically; print success message.
| Step | Action | Value |
|---|---|---|
| 1 | GET request URL | https://www.w3.org/TR/PNG/iso_8859-1.txt |
| 2 | Response status | 200 OK |
| 3 | Open file | textfile.txt (wb) |
| 4 | Write bytes | response.content (file data) |
| 5 | Print message | File 'textfile.txt' downloaded successfully. |
Why This Works
Step 1: Send HTTP GET request
The requests.get(url) function fetches the file data from the internet.
Step 2: Check for errors
Calling raise_for_status() raises an HTTPError if the server returns an error status (for example 404 or 500), so the program stops instead of saving an error page as the downloaded file.
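To see this behavior without touching the network, we can build a Response object by hand and set its status code; this is only an illustrative sketch, but a real response returned by requests.get behaves the same way:

```python
import requests

# Construct a Response manually so no network call is needed
r = requests.Response()

r.status_code = 200
r.raise_for_status()        # 2xx status: returns None, nothing is raised

r.status_code = 404
try:
    r.raise_for_status()    # 4xx/5xx status: raises HTTPError
except requests.exceptions.HTTPError as err:
    print(f"Caught: {err}")
```

In the real program the except block would typically log the error and skip writing the file, so a failed request never leaves a partial download on disk.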
Step 3: Write file in binary mode
Opening the file with 'wb' mode allows saving any file type exactly as received.
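A quick sketch of why binary mode matters: response.content is a bytes object, and a file opened in text mode refuses bytes outright (the filenames and sample bytes below are throwaway examples, not part of the program above):

```python
data = b"\x89PNG\r\n"  # sample binary bytes (the start of a PNG file header)

# Binary mode accepts bytes and writes them exactly as-is
with open('demo.bin', 'wb') as f:
    f.write(data)

# Text mode expects str, so passing bytes raises TypeError
try:
    with open('demo.txt', 'w') as f:
        f.write(data)
except TypeError as err:
    print(f"Caught: {err}")
```

Binary mode also skips newline translation, so the saved file is byte-for-byte identical to what the server sent.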
Step 4: Save content to file
Writing response.content stores the downloaded bytes into the file.
Alternative Approaches
```python
import urllib.request

url = 'https://www.w3.org/TR/PNG/iso_8859-1.txt'
filename = 'textfile_urllib.txt'
urllib.request.urlretrieve(url, filename)
print(f"File '{filename}' downloaded successfully.")
```
```python
import requests

def download_file_stream(url, filename):
    with requests.get(url, stream=True) as r:
        r.raise_for_status()  # Check for request errors
        with open(filename, 'wb') as f:
            # Write the response in 8 KB chunks instead of loading it all into memory
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    print(f"File '{filename}' downloaded successfully.")

# Example usage
download_file_stream('https://www.w3.org/TR/PNG/iso_8859-1.txt', 'textfile_stream.txt')
```
Complexity: O(n) time, O(n) space
Time Complexity
The time depends on the file size n because the program reads and writes all bytes once.
Space Complexity
The program stores the entire file content in memory before writing, so space is proportional to file size n.
Which Approach is Fastest?
Streaming with chunked writes has the same O(n) time but uses constant memory, making it the best choice for large files; urllib.request.urlretrieve is the simplest option (no third-party library) but the least flexible.
| Approach | Time | Space | Best For |
|---|---|---|---|
| requests.get with content | O(n) | O(n) | Small to medium files, simple code |
| requests with streaming | O(n) | O(1) | Large files, memory efficient |
| urllib.request.urlretrieve | O(n) | O(n) | Quick scripts, no extra libraries |
Always call raise_for_status() before writing, to avoid saving incomplete or error responses as files. Always open the output file in 'wb' mode; forgetting binary mode can cause corrupted downloads.