# FAQ: Why is rclone slow?

Common reasons for slow transfers and how to fix them.
## Quick diagnostics

Use rclone's built-in speed test to measure upload and download speeds to your remote:

```sh
rclone test speed remote: -q
```

For more detailed output, omit the `-q` flag.
## Basics

### Try these first

- ✅ Increase transfers: `--transfers 16`
- ✅ Use fast-list: `--fast-list`
- ✅ Increase chunk size: `--drive-chunk-size 128M`
- ✅ Add more checkers: `--checkers 16`
- ✅ Check your internet: run a speed test
- ✅ Try ethernet: if on WiFi
- ✅ Check time of day: ISP throttling?
- ✅ Update rclone: `rclone selfupdate`
### When it's not rclone's fault
- Your internet: Most common limitation
- Provider limits: Free tiers are often throttled
- Peak hours: Evenings are slower
- Endpoint distance: Far servers = higher latency
- Hardware: Old computer/NAS may be the limit
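The endpoint-distance point can be roughed out: if each small file costs about one API round trip, a single stream moves at most 1/RTT files per second, so the ceiling is roughly transfers ÷ RTT. A back-of-envelope sketch (the RTT and transfer count are illustrative assumptions, not measured values):

```sh
# Rough ceiling on small-file rate, assuming ~1 API round trip per file
rtt_ms=150        # assumed latency to a distant endpoint
transfers=16      # assumed parallel streams (--transfers 16)
awk -v rtt="$rtt_ms" -v t="$transfers" \
  'BEGIN { printf "~%.0f files/sec max\n", t / (rtt / 1000) }'
```

Halving the latency (a nearer region) doubles that ceiling without touching any rclone flags.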
## Realistic upload speeds
| Connection | Theoretical | Real-world rclone |
|---|---|---|
| 100 Mbps | 12.5 MB/s | 8-10 MB/s |
| 1 Gbps | 125 MB/s | 60-100 MB/s |
| 10 Gbps | 1250 MB/s | 300-600 MB/s |
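The theoretical column is just the bits-to-bytes conversion: divide the link speed in Mbps by 8. The real-world column sits lower because of TLS, HTTP, and provider API overhead. As a quick check:

```sh
# Convert link speed (megabits/s) to a theoretical byte rate: divide by 8
link_mbps=1000
awk -v mbps="$link_mbps" 'BEGIN { printf "%.1f MB/s theoretical\n", mbps / 8 }'
```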
## Common causes & fixes

### Conservative defaults
rclone's defaults are deliberately conservative. Optimize for your connection by raising the number of transfers and checkers and enabling `--fast-list`:
```sh
rclone copy /local remote: \
  --transfers 32 \
  --checkers 16 \
  --fast-list
```

### Many small files
Each file needs a separate API call, so increase parallelization.
```sh
# For many small files (photos, documents)
rclone copy /local remote: \
  --transfers 32 \
  --checkers 16 \
  --tpslimit 12 \
  --fast-list
```

Or bundle them into a single archive first:
```sh
# Bundle small files
tar czf archive.tar.gz small-files/
rclone copy archive.tar.gz remote:
```

### Large files on slow connection
Use chunked transfers to upload large files faster.
```sh
# For Google Drive
rclone copy /local remote: \
  --drive-chunk-size 128M \
  --transfers 4

# For S3/B2
rclone copy /local remote: \
  --s3-chunk-size 128M \
  --s3-upload-concurrency 4
```

See below for optimizations specific to some providers.
## Provider-specific optimizations

### Google Drive

- `--drive-chunk-size 128M` — larger chunks for faster uploads
- `--drive-acknowledge-abuse` — bypass false-positive malware warnings
- `--drive-impersonate user@domain.com` — for Google Workspace (higher limits)

### Dropbox

- `--dropbox-chunk-size 48M` — optimal chunk size for Dropbox
- `--dropbox-skip-hash-check` — faster but less safe

### OneDrive

- `--onedrive-chunk-size 100M` — larger chunks for faster uploads

### S3/B2/Wasabi

- `--s3-chunk-size 128M` — larger chunks for big files
- `--s3-upload-concurrency 10` — parallel chunk uploads
- `--s3-disable-checksum` — skip checksums for speed
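One caveat when raising the S3 settings above: chunked uploads buffer roughly chunk size × upload concurrency in memory per in-flight transfer, so large chunks plus high concurrency can consume a lot of RAM. A quick illustration (the transfer count here is an assumed example):

```sh
# Approx. buffer RAM ≈ chunk size × upload concurrency × simultaneous transfers
chunk_mb=128; concurrency=10; transfers=4
echo "$((chunk_mb * concurrency * transfers)) MB of buffer RAM"
```

If that number exceeds what your machine can spare, lower `--s3-upload-concurrency` or `--transfers` before shrinking the chunk size.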
## Advanced optimizations

### Mount + Rsync

For many small files:

```sh
# Mount the remote
rclone mount remote: /mnt/remote --daemon

# Use rsync for better small-file handling
rsync -av --progress /local/files/ /mnt/remote/
```

### VFS Caching
```sh
# Cache for repeated access
rclone mount remote: /mnt/remote \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-read-chunk-size 128M
```

### Google Drive Service Accounts
```sh
# Rotate between service accounts to avoid the 750 GB/day per-account limit:
# rerun with sa2.json, sa3.json, ... once an account hits its quota
rclone copy /local remote: \
  --drive-service-account-file sa1.json
```

### Compression
```sh
# Compress on-the-fly (for text files)
# Note: requires remote: to be a compress backend remote
rclone copy /local remote: \
  --compress-level 9
```

## Getting help
If still slow after trying everything:
```sh
# Generate debug info
rclone version
rclone test info remote:
rclone copy /test remote: -vv --dump headers

# Post on the forum with:
# - Your config (redacted)
# - Debug output
# - Internet speed
# - What you've tried
```

Remember: "slow" is relative. What matters is whether it's slower than it should be for your connection!