FAQ

Why is rclone slow?

Common reasons for slow transfers and how to fix them

Quick diagnostics

rclone has no dedicated speed-test command, so the quickest check is to copy a single large file and watch the transfer rate:

# -P shows live progress, including current throughput
rclone copy /path/to/large-file remote:speedtest -P

For more detailed output, add -vv.

Basics

Try these first

  1. Increase transfers: --transfers 16
  2. Use fast-list: --fast-list
  3. Increase chunk size: --drive-chunk-size 128M
  4. Add more checkers: --checkers 16 (flags 1-4 are combined in the example after this list)
  5. Check your internet: run a speed test
  6. Try Ethernet: WiFi is often the bottleneck
  7. Check the time of day: ISPs may throttle at peak hours
  8. Update rclone: rclone selfupdate

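A minimal combined starting point (remote: and /local are placeholders; --drive-chunk-size only matters for Google Drive, so substitute your backend's chunk-size flag or drop it):

rclone copy /local remote: \
  --transfers 16 \
  --checkers 16 \
  --drive-chunk-size 128M \
  --fast-list \
  -P
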
When it's not rclone's fault

  • Your internet: Most common limitation
  • Provider limits: Free tiers are often throttled
  • Peak hours: Evenings are slower
  • Endpoint distance: Far servers = higher latency
  • Hardware: Old computer/NAS may be the limit

Realistic upload speeds

Connection   Theoretical   Real-world rclone
100 Mbps     12.5 MB/s     8-10 MB/s
1 Gbps       125 MB/s      60-100 MB/s
10 Gbps      1250 MB/s     300-600 MB/s

(Theoretical = line rate ÷ 8 bits per byte; real-world throughput is lower due to protocol overhead, API latency, and provider limits.)

Common causes & fixes

Conservative defaults

rclone's defaults are deliberately conservative. Optimize for your connection by increasing the number of transfers and checkers and enabling --fast-list:

rclone copy /local remote: \
  --transfers 32 \
  --checkers 16 \
  --fast-list

Many small files

Each file needs a separate API call, so increase parallelization.

# For many small files (photos, documents).
# --tpslimit caps API calls per second so the extra
# parallelism doesn't trip provider rate limits.
rclone copy /local remote: \
  --transfers 32 \
  --checkers 16 \
  --tpslimit 12 \
  --fast-list

Or zip them first:

# Bundle small files
tar czf archive.tar.gz small-files/
rclone copy archive.tar.gz remote:

Large files on slow connection

Use chunked transfers: larger chunks mean fewer round trips, and parallel chunk uploads keep the connection full. Note that each in-flight chunk is buffered in memory.

# For Google Drive
rclone copy /local remote: \
  --drive-chunk-size 128M \
  --transfers 4

# For S3/B2
rclone copy /local remote: \
  --s3-chunk-size 128M \
  --s3-upload-concurrency 4

See the provider-specific optimizations below.

Provider-specific optimizations

Google Drive

  • --drive-chunk-size 128M — larger chunks for faster uploads
  • --drive-acknowledge-abuse — bypass false-positive malware warnings
  • --drive-impersonate user@domain.com — for Google Workspace (higher limits)
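
A sketch putting these together (gdrive: and the address are placeholders; --drive-impersonate only applies when the remote uses a service account, and memory use is roughly --transfers × --drive-chunk-size):

rclone copy /local gdrive: \
  --drive-chunk-size 128M \
  --drive-impersonate user@domain.com \
  --transfers 4 \
  --fast-list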

Dropbox

  • --dropbox-chunk-size 48M — the default; can be raised toward the 150M maximum for large files
  • --dropbox-batch-mode async — batch uploads, much faster for many small files
  • --ignore-checksum — skip post-transfer hash checks (global flag; faster but less safe)
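
A sketch for a bulk upload (dropbox: is a placeholder remote name):

# async batching commits uploads in the background,
# which helps most with many small files
rclone copy /local dropbox: \
  --dropbox-batch-mode async \
  --transfers 8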

OneDrive

  • --onedrive-chunk-size 100M — larger chunks for faster uploads
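
For example (onedrive: is a placeholder; the chunk size must be a multiple of 320k, which 100M is):

rclone copy /local onedrive: \
  --onedrive-chunk-size 100M \
  --transfers 4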

S3/B2/Wasabi

  • --s3-chunk-size 128M — larger chunks for big files
  • --s3-upload-concurrency 10 — parallel chunk uploads
  • --s3-disable-checksum — skip checksums for speed
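
These flags cover B2 and Wasabi when they are reached through their S3-compatible APIs; the native B2 backend has its own equivalents (--b2-chunk-size, --b2-upload-concurrency). A sketch for large objects (s3: is a placeholder remote):

# memory use is roughly transfers × chunk size × upload concurrency
rclone copy /local s3: \
  --s3-chunk-size 128M \
  --s3-upload-concurrency 10 \
  --transfers 4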

Advanced optimizations

Mount + Rsync

For many small files:

# Mount the remote
rclone mount remote: /mnt/remote --daemon

# Use rsync for better small file handling
rsync -av --progress /local/files/ /mnt/remote/

VFS Caching

# Cache for repeated access
rclone mount remote: /mnt/remote \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-read-chunk-size 128M

Google Drive Service Accounts

# Each service account has its own 750 GB/day upload quota,
# so switching key files between runs spreads the load
rclone copy /local remote: \
  --drive-service-account-file sa1.json
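
A minimal rotation sketch, assuming keys named sa1.json, sa2.json, … sit in the current directory:

# try the next key whenever the current one hits its quota
for sa in sa*.json; do
  rclone copy /local remote: --drive-service-account-file "$sa" && break
done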

Compression

--compress-level is an option of the compress backend, which wraps another remote; it has no effect on a plain copy. Wrap your target in a compress remote first (rclone config), then copy to the wrapper:

# "compressed:" is a compress remote wrapping remote:
# (worthwhile for text; already-compressed media won't shrink)
rclone copy /local compressed: --compress-level 9
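
The wrapper can also be created non-interactively; a sketch, assuming the names above (the compress backend's parameters are remote and level):

rclone config create compressed compress remote=remote: level=9

With the level stored in the config, the --compress-level flag on the copy becomes unnecessary.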

Getting help

If transfers are still slow after trying everything:

# Generate debug info
rclone version
rclone test info remote:
rclone copy /test remote: -vv --dump headers

# Post on forum with:
# - Your config (redacted)
# - Debug output
# - Internet speed
# - What you've tried

Remember: "slow" is relative. What matters is whether it's slower than it should be for your connection!
