
I have been using sshfs to work remotely, but it is really slow and annoying, particularly when I use Eclipse on it.

Is there any faster way to mount the remote file system locally? My number-one priority is speed.

The remote machine is Fedora 15, the local machine is Ubuntu 10.10. I can also use Windows XP locally if necessary.


14 Answers


sshfs uses the SSH File Transfer Protocol (SFTP), which means everything is encrypted.

If you just mount via NFS, it is of course faster, because nothing is encrypted.

Are you trying to mount volumes on the same network? Then use NFS.
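For reference, a minimal NFS setup on a trusted LAN might look roughly like this (the export path, network range, and hostname below are placeholders, and NFS packaging details vary by distro):

```shell
# On the server: export the directory by adding a line to /etc/exports, e.g.
#   /home/me/project 192.168.1.0/24(rw,async)
# then reload the export table:
sudo exportfs -ra

# On the client: mount the export over NFS
sudo mkdir -p /mnt/project
sudo mount -t nfs server:/home/me/project /mnt/project
```

Note this trades away encryption entirely, which is exactly why it is faster on a local network.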

  • It's not slow because of the encryption; it's slow because it's FUSE and it keeps checking the file system state.
    – w00t
    Commented May 19, 2013 at 13:40
  • @w00t I don't think it's FUSE slowing it down, but rather the encryption. Changing the encryption to arcfour sped it up for me, whereas using scp was just as slow as sshfs.
    – Sparhawk
    Commented Sep 28, 2013 at 4:57
  • @Sparhawk there's a difference between throughput and latency. FUSE gives you pretty high latency because it has to check the filesystem state a lot using some pretty inefficient means. arcfour gives you good throughput because the encryption is simpler. In this case latency matters most, because that is what makes the editor slow at listing and loading files.
    – w00t
    Commented Sep 29, 2013 at 11:16
  • @w00t Ah okay. Good points.
    – Sparhawk
    Commented Sep 29, 2013 at 12:42
  • I think NFS traffic can be tunnelled over an SSH pipe, so that should give the best performance if you can use NFS on the remote host. The problem with sshfs is that it's technically running SFTP, and FUSE just pretends to be a POSIX-compatible filesystem on top of that protocol. The SFTP protocol doesn't support any fancy features, so any protocol layered over it ends up with pretty poor overall performance. If you replace SFTP with NFS and keep the encryption, it will be much faster. Commented Sep 18, 2020 at 8:06

If you need to improve the speed of sshfs connections, try these options:

-oauto_cache,reconnect,defer_permissions,noappledouble,nolocalcaches,no_readahead

The command would be:

sshfs remote:/path/to/folder local -oauto_cache,reconnect,defer_permissions
  • Thanks, worked for me! I had to remove defer_permissions though (unknown option). Commented Mar 10, 2015 at 11:35
  • Won't nolocalcaches decrease performance by forcing a lookup on every operation? Doesn't this contradict auto_cache?
    – earthmeLon
    Commented Jun 15, 2015 at 18:13
  • The way I read the docs, nolocalcaches only disables the kernel side of things; sshfs still has its own cache. I could imagine that the kernel-level checks are tuned for "real" file systems and as such are more extensive. On the sshfs side, "cache_timeout" looks promising, too. Here's a list: saltycrane.com/blog/2010/04/notes-sshfs-ubuntu ... lots of good stuff. :-)
    – Mantriur
    Commented Oct 29, 2015 at 17:21
  • nolocalcaches and defer_permissions don't seem valid (anymore?) on Debian Jessie.
    – Mantriur
    Commented Oct 29, 2015 at 17:31
  • Why no_readahead?
    – studgeek
    Commented Aug 9, 2016 at 0:40

Besides the already proposed Samba/NFS solutions, which are perfectly valid, you can also get some speed boost while sticking with sshfs by using quicker encryption (authentication remains as safe as usual, but the transferred data itself is easier to decrypt): supply the -o Ciphers=arcfour option to sshfs. This is especially useful if your machine has a weak CPU.

  • -oCipher=arcfour made no difference in my tests with a 141 MB file created from random data.
    – Sparhawk
    Commented Sep 28, 2013 at 4:39
  • That's because there were multiple typos in the command. I've edited it. I noticed a 15% speedup from my Raspberry Pi server. (+1)
    – Sparhawk
    Commented Sep 28, 2013 at 4:56
  • The chacha20-poly1305@openssh.com cipher is also an option worth considering now that arcfour is obsolete. ChaCha20 is faster on ARM processors than AES, but far worse on x86 processors with AES instructions (which all modern desktop CPUs have as standard these days). klingt.net/blog/ssh-cipher-performance-comparision You can list supported ciphers with "ssh -Q cipher".
    – TimSC
    Commented Nov 20, 2017 at 20:48
  • This is not doable anymore, as the fastest ciphers (e.g. arcfour) have now been permanently removed from recent SSH versions. Commented Jan 7, 2022 at 10:04

I do not have any alternatives to recommend, but I can provide suggestions for how to speed up sshfs:

sshfs -o cache_timeout=115200 -o attr_timeout=115200 ...

This should avoid some of the round trip requests when you are trying to read content or permissions for files that you already retrieved earlier in your session.

sshfs simulates deletes and changes locally, so new changes made on the local machine should appear immediately, despite the large timeouts, as cached data is automatically dropped.

But these options are not recommended if the remote files might be updated without the local machine knowing, e.g. by a different user, or a remote ssh shell. In that case, lower timeouts would be preferable.

Here are some more options I experimented with, although I am not sure whether any of them made a difference:

sshfs_opts="-o auto_cache -o cache_timeout=115200 -o attr_timeout=115200   \
-o entry_timeout=1200 -o max_readahead=90000 -o large_read -o big_writes   \
-o no_remote_lock"

You should also check out the options recommended by Meetai in his answer.

Recursion

The biggest problem in my workflow is when I try to read many folders, for example in a deep tree, because sshfs performs a round trip request for each folder separately. This may also be the bottleneck that you experience with Eclipse.

Making requests for multiple folders in parallel could help with this, but most apps don't do that: they were designed for low-latency filesystems with read-ahead caching, so they wait for one file stat to complete before moving on to the next.

Precaching

But something sshfs could do would be to look ahead at the remote file system, collect folder stats before I request them, and send them to me when the connection is not immediately occupied. This would use more bandwidth (from lookahead data that is never used) but could improve speed.

We can force sshfs to do some read-ahead caching, by running this before you get started on your task, or even in the background when your task is already underway:

find project/folder/on/mounted/fs > /dev/null &

That should pre-cache all the directory entries, reducing some of the later overhead from round trips. (Of course, you need to use the large timeouts like those I provided earlier, or this cached data will be cleared before your app accesses it.)

But that find will take a long time. Like other apps, it waits for the results from one folder before requesting the next one.

It might be possible to reduce the overall time by asking multiple find processes to look into different folders. I haven't tested to see if this really is more efficient. It depends whether sshfs allows requests in parallel. (I think it does.)

find project/folder/on/mounted/fs/A > /dev/null &
find project/folder/on/mounted/fs/B > /dev/null &
find project/folder/on/mounted/fs/C > /dev/null &

If you also want to pre-cache file contents, you could try this:

tar c project/folder/on/mounted/fs > /dev/null &

Obviously this will take much longer, will transfer a lot of data, and requires you to have a huge cache size. But when it's done, accessing the files should feel nice and fast.

  • If you want to read file contents to get them into the cache, wc -l works well: it just counts occurrences of the newline byte (0x0A) in the file, so it simply reads the file once without outputting the contents. Commented Mar 26, 2020 at 15:04

I found that turning off my zsh theme, which was checking git file status, helped enormously: just entering the directory was taking 10+ minutes. Likewise, I turned off the git status checkers in Vim.
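If you use oh-my-zsh, you can disable the status check for just the slow repository instead of switching themes entirely; oh-my-zsh's git prompt helpers honor the per-repo settings below (the throwaway repo here is only for demonstration):

```shell
# Demo in a throwaway repo; in practice, run the git config commands
# inside the sshfs-mounted repository that is slow:
repo=$(mktemp -d)
cd "$repo" && git init -q

# Tell oh-my-zsh's git prompt helpers to skip status checks in this repo:
git config --add oh-my-zsh.hide-status 1
git config --add oh-my-zsh.hide-dirty 1

git config --get oh-my-zsh.hide-status   # prints 1
```

For Vim, if the slowdown comes from a plugin like vim-gitgutter, it can similarly be switched off with `let g:gitgutter_enabled = 0` in your vimrc (assuming you use that plugin).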

  • Wow, this is a really good tip! Commented Aug 7, 2019 at 5:16
  • Yes, this was the one, thanks!
    – Emile 81
    Commented May 4, 2021 at 10:08

After searching and trial, I found that adding -o Compression=no speeds it up a lot. The delay may be caused by the compression and decompression process. Besides, using 'Ciphers=aes128-ctr' seems faster than the others; some posts have run experiments on this. My command then looks something like this:

sshfs -o allow_other,transform_symlinks,follow_symlinks,IdentityFile=/Users/maple/.ssh/id_rsa -o auto_cache,reconnect,defer_permissions -o Ciphers=aes128-ctr -o Compression=no [email protected]:/home/maple ~/mntpoint

  • Funnily enough, Compression=yes seems to speed it up for me, while all the others didn't seem to make a difference.
    – Fuseteam
    Commented Jun 17, 2021 at 13:50

SSHFS is really slow because it transfers the file contents even when it does not have to (e.g. when doing cp). I reported this upstream and to Debian, but got no response. :/

  • It is efficient with mv. Unfortunately, when you run cp locally, FUSE only sees requests to open files for reading and writing. It does not know that you are making a copy of a file; to FUSE it looks no different from a general file write. So I fear this cannot be fixed unless the local cp is made more FUSE-aware/FUSE-friendly. (Or FUSE might be able to send block hashes instead of entire blocks when it suspects a cp, like rsync does, but that would be complex and might slow other operations down.) Commented Sep 8, 2016 at 5:00

I've been testing various tools on macOS 12.1 on an M1 Mac and wanted to share some possibly helpful results.

Short Version: Try using rclone mount instead of sshfs. This enabled me to get full gigabit speed both up and down.

A little more about my experience and testing:

Setup: M1 Mac connected over gigabit ethernet to a server running Rocky 8, with a big high-speed RAID filesystem. Speeds below are in MB/s, so wire speed would be about 125 MB/s (1 Gb/s).

For me, the default settings of sshfs gave ~30 MB/s down from the server and the full 120 MB/s up. Using the option -o Ciphers=aes128-ctr increased that to about 50 MB/s down (arcfour is no longer supported in OpenSSH, so it didn't work).

Using rclone mount, I was able to get full 120+ MB/s both up and down, and the mount has otherwise worked great so far as well.
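For reference, a minimal rclone-over-SFTP setup looks roughly like this (the remote name myserver, the host, the user, and the paths are all placeholders; see rclone's own documentation for the full set of mount flags):

```shell
# Create an SFTP remote named "myserver" (host and user are placeholders):
rclone config create myserver sftp host example.com user me

# Mount the remote home directory; --daemon backgrounds the mount,
# and a write-through VFS cache helps apps that expect normal file semantics:
mkdir -p ~/mnt/myserver
rclone mount myserver:/home/me ~/mnt/myserver --daemon --vfs-cache-mode writes
```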

Most other non-mount tools I tried gave me roughly full wire speed up and down (Forklift, command line sftp, filezilla, rclone copy, rsync).

Cyberduck gave me very slow performance up and down, ~15 MB/s; I suspect this is due to compression that I have not been able to figure out how to turn off.

  • One current issue with rclone is that it doesn't support symlinks: they can either be ignored, or treated as the files they're pointing to. That's probably a non-issue for media files, but it breaks my attempted use as a temp folder :(
    – Warbo
    Commented Aug 24, 2023 at 18:09

NFS should be faster. How remote is the filesystem? If it's over the WAN, you might be better off just syncing the files back and forth, as opposed to direct remote access.


Either NFS or Samba if you have large files. Using NFS with something like 720p movies is really a PITA. Samba will do a better job, though I dislike Samba for a number of other reasons and wouldn't usually recommend it.

For small files, NFS should be fine.


New option: max_conns

Since version 3.7.0 sshfs includes an option called max_conns.

This option has the potential to greatly improve your performance.

Check your sshfs version with the following command:

sshfs -V

If your version is >= 3.7.0, then consider adding the parameter below:

-o max_conns=4

where 4 is the number of cores on your machine (you can check this with the command below):

# To retrieve the number of cores:
grep -c ^processor /proc/cpuinfo

NOTE

This might have an impact on the CPU load used by ssh / sshfs. If you do not want to saturate your CPU for disk access, consider using a lower connection count.

  • Note that on Linux you have nproc which gives you the number of processors without the need for /proc. Commented Mar 25 at 20:50

I use plain SFTP. I did it primarily to cut out unneeded authentication, but I am sure that dropping the layer of encryption helps too. (Yes, I still need to benchmark it.)

I describe a trivial usage here: https://www.quora.com/How-can-I-use-SFTP-without-the-overhead-of-SSH-I-want-a-fast-and-flexible-file-server-but-I-dont-need-encryption-or-authentication
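One way to sketch this (an assumption about the setup, not the exact method from the linked post): expose OpenSSH's sftp-server on a raw TCP port and let sshfs connect to it with its directport option, which bypasses ssh entirely. The port, the paths, and the sftp-server location are placeholders, and this is only sane on a trusted network since nothing is encrypted or authenticated:

```shell
# On the server: serve sftp-server on a raw TCP port via socat
# (the sftp-server path varies by distro; this one is a guess):
socat TCP-LISTEN:7777,reuseaddr,fork EXEC:/usr/lib/openssh/sftp-server

# On the client: mount directly over that port, skipping ssh:
sshfs -o directport=7777 server:/home/me ~/mnt/server
```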


sshfs is for sure not a very performant way to mount a remote file system in general and other options are often faster. However, if you experience incredibly sluggish performance it might be that some I/O is happening over the SSH connection that you are not aware of.

To investigate what is happening, you can mount with sshfs -d, which runs sshfs in the foreground and displays debugging information, so you can see what kind of requests are being made to the remote host. This will help you understand what is happening and see whether any of that I/O should be happening in the first place.

This is not directly relevant to the question, but here is what my problem was specifically: a simple ls was taking 8 seconds to complete. Using the debug mode, I found that during the ls command there were requests like /libselinux.so.1 and /libpcre.so.3, etc. This made no sense to me, until I figured out that my LD_LIBRARY_PATH variable contained a trailing :, so it essentially contained an empty entry, which caused shared libraries to be looked up over SSHFS.


@Meetai's answer was pure magic for me...

I'm on Linux Mint Cinnamon 20.0 right now. Just to add on to that answer, here is a little script I attached Meetai's solution to: it pops up a list of hosts from the SSH config file to select from. My two cents.

#!/bin/bash

# List host aliases from the user's ssh config file, skipping wildcard entries
hosts="$(grep -P "^Host ([^*]+)$" "$HOME/.ssh/config" | sed 's/Host //')"

# Select a host from the list
select host in ${hosts}; do echo "You selected ${host}"; break; done

# Create the mount point if needed, then call sshfs to mount the host
mkdir -p ~/mnt/"${host}"
sshfs "${host}":/ ~/mnt/"${host}" -oauto_cache,reconnect,no_readahead
