
For over a decade now I have been working in a full Debian environment in my small office (currently 1 server, 7 users, 3 desktops, 4 laptops). Authentication is based on Kerberos, user profiles are managed in LDAP, and $HOME is served to all of the clients over NFSv4 with the help of pam_mount or autofs. This setup works very well for desktop users on the local LAN.

Two years ago, I started using the same setup for laptop users. The WiFi connection caused some additional sluggishness, and once users took their laptops outside of the office, things got really slow. Optimizing $XDG_{CACHE,DATA,CONFIG}_HOME and looking into specific optimizations for Firefox on NFS made things a bit better.

I'm now thinking of moving the $HOME directories to the laptops and desktops. It is nice that a user can switch devices if one goes down, but that only happens once in a while. Sacrificing this flexibility for a faster day-to-day user experience seems like a good decision. If I could bi-directionally sync the local $HOME with the central server on startup and shutdown, there probably wouldn't be a tradeoff at all...

  • 'unison' seems like a good candidate for keeping the local $HOME in sync with a central copy, but it seems to require exactly the same version on server and client, and that I cannot commit to.
  • 'lsyncd' seems to be a very good candidate, but I don't seem to find any user stories using the tool for their $HOME directories...
  • I even had a brief look at 'GlusterFS', but that seems like it's a non-trivial replacement. Anyone has any experience and maybe best practices to share? I don't mind a bit of experimenting, but I'm afraid I'm missing some obvious downsides to the above... Thx!
  • I believe the thing would be called "Roaming User Profiles" in the Microsoft Windows world, maybe that helps. Commented Dec 16, 2021 at 13:09
  • by the way, even for local NFS users, NFS+cachefilesd might be nice, reducing the network-dependence / -bottlenecking for repeatedly read files Commented Dec 16, 2021 at 13:10
  • @MarcusMüller, cachefilesd doesn't reduce the metadata calls over the network (GETATTR, READDIR, LOOKUP, etc.), nor of course the writes. From my personal experience it's a great tool for reducing the load on the network / NFS server, but there isn't any improvement on the client side.
    – aviro
    Commented Dec 16, 2021 at 13:48
  • Replicating the files is not quite the same thing as replicating the data and configuration. There isn't really a framework for that. AFS comes a lot closer in intent than unison/rsync.
    – symcbean
    Commented Jan 10, 2022 at 1:02
  • @symcbean care to elaborate? I can understand that the data of a file is only one aspect, and that not all metadata is being sync'ed by rsync and the likes (maybe not even with advanced parameters such as -cHauXA). Would that 'incomplete replication' have consequences in situations where one user never logs in on more than one device at the same time? Thx!
    – zenlord
    Commented Jan 10, 2022 at 12:41

4 Answers


I use Syncthing for setups similar to this, both for complete home directories and for subsets of home directories. It synchronises files, bidirectionally or unidirectionally; it's transparent if users never make changes to the same files in two different places, and even when there are conflicting changes it manages to cope well — it doesn't lose data.

It works well with skewed versions, within limits. I have it running on a variety of phones and computers; phones track the latest version automatically, computers use whatever is in the installed distribution. Full home directory syncs only really work across computers with identical distributions (because of changes in configuration files), but partial syncs are fine with varying setups.

Syncthing works really well on LANs, but it also supports synchronising over the Internet, and is configurable so that you can tune it to whatever level of trust you’re interested in.

  • Thank you for your answer. I didn't add syncthing to my list because it was the most prevalent solution anywhere on the internet :). I did get the impression that it is used for server-less synchronisation, but one could easily consider my server as 'just another client' in a syncthing-setup. Syncing on LAN-only would be acceptable, I believe.
    – zenlord
    Commented Dec 18, 2021 at 6:33

As @marcus-müller pointed out, this functionality is generally known as 'roaming user profiles', and it is primarily built around two pillars:

  • User credentials
  • User $HOME directories

The first one is commonly addressed with centralized systems such as LDAP+Kerberos, with SSSD on the client. The second one can be addressed using NFSv4 and krb5/krb5i/krb5p, but that can get sluggish when the client is on WiFi or in a remote location.

If you want to move away from a centralized $HOME for your users, but do not want to give up the flexibility of a centralized setup entirely, a workaround could be:

  • Upon login, run a rsync script to (create the local $HOME if it does not exist and) bi-directionally sync the local $HOME with the central $HOME
  • Upon logout, run a rsync script to upload all local changes to the central $HOME
  • Optionally, perform intermittent syncs

GDM can be set up to run scripts at login (/etc/gdm/PostLogin/Default) and at logout (/etc/gdm/PostSession/Default). A cron job or systemd timer can be set up for the intermittent syncs.

Some caveats I have thought of:

  • Try to limit the size of the $HOME directories, e.g. with quota and a separate directory where the users can store shared documents (which can still be shared over NFS without the penalty of overall sluggishness)
  • Optimize the rsync scripts to omit cache and tmp directories (or maybe simply relocate the $XDG_CACHE_HOME and $XDG_DATA_HOME outside of $HOME)

/EDIT: I have published a shell script to help synchronise the local $HOME with the remote $HOME: https://github.com/zenlord/vagabond.sh - feel free to comment :)


Pulling the home directory from a central server during login and syncing it back at logout could also be done with PAM via its pam-script module - using it to call an appropriate rsync command.
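On Debian the module ships as libpam-script, and the wiring is mostly configuration; a sketch, assuming the package's default hook locations and with the server name and path as placeholders:

```shell
# /etc/pam.d/common-session -- add after the existing session lines:
#   session optional pam_script.so
#
# libpam-script then runs these hooks, with the user name exported
# to them as $PAM_USER:
#   /usr/share/libpam-script/pam_script_ses_open   # session start
#   /usr/share/libpam-script/pam_script_ses_close  # session end

# /usr/share/libpam-script/pam_script_ses_open (sketch):
#!/bin/sh
# Pull the central $HOME down at session start; "homeserver" and
# /srv/home are placeholders for your environment.
rsync -au "homeserver:/srv/home/$PAM_USER/" "/home/$PAM_USER/"
```

The ses_close hook would run the same rsync with source and destination swapped.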

  • How does that work on logout? Commented Dec 23, 2021 at 7:48
  • Honestly my PAM experience is very limited, but a seemingly nice example (for ssh logins) could be found on thegeekdiary.com/…
    – jf1
    Commented Dec 23, 2021 at 8:37
  • If you're using gdm on the clients, then you could use the /etc/gdm/PostLogin/Default and /etc/gdm/PostSession/Default scripts that gdm can execute at login and logout respectively. In my current workaround to allow for offline sessions, I use these scripts to create a home directory (autofs doesn't do this, whereas pam_mount did)
    – zenlord
    Commented Dec 23, 2021 at 10:05
  • @zenlord could you post that as an answer, please? Commented Dec 24, 2021 at 15:08

'unison' seems like a good candidate for keeping the local $HOME in sync with a central copy, but it seems to require exact same versions between server and client, and that I cannot commit to.

You can have multiple versions installed on the server. Set addversionno = true in the Unison profile to make the client run unison-VERSION on the server.

If you don't want the hassle of doing your own builds, both the official binaries and the Debian packages depend only on glibc. The official packages are currently built on Ubuntu 18.04, but in practice they work even on older systems.

If you install your own versions, you'll of course have to monitor for security issues yourself. I don't remember ever seeing a security advisory for Unison in Debian, and there isn't one in the (relatively recent) GitHub issue tracker, but that doesn't mean it won't happen.

Do note that not only do the Unison versions have to match; the versions of OCaml they were built with may have to match as well. On the other hand, the operating systems and CPU architectures don't need to match.
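For reference, the version pinning fits in a few lines of Unison profile; a sketch, with the hostname, user name and paths as placeholders:

```
# ~/.unison/default.prf (sketch; 'homeserver', 'alice' and the
# paths are placeholders for your environment)
root = /home/alice
root = ssh://homeserver//srv/home/alice

# Run 'unison-2.51' (etc.) on the server instead of plain 'unison',
# so several client versions can coexist against one server
addversionno = true

# Keep volatile data out of the sync
ignore = Path .cache
```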

So from a technical point of view, Unison would be fine. I'm not sure it's fine from a user point of view, however. When a conflict happens, the user has to say how to reconcile. This is inherent in any bidirectional synchronization system, so here the question isn't whether to use Unison, but whether to use bidirectional synchronization. If the goal is to allow a user to work on multiple devices interchangeably, you do need bidirectional synchronization. But if the goal is only to allow quick failover if a device breaks, then backup and restore is a better approach.

  • Interesting to know that the server can have multiple versions of unison+OCaml installed. I'll investigate to see if it is manageable to go down this road, as this seems to be the only downside to this tried-and-tested solution
    – zenlord
    Commented Dec 18, 2021 at 6:26
  • The problem with unison is that it depends on native ocaml functionality whose binary compatibility the ocaml developers may break at any time, even in minor bug-fix releases, making it unreliable. I used unison for many years but gave up because syncing between different systems became impossible.
    – hlovdal
    Commented Jan 9, 2022 at 13:51
  • @hlovdal that's what I'm afraid of with unison - for these roaming profiles to work, you actually don't need bi-directionality per se - if you set up a system of push and pull, that should suffice for most/all $HOME synchronisation situations, I guess. I have edited my own answer and added a link to a set of scripts to accomplish this.
    – zenlord
    Commented Jan 9, 2022 at 21:18
