Working as a remote freelancer has many benefits, but setting up an effective distributed working environment can be a real challenge. Of course, there are many approaches that one can take, and no single “best” way will suit everyone. Remote digital workplace organization is indeed a very personal thing, and what works well for one developer may not work well at all for someone else.

With that in mind, the setup I present here is simply what works well for me personally, especially on remote projects that involve both development and system administration. I do believe this approach has a number of advantages, but each reader should consider how to adapt this in a way that works best for them, based on a combination of their operational needs and personal preferences.

My approach is largely based on features offered by SSH and related tools on Linux. Note that users of macOS and other Unix-like systems can take advantage of the described procedures as well, to the extent that their systems support the described tools.

My Distributed Remote Workplace

My Own Personal Mini-Server

An important first step in my setup is a Raspberry Pi 2-powered server in my own home, used to host everything from my source code repositories to demo sites.

Although I do travel, my apartment does serve as my remote “fixed base of operations” with decent Internet connectivity (100 Mbit/sec) and almost no additional latency. This means that, from my apartment, I am basically constrained only by the destination network’s speed. The setup I’m describing works best with this type of connectivity, though it’s not a requirement. In fact, I have also used this approach while I had a relatively low-bandwidth ADSL connection, with most things working just fine. The only real requirement, in my experience, is that the bandwidth either be unmetered or dirt cheap.

As a residential user, I have the cheapest home network router my ISP could buy, which simply isn’t enough for what I need to do. I therefore asked the ISP to put the router into “bridge mode”, where it serves only as a connection terminator, offering a PPPoE end-point to exactly one connected system. This means the device no longer works as a WiFi access point or even as a common home router. All of these tasks are handled by a small professional Mikrotik router, the RB951G-2HnD. It performs NAT for my local network and offers DHCP to wired and wireless devices connected to it. The Mikrotik and the Raspberry Pi have static addresses because they are used in contexts where a well-known address is required.

My home connection doesn’t have a static IP address. For my purposes, this is only a mild inconvenience when working remotely, since the goal is to create a personal or SOHO working environment, not a 24/7 production site. (For those who do require a static IP address for their server, it is worth noting that the cost of static IP addresses has continued to come down, and fairly inexpensive static-IP VPN options are available.) The DNS broker I use offers a free dynamic DNS service alongside all of its other services, so one subdomain of my personal domain exists as a dynamic name. I use this name for connecting to my own network from the outside, and the Mikrotik is configured to pass SSH and HTTP through the NAT to the Raspberry Pi. I simply need to type ssh followed by that dynamic host name in order to log in to my personal home server.

Data Anywhere

One significant thing which the Raspberry Pi does not offer is redundancy. I’ve equipped it with a 32 GB card, and that’s still a lot of data to lose in case something happens. To get around that, and to ensure access to my data if the residential Internet access hiccups, I mirror all my data to an external, cloud-like server. Since I’m in Europe, it made sense for me to get the smallest dedicated bare-metal (i.e., unvirtualized) server from a European provider, which comes with a low-end VIA CPU, 2 GB of RAM, and a 500 GB SSHD. As with the Raspberry Pi mini-server, I don’t need high CPU performance or even much memory, so this is a perfect match. (As an aside, I remember my first “big” server, which had two Pentium 3 CPUs and 1 GB of RAM and was probably half the speed of the Raspberry Pi 2; we did great things with it, and that experience has influenced my interest in optimization.)

I back up my Raspberry Pi to the remote cloud-like server using rdiff-backup. Judging from the relative sizes of the systems, these backups will give me virtually unlimited history. One other thing I have on the cloud-like server is an installation of ownCloud, which enables me to run a private Dropbox-like service. ownCloud as a product is moving toward groupware and collaboration, so it becomes even more useful if more people are using it. Since I started using it, I literally don’t have any local data that is not backed up to either the Raspberry Pi or to the cloud-like server, and most of it is backed up twice. Any additional backup redundancy you can add is always a good thing, if you value your data.
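For illustration, an rdiff-backup run looks roughly like this; the host names and paths below are hypothetical, not my actual setup:

```shell
# Mirror the Pi's home directory to the remote server; rdiff-backup keeps
# a full current mirror plus reverse increments for history
rdiff-backup /home/ivan user@backuphost::/backups/rpi-home

# List the increments (i.e., the restore points) available in the repository
rdiff-backup --list-increments user@backuphost::/backups/rpi-home

# Restore a file as it was three days ago
rdiff-backup -r 3D user@backuphost::/backups/rpi-home/notes.txt notes.txt
```

Since increments only store deltas, even a small remote disk goes a long way for mostly-text data.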

The “Magic” of SSHFS

Most of my work these days involves developing stuff which is not directly web-related (shocking, I know!), so my workflow often follows a classic edit-compile-run cycle. Depending on the specific circumstances of a project, I may either have its files locally on my laptop, I may put them in the ownCloud-synced directory, or, more interestingly, I might place them directly on the Raspberry Pi and use them from there.

The latter option is made possible thanks to SSHFS, which enables me to mount a remote directory from the Raspberry Pi locally. This is almost like a small piece of magic: you can have a remote directory on any server you have SSH access to (working under the permissions your user has on the server) mounted as a local directory.

Have a remote project directory? Mount it locally and go for it. If you need a powerful server for development or testing, and – for some reason – just going there and using vim in the console is not an option, mount that server locally and do whatever you want. This works especially well when I’m on a low-bandwidth connection to the Internet: even if I do work in a console text editor, the experience is much better if I run that editor locally and just transfer the files via SSHFS, rather than working over a remote SSH session.

Need to compare several /etc directories on different remote servers? No problem. Just use SSHFS to mount each of them locally and then use diff (or whatever other tool is applicable) to compare them.
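As a concrete sketch (with hypothetical host names), comparing /etc on two servers looks like this:

```shell
# Create local mount points and mount each server's /etc over SSHFS
mkdir -p ~/mnt/web1 ~/mnt/web2
sshfs -C admin@web1.example.com:/etc ~/mnt/web1
sshfs -C admin@web2.example.com:/etc ~/mnt/web2

# Recursively diff the two trees as if they were local directories
diff -ru ~/mnt/web1 ~/mnt/web2

# Unmount when done
fusermount -u ~/mnt/web1
fusermount -u ~/mnt/web2
```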

Or perhaps you need to process large log files, but you don’t want to install the log parsing tool on the server (because it has a gazillion dependencies) and for whatever reason copying the logs is inconvenient. Once again, not a problem. Just mount the remote log directory locally via SSHFS and run whatever tool you need – even if it’s huge, heavy, and GUI-driven. SSH supports on-the-fly compression and SSHFS makes use of it, so working with text files is fairly bandwidth-friendly.

For my purposes, I use the following options on the sshfs command line:

sshfs -o reconnect -o idmap=user -o follow_symlinks -C user@host:. server

Here’s what these command line options do:

  • -o reconnect - Tells sshfs to reconnect the SSH end-point if it breaks. This is very important since, by default, when the connection breaks, the mount point will either fail abruptly or simply hang (which I found to be more common). It seems to me that this should be the default.
  • -o idmap=user - Tells sshfs to map the remote user (i.e., the user we are connecting as) to be the same as the local user. Since you could connect over SSH with an arbitrary username, this “fixes” things so the local system thinks the user is the same. Access rights and permissions on the remote system apply as usual for the remote user.
  • -o follow_symlinks - While you can have an arbitrary number of mounted remote file systems, I find it more convenient to mount just one remote directory, my home directory, and in it (in the remote SSH session) I can create symlinks to important directories elsewhere on the remote system, like /srv or /etc or /var/log. This option makes sshfs resolve remote symlinks into files and directories, allowing you to follow through to the linked directories.
  • -C - Turns on SSH compression. This is especially effective with file metadata and text files, so it’s another thing that seems like it should be a default option.
  • user@host:. - This is the remote end-point. The first part (user@host) gives the remote user and host name, and the part after the colon is the remote directory to mount. In this case, I’ve used “.” to indicate the default directory where my user ends up after the SSH login, which is my home directory.
  • server - The local directory into which the remote file system will be mounted.
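Putting the pieces together, a typical session might look like the following sketch, with user@host standing in for your own server:

```shell
# Create the local mount point and mount the remote home directory
mkdir -p ~/server
sshfs -o reconnect -o idmap=user -o follow_symlinks -C user@host:. ~/server

# Work with remote files as if they were local
ls ~/server

# Unmount when done (on macOS/BSD, use "umount ~/server" instead)
fusermount -u ~/server
```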

Especially if you are on a low-bandwidth or unstable Internet connection, you should use SSHFS with SSH public/private key authentication and a local SSH agent. This way, you will not be prompted for passwords (either system passwords or SSH key passphrases) when using SSHFS, and the reconnect feature will work as advertised. Note that if you don’t have the SSH agent set up to provide the unlocked key as needed within your session, the reconnect feature will usually fail. The web is full of SSH key tutorials, and most of the GTK-based desktop environments I’ve tried start their own agent (or “wallet”, or whatever they choose to call it) automatically.
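Setting this up takes only a few commands; the following is a minimal sketch, with user@host again standing in for your own server:

```shell
# Generate a key pair (pick a passphrase when prompted)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Install the public key on the remote server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host

# Start an agent if your desktop environment hasn't already done so,
# then unlock the key once per session
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# From now on, SSH and SSHFS connections won't prompt for passwords
sshfs -o reconnect -C user@host:. ~/server
```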

Some Advanced SSH Tricks

Having a fixed point on the Internet which is remotely accessible from anywhere in the world, and which is on a high bandwidth connection – for me it’s my Raspberry Pi system, and it really could be any generic VPS – reduces stress and allows you to do all sorts of things with exchanging and tunneling data.

Need a quick nmap and you’re connected over a mobile phone network? Just do it from that server. Need to quickly copy some data around, and SSHFS is overkill? Just use plain SCP.
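For the record, plain SCP is a one-liner in each direction (the paths below are hypothetical):

```shell
# Pull a file down from the server to the current directory
scp user@host:/var/log/nginx/access.log .

# Push a file up, with compression enabled for bandwidth savings
scp -C report.pdf user@host:uploads/
```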

Another situation you may find yourself faced with is one where you have SSH access to a server, but its port 80 (or any other port) is firewalled off from the outside network you are connecting from. To get around this, you can use SSH to forward this port to your local machine, and then access it through localhost. An even more interesting approach is to use the host to which you are connected over SSH to forward a port on another machine, possibly behind the same firewall. If, for example, you have the following hosts:

  • internal-host - A host on the remote local network behind a firewall, whose port 80 you need to reach
  • ssh-host - A host you have SSH access to, which can connect to the above host
  • Your local system, localhost

A command to forward port 80 on the internal host to localhost:8080 via the SSH server would be:

ssh -L 8080:internal-host:80 -C ssh-host

The argument to -L specifies the local port, then the destination address and port. The -C argument enables compression, so you again achieve bandwidth savings, and finally, at the end, you simply give the SSH host name. This command will open a plain SSH shell session to the SSH host and, in addition to that, listen on localhost port 8080, to which you can connect.
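While that session stays open, you can verify the forwarded port from a second terminal; for a plain HTTP service on the internal host, it would look like this:

```shell
# The request goes through the SSH tunnel to port 80 on the internal host;
# -I fetches only the response headers, which is enough to confirm it works
curl -I http://localhost:8080/
```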

One of the most impressive tricks SSH has gained in recent years is the ability to create real VPN tunnels. These manifest as virtual network devices on both sides of the connection and (assuming they have appropriate IP addresses set up) can give you access to the remote network as if you were physically there, bypassing firewalls. For both technical and security reasons, this requires root access on both machines being connected by the tunnel, so it’s much less convenient than just using port forwarding, SSHFS, or SCP. This one is for the advanced users out there, who can readily find tutorials on how to do it.
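As a rough sketch only: the server needs “PermitTunnel yes” in its sshd_config, both sides need root, and the host name and addresses below are hypothetical:

```shell
# Local side: -w local_tun:remote_tun creates a tun0 device on both ends
sudo ssh -w 0:0 root@gateway-host

# Local side, in another terminal: address and bring up the local tun0
sudo ip addr add 10.99.0.1/30 dev tun0
sudo ip link set tun0 up

# Remote side, in the SSH session opened above: the other end of the /30
ip addr add 10.99.0.2/30 dev tun0
ip link set tun0 up

# Back on the local side: test connectivity through the tunnel
ping -c 3 10.99.0.2
```

From there, adding routes through tun0 (and enabling forwarding on the remote side) gives access to the rest of the remote network.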

Remote Office Anywhere

You can continue to work even while you wait for your car at the mechanic.


Stripped of the need to work from a single location, you can work literally from anywhere that has half-decent Internet connectivity using the technologies and techniques I’ve outlined (including while waiting for your car at the mechanic). Mount foreign systems over SSH, forward ports, and drill tunnels to access your private server or cloud-based data remotely – while overlooking a sun-bathed beach, or drinking hipster-grade eco-friendly coffee in a foggy city. Just do it!

About the author

Ivan Voras, Croatia
member since June 18, 2014
Ivan is primarily a back-end developer with 10+ years of experience in architecting and implementing server-side solutions, including non-web-related distributed platforms such as Bitcoin, chat servers, and general client-server solutions. He has handled DBA operations, developed modules for PostgreSQL, operating system kernel modules (FreeBSD), and new algorithms. He is interested in general client-server problems and distributed apps.


Julien Renaux
Nice tips! Thanks
I don't understand the practical use of raspberry pi when you have a dedicated server with a static IP address with 1Gbit connection and 500GB hard drive. Your setup is cool though. And thanks for making me aware of sshfs. I didn't even know I wanted it before I read your article. I have been using syncthing for something similar.
Kris Leech
Not sure if it is compatible with sshfs, but mosh is useful on unreliable (out of office) connections,
Honestly I tried writing a reply like 5 times. I can't support anything the author suggests, as he is basically just listing single points of failure. No thought of electricity, internet reliability, or even hardware failures. He is just patching one unreliable thing with the best thing out of all of them (the leased server), and even in that case he chose a bare metal server over a managed VPS with some kind of attached RAID storage (SAN) – as a redundancy. If he chose it as a primary server, he would only have the SPOF issue of storage and one server. I will write a longer reply in the form of a blog post, as I think some education is in order. If you're depending on your work to bring in money, you shouldn't play with toys like the RPi and host your development environment at home, while at the same time advocating "remote" work. Just don't.
As promised, I posted my contrary opinion of how an actual remote work space should work. It would work for Ivan, it just requires a change of the mindset. In actual practice, the only thing that Ivan did wrong was the choice of hardware, and not considering single points of failures and who has to fix them, inadvertently ignoring that he has become the sysadmin and support line for his own work space. Such a position, even if informal, allows for remote work only in the most perfect conditions, where internet always works, where electricity is always provided, where hardware doesn't fail. Alas, life doesn't work that way, that's why redundancies and backups exist. My longer opinion and suggestions regarding a remote digital workspace:
In all honesty, it's currently mostly for bragging rights. A year ago or so I was experimenting with creating and running Web apps directly on the RPi. I eventually hosted the database of legislative documents of my country directly on a 512 MB model B+. I sort of have an interest in optimising for underpowered hardware. In this text, I mostly mentioned the RPi to give an idea of what's possible with it. The RPi 2 B is even reasonably fast for compiling Go. If you are less interested in gadgeteering, you can just go with "real" servers, no problem there :-)
Hi, everyone is of course free to pick and mix from the approaches I've described as they see fit. As for me, I gradually created a setup which works for me: I've used the RPi for experimenting with running web apps on it, and it has accreted different roles during its lifetime. Among other things, I've found that compiling Go apps on the RPi 2 is fast enough and that there actually is a PostgreSQL setup which works great. I've learned a lot on it. And yes, it's a SPOF if used alone, as is the remote server, as is my laptop and sometimes my mobile phone (used with the mobile ownCloud client). That's why running multiple tools is good.
Toby J
Enjoyed the post, thanks.. I've found that the Ubiquiti EdgeRouter Lite is also a great solution for a customizable router and AP. It allows configuration over SSH, has built-in versioning of configs, etc.
That's why having contingencies in place is good. What are yours? From the article I can't really see you working on this setup remotely. Even in a proper office there are on average between 1-2 outages per year, and there are people there who can resolve them. I remember the occasional developer calling somebody at home and giving instructions to reset some router, or laptop or PC or whatever - and many of those times not being able to resolve the issue hands-off. I really do cringe when people run things they rely on in their homes. And not in a gleeful "oh you lost all your archives because you didn't have a backup" kind of way, but in a supportive "how about you run these things on our redundant infrastructure" or "can you use my cloud server instead, I'd hate for you to experience issues with [electricity, internet, hardware,...]" way. It sucks when that happens, but let's try to suggest a more viable option for remote work if that's the goal.