
Software Development Anywhere: My Distributed Remote Workplace

Working as a remote software development freelancer has many benefits, but setting up an effective distributed working environment can be a real challenge. In this article, Toptal Engineer Ivan Voras describes how he leverages SSH and a number of related technologies, along with the Raspberry Pi and ownCloud, to be able to work effectively from anywhere.


Ivan Voras, PhD

Working as a remote freelancer has many benefits, but setting up an effective distributed working environment can be a real challenge. Of course, there are many approaches that one can take, and no single “best” way will suit everyone. Remote digital workplace organization is indeed a very personal thing, and what works well for one developer may not work well at all for someone else.

With that in mind, the setup I present here is simply what works well for me personally, especially on remote projects that involve both development and system administration. I do believe this approach has a number of advantages, but each reader should consider how to adapt this in a way that works best for them, based on a combination of their operational needs and personal preferences.

My approach is largely based on features offered by SSH and related tools on Linux. Users of macOS and other Unix-like systems can take advantage of the same procedures, to the extent that their systems support the tools described.

My Distributed Remote Workplace

My Own Personal Mini-Server

An important first step in my setup is a Raspberry Pi 2-powered server in my own home, used to host everything from my source code repositories to demo sites.

Although I do travel, my apartment does serve as my remote “fixed base of operations” with decent Internet connectivity (100 Mbit/sec) and almost no additional latency. This means that, from my apartment, I am basically constrained only by the destination network’s speed. The setup I’m describing works best with this type of connectivity, though it’s not a requirement. In fact, I have also used this approach while I had a relatively low-bandwidth ADSL connection, with most things working just fine. The only real requirement, in my experience, is that the bandwidth either be unmetered or dirt cheap.

As a residential user, I have the cheapest home network router my ISP could buy, which simply isn’t enough for what I need to do. I have therefore asked the ISP to put the router into “bridge mode”, in which it serves only as a connection terminator, offering a PPPoE end-point to exactly one connected system. This means the device stops working as a WiFi access point or even as a common home router. All of those tasks are instead handled by a small professional MikroTik router, an RB951G-2HnD. It performs NAT for my local network (which I’ve numbered 10.10.10.0/24) and offers DHCP to the wired and wireless devices connected to it. The MikroTik and the Raspberry Pi have static addresses because they are used in contexts where a well-known address is required; in my case, those are 10.10.10.1 and 10.10.10.10, respectively.
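
For reference, the relevant RouterOS configuration comes down to just a few commands. The following is a minimal sketch; the interface names, credentials, and address pool are illustrative assumptions rather than my actual values:

/interface pppoe-client add name=pppoe-out1 interface=ether1 user=isp-user password=isp-pass disabled=no
/ip firewall nat add chain=srcnat out-interface=pppoe-out1 action=masquerade
/ip pool add name=lan-pool ranges=10.10.10.20-10.10.10.254
/ip dhcp-server add name=lan-dhcp interface=bridge-local address-pool=lan-pool disabled=no
/ip dhcp-server network add address=10.10.10.0/24 gateway=10.10.10.1 dns-server=10.10.10.1

The pool deliberately starts at .20, leaving the lower addresses free for statically configured devices like the router and the Raspberry Pi.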

My home connection doesn’t have a static IP address. For my purposes, this is only a mild inconvenience when working remotely, since the goal is to create a personal or SOHO working environment, not a 24/7 production site. (For those who do require a static IP address for their server, it is worth noting that the cost of static IP addresses has continued to come down, and fairly inexpensive static VPN IP options are available.) The DNS broker I use, Joker.com, offers a free dynamic DNS service alongside its other services, so one subdomain of my personal domain exists as a dynamic name. I use this name to connect to my own network from the outside, and the MikroTik is configured to pass SSH and HTTP through the NAT to the Raspberry Pi. I simply need to type the equivalent of ssh mydomain.example.com in order to log in to my personal home server.
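
On the MikroTik, passing SSH and HTTP through the NAT to the Raspberry Pi takes one dst-nat rule per service, roughly as follows (again a sketch, with the PPPoE interface name assumed):

/ip firewall nat add chain=dstnat in-interface=pppoe-out1 protocol=tcp dst-port=22 action=dst-nat to-addresses=10.10.10.10
/ip firewall nat add chain=dstnat in-interface=pppoe-out1 protocol=tcp dst-port=80 action=dst-nat to-addresses=10.10.10.10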

Data Anywhere

One significant thing which the Raspberry Pi does not offer is redundancy. I’ve equipped it with a 32 GB card, and that’s still a lot of data to lose if something happens. To get around that, and to ensure access to my data if the residential Internet access hiccups, I mirror all my data to an external, cloud-like server. Since I’m in Europe, it made sense for me to get the smallest dedicated bare-metal (i.e., unvirtualized) server from Online.net, which comes with a low-end VIA CPU, 2 GB of RAM, and a 500 GB SSHD. As with the Raspberry Pi mini-server, I don’t need high CPU performance or much memory, so this is a perfect match. (As an aside, I remember my first “big” server, which had two Pentium 3 CPUs and 1 GB of RAM, and was probably half the speed of the Raspberry Pi 2. We did great things with it, and that experience has shaped my interest in optimization.)

I back up my Raspberry Pi to the remote cloud-like server using rdiff-backup. Judging from the relative sizes of the two systems, these backups will give me virtually unlimited history. The other thing I run on the cloud-like server is an installation of ownCloud, which provides a private Dropbox-like service. ownCloud as a product is moving toward groupware and collaboration, so it becomes even more useful as more people use it. Since I started using it, I literally don’t have any local data that is not backed up to either the Raspberry Pi or the cloud-like server, and most of it is backed up twice. If you value your data, any additional backup redundancy is always a good thing.
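
Each backup run boils down to a single rdiff-backup invocation, along these lines (the host name and paths here are hypothetical):

rdiff-backup /home/pi backup.example.com::/srv/backups/pi
rdiff-backup --remove-older-than 1Y backup.example.com::/srv/backups/pi

The first command mirrors the directory and stores reverse increments alongside the mirror; the second, run occasionally, prunes increments older than a year.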

The “Magic” of SSHFS

Most of my work these days involves developing stuff which is not directly web-related (shocking, I know!), so my workflow often follows a classic edit-compile-run cycle. Depending on the specific circumstances of a project, I may either have its files locally on my laptop, I may put them in the ownCloud-synced directory, or, more interestingly, I might place them directly on the Raspberry Pi and use them from there.

The latter option is made possible thanks to SSHFS, which enables me to mount a remote directory from the Raspberry Pi locally. This is almost like a small piece of magic: you can have a remote directory on any server you have SSH access to (working under the permissions your user has on the server) mounted as a local directory.

Have a remote project directory? Mount it locally and go for it. If you need a powerful server for development or testing, and for some reason logging in and using vim in the console is not an option, mount that server locally and do whatever you want. This works especially well when I’m on a low-bandwidth connection to the Internet: even if I do work in a console text editor, the experience is much better if I run that editor locally and just transfer the files via SSHFS, rather than working over a remote SSH session.

Need to compare several /etc directories on different remote servers? No problem. Just use SSHFS to mount each of them locally and then use diff (or whatever other tool is applicable) to compare them.
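
As a quick sketch, with hypothetical host names:

mkdir -p ~/mnt/web1 ~/mnt/web2
sshfs -C web1.example.com:/etc ~/mnt/web1
sshfs -C web2.example.com:/etc ~/mnt/web2
diff -r ~/mnt/web1 ~/mnt/web2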

Or perhaps you need to process large log files, but you don’t want to install the log parsing tool on the server (because it has a gazillion dependencies), and for whatever reason copying the logs is inconvenient. Once again, not a problem. Just mount the remote log directory locally via SSHFS and run whatever tool you need, even if it’s huge, heavy, and GUI-driven. SSH supports on-the-fly compression and SSHFS makes use of it, so working with text files is fairly bandwidth-friendly.

For my purposes, I use the following options on the sshfs command line:

sshfs -o reconnect -o idmap=user -o follow_symlinks -C server.example.com:. server

Here’s what these command line options do:

  • -o reconnect - Tells sshfs to re-establish the connection to the SSH end-point if it breaks. This is very important since, by default, when the connection breaks, the mount point will either fail abruptly or simply hang (which I found to be the more common case). It really seems to me that this should be the default option.
  • -o idmap=user - Tells sshfs to map the remote user (i.e., the user we are connecting as) to be the same as the local user. Since you could connect over SSH with an arbitrary username, this “fixes” things so the local system thinks the user is the same. Access rights and permissions on the remote system apply as usual for the remote user.
  • -o follow_symlinks - While you can have an arbitrary number of mounted remote file systems, I find it more convenient to mount just one remote directory, my home directory, and in it (in the remote SSH session) I can create symlinks to important directories elsewhere on the remote system, like /srv or /etc or /var/log. This option makes sshfs resolve remote symlinks into files and directories, allowing you to follow through to the linked directories.
  • -C - Turns on SSH compression. This is especially effective with file metadata and text files, so it’s another thing that seems like it should be a default option.
  • server.example.com:. - This is the remote end-point. The first part (server.example.com in this example) is the host name, and the second part (after the colon) is the remote directory to mount. In this case, I’ve added “.” to indicate the default directory where my user ends up after the SSH login, which is my home directory.
  • server - The local directory into which the remote file system will be mounted.
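
When you are done, the mount is released like any other FUSE file system (the directory name matches the example above):

fusermount -u server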

Especially if you are on a low-bandwidth or an unstable Internet connection, you need to use SSHFS with SSH public/private key authentication, and a local SSH agent. This way, you will not be prompted for passwords (either system passwords or SSH key passwords) when using SSHFS and the reconnect feature will work as advertised. Note that if you don’t have the SSH agent set up so it provides the unlocked key as needed within your session, the reconnect feature will usually fail. The web is full of SSH key tutorials, and most of the GTK-based desktop environments I’ve tried start their own agent (or “wallet”, or whatever they choose to call it) automatically.
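
If you haven’t set up key authentication before, the one-time procedure looks roughly like this, assuming a reasonably recent OpenSSH and the hypothetical host name from earlier:

ssh-keygen -t ed25519
ssh-copy-id mydomain.example.com
eval "$(ssh-agent)"
ssh-add

The first command generates the key pair (choose a passphrase), the second installs the public half on the server, and the last two start an agent if your desktop hasn’t already and unlock the key once for the whole session.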

Some Advanced SSH Tricks

Having a fixed point on the Internet which is remotely accessible from anywhere in the world and sits on a high-bandwidth connection (for me it’s my Raspberry Pi system, but it could just as well be any generic VPS) reduces stress and allows you to do all sorts of things with exchanging and tunneling data.

Need a quick nmap while you’re connected over a mobile phone network? Just do it from that server. Need to quickly copy some data around, and SSHFS is overkill? Just use plain SCP.
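
Both cases reduce to one-liners, for example (hypothetical host names, and assuming nmap is installed on the server):

ssh server.example.com nmap -F target.example.com
scp notes.txt server.example.com:projects/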

Another situation you may find yourself faced with is one where you have SSH access to a server, but its port 80 (or any other port) is firewalled off from the outside network from which you connect. To get around this, you can use SSH to forward that port to your local machine and then access it through localhost. An even more interesting approach is to use the host to which you are connected over SSH to forward a port on another machine, possibly behind the same firewall. For example, say you have the following hosts:

  • 192.168.77.15 - A host on the remote local network behind a firewall, whose port 80 you need to reach
  • foo.example.com - A host you have SSH access to, which can connect to the above host
  • your local system, localhost

A command to forward port 80 on 192.168.77.15 to localhost:8080 via the foo.example.com SSH server would be:

ssh -L 8080:192.168.77.15:80 -C foo.example.com

The argument to -L specifies the local port, followed by the destination address and port. The -C argument enables compression, again yielding bandwidth savings, and at the end you simply give the SSH host name. This command opens a plain SSH shell session to the host and, in addition, listens on localhost port 8080, to which you can connect.
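
If you don’t need the interactive shell, the -N flag keeps the forward open without running a remote command, and any local client can then use the forwarded port:

ssh -N -L 8080:192.168.77.15:80 -C foo.example.com &
curl http://localhost:8080/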

One of the most impressive tricks SSH has developed in recent years is the ability to create real VPN tunnels. These manifest themselves as virtual network devices on both sides of the connection (assuming appropriate IP addresses are set up) and can give you access to the remote network as if you were physically there (bypassing firewalls). For both technical and security reasons, this requires root access on both machines being connected by the tunnel, so it’s much less convenient than just using port forwarding, SSHFS, or SCP. This one is for the advanced users out there, who can readily find tutorials on how to do it.
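
For the curious, the core of the procedure looks roughly like the following sketch. It assumes root access on both ends, PermitTunnel yes in the server’s sshd_config, and an arbitrarily chosen 10.99.99.0/30 subnet for the tunnel itself:

sudo ssh -w 0:0 root@foo.example.com

This opens tun0 on both ends; keep the session running. Then, in another local terminal and in the remote session respectively:

sudo ip addr add 10.99.99.1/30 dev tun0 && sudo ip link set tun0 up
ip addr add 10.99.99.2/30 dev tun0 && ip link set tun0 up

After that, ping 10.99.99.2 locally and the traffic flows inside the encrypted SSH tunnel.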

Remote Office Anywhere

You can continue to work even while you wait for your car at the mechanic.

Stripped of the need to work from a single location, you can work literally anywhere with half-decent Internet connectivity using the technologies and techniques I’ve outlined (including while waiting for your car at the mechanic). Mount remote systems over SSH, forward ports, and drill tunnels to access your private server or cloud-based data, whether you’re overlooking a sun-bathed beach or drinking hipster-grade eco-friendly coffee in a foggy city. Just do it!
