Linux Best Practices and Tips by Toptal Developers

This resource contains a collection of Linux best practices and Linux tips provided by our Toptal network members. As such, this page will be updated on a regular basis to include additional information and cover emerging Linux techniques. This is a community-driven project, so you are encouraged to contribute as well, and we are counting on your feedback.

Linux is powerful, flexible, and can be adapted to a broad range of uses. While best practices for administering Linux servers are not hard to find, thanks to the popularity of the operating system, there is always a need for up-to-date Linux advice, along with the best tips, from our experienced Toptal Linux administrators.

Check out the Toptal resource pages for additional information on Linux job descriptions and Linux interview questions.

Which Server Linux Distribution Is Recommended for Back-end Developers?

We covered desktop Linux distributions in another tip (below on this page), but what about server distributions? Desktop Linux distributions are focused on the GUI, desktop environments, and simplicity, in order to attract as many new users as possible to the platform. Server Linux distributions, on the other hand, are focused primarily on stability and security. A GUI is not an important factor, because servers often run in “headless mode” (with no monitor, keyboard, or mouse attached), and users (developers) connect to them remotely via the terminal. Another reason is that GUI elements take up precious memory, and every bit of free memory is valuable. Stability and security need no explanation: everyone wants their applications safe and available.

So, which server Linux distribution should you pick?

  • Debian is considered the most stable server OS by hardcore system administrators, thanks to its very stable release cycle and its sturdy, robust base system. The install image is relatively small and can be customized to very specific needs. The software base is huge, with 56,864 software packages as of this writing. There’s a caveat, though: these packages are shared with the desktop versions of Debian. Many other distributions, both desktop and server, are based on Debian’s .deb packages.
  • Ubuntu Server is not bad either. It’s built on top of Debian and remains largely compatible with it, and Canonical (the company behind Ubuntu) keeps investing in making Ubuntu reliable server software. There is arguably more help about it online, and it has more up-to-date packages, which is a mixed blessing in a server environment, but its LTS (long-term support) releases are very popular. Developers working on Ubuntu and Ubuntu-based desktop distributions tend to prefer it because it uses the same software package management system, apt.
  • RedHat Enterprise Linux, or RHEL for short, is the other large, stable server distribution, backed by RedHat. It is a commercial distribution, with the base software available for free but paid support licenses. RedHat has many internal software tools and works with several of the biggest enterprise software vendors, like Oracle, to make RedHat a perfect home for enterprise systems. Additionally, it’s at the heart of OpenShift, the RedHat platform-as-a-service initiative. RHEL is popular with enterprise developers, though the support licenses can get a bit expensive for smaller projects. The software package system is based on rpm packages and the yum package manager (a short comparison with apt follows this list). It rivals Debian and Ubuntu for stability, longevity, and software support.
  • CentOS is the “free” version of RHEL. It’s built almost entirely from RHEL’s sources, stripped of RedHat branding, and based on the same package system and the same packages. It’s popular among developers who prefer to work with RPM, and possibly among those using Fedora as their desktop system of choice.
  • Scientific Linux is a Linux release put together by Fermilab, CERN, and various other labs and universities around the world, “ready tuned for experimenters.” It’s a distribution focused more on scientific computing, well suited for such purposes, and it’s based on RedHat/CentOS.
  • CoreOS is very popular as a lightweight OS for running software containers. Unlike the other distributions listed here, CoreOS comes with no package manager: the developer is expected to provide all software dependencies as part of a lightweight “container,” a self-contained package of software.
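
For a quick feel of the practical difference between the two package families mentioned above, here is roughly what installing a package looks like on each. This is only a minimal sketch: nginx is used purely as an example package, and exact commands can vary by release.

# Debian / Ubuntu (.deb packages, apt)
sudo apt-get update              # refresh the package index
sudo apt-get install nginx       # install a package

# RHEL / CentOS (.rpm packages, yum)
sudo yum check-update            # refresh metadata and list available updates
sudo yum install nginx           # install a package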

Contributors

Zlatko Duric

Freelance Linux Developer
Germany

Zlatko is an experienced JavaScript developer, working with Angular, React, Node.js, and other technologies. Backed by experience in the field of web applications, Zlatko is focused on the quick delivery of quality web projects. With a long track of working and leading successful web projects as well as coaching and training, Zlatko tries to remain on top of the technology, keeping in mind best practices for performance and maintainability.

Maksim Sipos

Freelance Linux Developer
United Kingdom

Max's academic background is in numerical computational physics (Ph.D.). He worked as a quant developer on Wall Street, and then as a data scientist consultant in finance and internet companies. Max writes full-stack, production-level, high-performance, distributed solutions for complex big- or small-data problems. He is an experienced programmer in C++ (C++11, Qt), Java, Python (NumPy, SciPy, Sklearn) and JavaScript (Node and front-end).

Which Desktop Linux Distribution Is Recommended for Developers?

Among developers, usually back-end developers who need to set up their Ruby, Node.js, or other working environment, there is often a big dilemma: Which Linux distribution should be used? Which Linux distribution is the best? Which is the easiest to set up? Which will run best in a virtual environment, or on old hardware? All of these questions are hard to answer, and a whole series of articles could be written on the topic. The basic answer, “pick the one you’re most comfortable using,” does not apply when developers have neither the time nor the resources to test all the different Linux distributions. Not to mention that whichever Linux distribution one recommends, there will always be two others that disagree. But our best Toptal Linux developers came up with a short list for people who are looking to pick the best, recommended by the best.

Before the list, we need to mention desktop environments. In desktop Linux distributions, the main differentiating factor, besides setup complexity for new users, is the desktop environment: Gnome, Unity, Cinnamon, KDE, and so on. If you thought recommending a Linux distribution was a subjective and controversial topic, discussing favorite desktop environments is even harder, because it easily attracts flame wars and it’s hard to keep the discussion technical. Nevertheless, desktop environments often play an important role in the final decision, so we had to mention them, but we won’t go into details. Take a look at the included links to learn about their philosophies, see how each one looks, and pick the one you like the most.

Here is the unranked list:

  • Ubuntu - The consensus is that the first and most general pick is Ubuntu. It is the easiest, comes with support for most hardware right out of the box, and is very friendly to people who are new to Linux. Its only downside is that it’s pretty heavy, because of all the software it ships with. It is also worth noting that its Unity GUI causes a lot of controversy among hardcore Linux users.
  • Xubuntu - If you are looking for simplicity, Xubuntu is the way to go. The OS is lightweight and runs well on old hardware, but keep in mind that some things don’t work out of the box, especially if you have a very new computer with a UHD (Ultra High Definition) display.
  • Kubuntu - A popular Ubuntu-based distribution for people who like the KDE desktop environment, which has its own philosophy, different from the other environments.
  • elementary OS - Ubuntu-based, heavily influenced by OS X and Macintosh design. If you want a Linux that feels like a Mac, this is the distribution to go for. It has its own desktop “shell,” with very minimalistic and lightweight apps for common, daily usage.
  • Mint - A bit heavy, though not as much as Ubuntu, with a nice Cinnamon desktop environment for people who prefer a more classical desktop (Cinnamon is based on Gnome 2.x). It has out-of-the-box multimedia support, and if something is missing, it’s Ubuntu and Debian compatible.
  • Fedora - A user-friendly distribution from RedHat. It is recommended if you like the Gnome 3 user interface, which is arguably the “strangest” (it is not bad, just unusual). Fedora is also a good pick for developers who run RedHat Enterprise Linux or CentOS on their servers, because they share the same core, and yum and rpm knowledge transfers directly. Out of the box, Fedora has always been recognized as a distribution that supports developers well, with a lot of IDEs and development build tools available in its repositories.
  • Arch Linux - A popular “rolling-release” distribution with no base “release,” just the latest stable versions of individual packages. Developers who want to always be on the bleeding edge should look into this distribution.

In the end, avoid Debian and RedHat as desktop systems, especially if you are a new Linux user; they are more server-oriented. That said, while not the friendliest for developers, they are still used by many professionals.

Contributors

Rogelio Nicolas Mengual

Freelance Linux Developer
Argentina

Rogelio is a versatile, positive, and self-motivated full-stack engineer with over twelve years of work experience in many programming languages, frameworks, and platforms. He enjoys taking on new challenges and constantly strives to learn new skills.

Zlatko Duric

Freelance Linux Developer
Germany

Zlatko is an experienced JavaScript developer, working with Angular, React, Node.js, and other technologies. Backed by experience in the field of web applications, Zlatko is focused on the quick delivery of quality web projects. With a long track of working and leading successful web projects as well as coaching and training, Zlatko tries to remain on top of the technology, keeping in mind best practices for performance and maintainability.

How to Add More Accountability to Command Line Work?

You’re not a machine. At least, not most of the time. Like any human, you will sooner or later be fooled by your own imperfection and commit what we call a human error.

The following three short tips will help bring some peace of mind when you’re not so sure whether you’re to blame, or simply let you know that you need to own the error and think of a way to repair the damage.

Add a timestamp to your bash history

This small tip will allow you to look back through your shell history and know when each of your commands was executed:

~$ export HISTTIMEFORMAT="%d/%m/%y %T "
~$ history
    1  05/05/16 18:07:03 clear
    2  05/05/16 18:07:04 cd
    3  05/05/16 18:07:05 ls -ltr
    4  05/05/16 18:07:07 clear
    5  05/05/16 18:07:12 touch file
    6  05/05/16 18:07:17 chmod +rw file
    7  05/05/16 18:07:22 chmod +x file
    8  05/05/16 18:07:28 cat > file
    9  05/05/16 18:07:34 git commit -a -m 'cool feature'
   10  05/05/16 18:07:36 git push

If you want to make this permanent, add the following line to your .bashrc file.

export HISTTIMEFORMAT="%d/%m/%y %T "
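
The format string follows the usual strftime conventions, so you can adapt it to taste; for example, an ISO-style date would look like the line below (just an alternative, pick whichever you prefer). Remember to open a new shell or run source ~/.bashrc for the change to take effect.

export HISTTIMEFORMAT="%F %T "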

Share your immediate bash history across multiple sessions

Nowadays, almost everyone uses more than one window or tab at the same time: browser tabs, chat windows, and of course multiple console sessions. As you might have already noticed, your Bash history is saved to disk at the end of the session, when you log out. This is perfectly fine most of the time, but for several reasons you might want to share that command history immediately with your other sessions.

To do that, you can just add this to your .bashrc file.

export HISTCONTROL=ignoredups:erasedups  
shopt -s histappend  
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"  

Now, let me explain how it works:

  • The first statement will avoid duplicate entries in your command history.
  • The second statement appends the history to your history file, instead of rewriting it.
  • The third statement sets the PROMPT_COMMAND variable to a list of commands that save the in-memory history to the history file, clear the in-memory history stack, and then reload it from the file. The contents of PROMPT_COMMAND are executed every time a new prompt is displayed, that is, after each command you run (or whenever you simply press Enter), which is when the current session’s history gets synchronized.
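
A quick way to verify that the sharing works, assuming the lines above are already in your .bashrc and loaded in both terminals (the commands here are only illustrative):

# In terminal A:
echo "hello from terminal A"

# In terminal B, press Enter once so PROMPT_COMMAND runs, then:
history | tail -n 2
# The echo command from terminal A should now appear in terminal B's history.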

Check which files were modified by a command

There is an overall safety rule when working with the command line: if you don’t know what it does, don’t run it. However, sometimes you want to execute particular things, even your own code in a test environment, just to “see what happens.”

Well, take that mysterious command and run it like this:

D="$(date "+%F %T.%N")"; sleep 0.5; <command>; find . -newermt "$D"  

Where <command> is what you want to execute.

This example takes the current time and saves it to the variable D, then waits half a second and executes the command. Finally, it searches the current directory for any file created or modified since that moment. It’s not perfect, as another process might have modified files during that period, but it will give you an excellent idea of which files you need to inspect for changes.

There are a few notes about the sleep command. Even though the date command captures the time with nanosecond resolution, your machine might be fast enough that, without a small margin, some file changes could slip past find; it’s always better to inspect a few false-positive results than to miss some real occurrences. This assumes you’re using GNU sleep, which supports resolutions under one second. Otherwise, just use 1, keeping in mind that by extending the time window evaluated by find, you might see some additional false-positive results.

Now, check this example:

# D="$(date "+%F %T.%N")"; sleep 0.5; ./mysterious ; find /etc -newermt "$D";
Don't worry, I'm good
/etc/passwd

The mysterious command, despite printing “Don’t worry, I’m good”, modified your /etc/passwd file, and you just caught it by running it this way.
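
If you use this trick often, the same idea can be wrapped in a small shell function. This is only a sketch (the name watch_changes is made up here), and it carries the same caveats about false positives:

# Runs the given command and then lists files under the given directory
# that were modified since just before the command started.
watch_changes() {
    local dir="$1"; shift
    local start
    start="$(date "+%F %T.%N")"
    sleep 0.5        # same safety margin as above (GNU sleep)
    "$@"             # run the command with its arguments
    find "$dir" -newermt "$start"
}

# Usage, mirroring the example above:
# watch_changes /etc ./mysterious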

Contributors

Enrique Conci

Freelance Linux Developer
Argentina

Enrique is a Senior Unix System Administrator with extensive experience in Linux, AIX, HP-UX and Solaris. He specializes in task automation and deployment of large applications. He has over 5 years of experience in leading system administration teams and about 12 years working on Unix-based systems.

How to Search and Replace a String in All the File Names?

As a developer, you need to rename one or more files in a directory from time to time, replacing part of the file name with a particular string. This can be achieved with the following single line in the Linux shell:

for file in *oldname*; do mv "$file" "$(echo "$file" | sed -e "s/oldname/newname/g")"; done

In the snippet above, oldname is the search string we want to replace, and newname is the new replacement string. The for loop iterates over all matching files in the current directory and fills the file variable with each file name it finds. The sed command replaces the matching string with the replacement string, and the result is passed as the second parameter to the mv command. Note that this snippet does not do the replacement recursively, meaning it will not walk into subfolders.

Here is an example that renames all files having string oldname in their filename and replaces them with a newname:

$ touch oldname.txt oldname.c oldname-test.c test-oldname.c
$ ls
oldname.c  oldname-test.c  oldname.txt  test-oldname.c
$ for file in *oldname*; do mv "$file" "$(echo "$file" | sed -e "s/oldname/newname/g")"; done
$ ls
newname.c  newname-test.c  newname.txt  test-newname.c

If we want to do the same thing, but this time including the files in all the subfolders (nested or not), one way to do it in Bash (version 4+) would be like this:

$ shopt -s globstar
$ for file in ./**/*oldname*; do mv "$file" "$(echo "$file" | sed -e "s/oldname/newname/g")"; done

Enabling globstar and using the double star notation, we ask the shell to walk recursively through the folder tree to find all the matching files. The rest of the snippet remains the same.
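
For older Bash versions without globstar, one possible alternative is to drive the loop with find instead. This is a sketch assuming GNU find (for -print0) and Bash:

# -depth makes find list files before their parent directories, so renaming
# a matching directory does not invalidate the paths of the entries inside it.
find . -depth -name '*oldname*' -print0 | while IFS= read -r -d '' file; do
    dir=$(dirname "$file")
    base=$(basename "$file")
    mv "$file" "$dir/${base//oldname/newname}"
done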

Contributors

John Kapolos

Freelance Linux Developer
Greece

John has been freelancing since the early 2000s. He has been working full-time remotely for US/UK companies as an all-around developer. He specializes in SPA development with React and Ember.js. On the back-end, he loves working with Node and PHP. In his spare time, he plays with DevOps and maintains his personal distributed infrastructure for fun.

Yogindar Das Yasodhar

Technical Leader (Cisco Systems)

Fifteen years of software experience in all the layers of server management domain.

How to Quickly Get Through Bash History?

Let’s say that some time ago you used a very long Bash command that you would now like to reuse. However, you don’t want to press the up-arrow key over and over to scroll back to it.

There’s an easy trick for getting at your Bash history, and it is called the reverse-i-search mode. This mode searches through the history in reverse chronological order, so the most recently used command containing your input keyword comes up first.

So, to quickly get through your Bash history, press CTRL+R on the keyboard to activate Bash’s reverse-i-search mode, then type in part of the command you’re looking for, and it will show up. When it does, just press Enter to execute it.

Ninja tip: While searching, try to type in one or more unique words from the previous long command that you are looking for, as this will help you find it more quickly.
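
For example, pressing CTRL+R and then typing rsync might look like the line below (the command shown is just a made-up history entry; your own matches will differ). Pressing CTRL+R again cycles through older matches:

(reverse-i-search)`rsync': rsync -avz --delete ./build/ deploy@example.com:/var/www/app/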

Contributors

Khaled Monsoor

Sr. Software Engineer (R&D) (Freelancer)

Python and JavaScript software developer. Big-data enthusiast. Photography and coffee aficionado.

How to Analyze Log Files to Count the Number of Occurrences in a Time Span?

Often, when analyzing log files, it is useful to count the number of occurrences of a particular event during some time span. Let’s say, for example, that we want to count the number of “new sessions”.

First, let’s verify the log format by finding the last two lines that contain the text “New session”:

$ grep 'New session' test.log | tail -2
2016-06-30 15:50:56.985 [033] Info       Session              50: New session (total 50), call from "172.24.9.39".
2016-06-30 15:50:57.000 [033] Info       Session              51: New session (total 51), call from "172.24.9.39".

In the command above, grep filters the log lines and shows only those that contain the text “New session” (case sensitive). Then, tail -2 shows only the last two of the lines filtered by grep; the -2 parameter is needed because tail defaults to showing the last ten lines. The output of grep is piped into tail with |, so all lines filtered by grep serve as input to the tail command.

Now, let’s use the log lines we’ve extracted above to come up with a method to group the occurrences per hour. All log lines we obtain with grep 'New session' will have the following format:

2016-06-30 15:50:56.985 [033] Info       Session              50: New session (total 50), call from "172.24.9.39".

We can use a combination of cut and uniq to count all occurrences per hour in the lines filtered by grep. We’ll use cut to extract just the hour from the log lines passed by grep. The following comparison shows which characters we want from each log line:

-- Character indexes -------
0        10        20
1234567890123456789012345
----------------------------
2016-06-30 15:50:56.985 [033] Info       Session              50: New session (total 50), call from "172.24.9.39".
-- The line above becomes --
15

We can therefore see that we want to extract just the 12th and 13th characters from each log line, since they represent the hour of each entry. With this, we can group the results and count the events for each hour of the day, which is what we do in the next step.
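
To see the extraction in isolation, you can run cut on a single sample line (using the log line from above):

$ echo '2016-06-30 15:50:56.985 [033] Info       Session              50: New session (total 50), call from "172.24.9.39".' | cut -c12-13
15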

To group and count the events per hour, we use uniq -c. Please note that uniq depends on sorted input to group correctly, meaning there should never be, for example, an hour-14 line appearing after an hour-15 line. This is rarely a problem with logs, because the time should always increase as new lines are added.

Assembling the commands together and executing them, we get:

$ grep 'New session' aacserver20160630.log | cut -c12-13 | uniq -c
  48 00
  42 01
  42 02
  66 03
   1 04
   2 05
   2 06
  42 07
  46 08
  44 09
  42 10
  42 11
  42 12
  42 13
  42 14
  42 15
   8 16

Each line of the above output contains two numbers. The second represents the hour and the first represents the number of log entries (for new sessions) that hour. We can therefore quickly see from these results that something might have gone wrong between 4 and 7 AM, since there was such a drop in new sessions during those hours.

All this was easily accomplished with the combination of grep, cut and uniq commands.
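
As noted above, uniq -c relies on its input being grouped. If you are ever unsure whether the log lines are in chronological order, adding a sort between cut and uniq guards against split counts; a minor variation on the same pipeline:

$ grep 'New session' aacserver20160630.log | cut -c12-13 | sort | uniq -c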

Contributors

Guilherme Pim

Support Engineer (G&D)

Guilherme works daily with the troubleshooting of issues that may arise from software and systems.

How to Remove Color Codes from the Output of a Command?

Many command-line tools on Linux produce colored output. For example, the following command, on a typical Linux and Git setup, will produce lines of output where each begins with a colored word:

$ git log --oneline --color
e785d0f Fix submit panel error
d83ff21 Implement remote belt
c0dfae9 Make login fields required
8f69a2a Initial commit

Colored output is convenient when read by a human; however, it makes it difficult to build complex commands involving pipes and command substitutions. When the following command is executed in a terminal within a Git repository, it will yield the very first commit hash of the repository:

$ git log --oneline --color | tail -n 1 | awk '{print $1}'
8f69a2a

Since the output is colored, the actual content of the output contains some hidden characters. The following command, when executed within a Git repository, will end with a somewhat cryptic error:

$ git show --pretty="format:" --name-only $(git log --oneline --color | tail -n 1 | awk '{print $1}')
bash: $'\E[33ma1d1e20\E[m': command not found
solver/solver.go

Solving this issue is simple and requires only a small change to the command above: pipe the colored output through sed, a Unix utility that parses and transforms text, to strip out the invisible color codes:

sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g"
$ git show --pretty="format:" --name-only $(git log --oneline --color | \
		tail -n 1 | \
		awk '{print $1}' | \
		sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]//g")
README.md
app.yaml
index.html
script.js
solver/solver.go
style.css
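
If you need this filter in more than one place, the same sed expression can be wrapped in a small shell function and kept in your .bashrc (the name strip_colors is just a suggestion):

# Removes ANSI color escape sequences from standard input,
# using the same sed expression as above.
strip_colors() {
    sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mK]//g"
}

# Usage:
# git log --oneline --color | strip_colors | tail -n 1 | awk '{print $1}'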

Contributors

Kleber Virgilio Correia

Freelance Linux Developer
Brazil

Kleber is a software developer with ten years of experience working professionally in IT. He enjoys sharing and acquiring knowledge in a broad range of topics, including Unix, Agile software development, functional and object-oriented languages, design patterns, RESTful architecture, distributed applications, and cloud computing.

Gilberto T. Garcia Jr

Freelance Linux Developer
United States

Gilberto is a software engineer with expertise in scoping, architecting, developing, and maintaining web applications. His strengths include problem-solving, communicating effectively, and the ability to mentor teammates. He has exceptional technical knowledge spanning the full stack of technologies, including server side, client side, and QA. He has also written “Lift Application Development Cookbook,” published in 2013 by PacktPub.

How to Avoid Frustration After Forgetting to Use the sudo Command?

Have you ever typed a command in your terminal, only to find out that you forgot to prefix it with sudo? You then have to retype the whole command just to add sudo in front of it. Frustrating!

Well, you can add this simple alias to your .bashrc to help you reduce the frustration:

alias argh='sudo $(history -p \!\!)'

The command is pretty simple: history is a program that keeps track of your terminal command history. You can recall any command from your history using the various methods described in its man page.

In our example, we use history -p \!\!, which prints the most recent command in your history list. So, our alias argh executes the last used command in the history with sudo in front of it.

You can, of course, change the string argh to any word you prefer.
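
Here is a hypothetical session showing the alias in action (the command is only illustrative, and its failing output is omitted):

$ systemctl restart nginx      # fails: this needs root privileges
$ argh                         # runs: sudo systemctl restart nginx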

Contributors

Luqman Sungkar

Co-founder (Fliptech Lentera Inspirasi Pertiwi)

Luqman is the co-founder of flip.id, a startup in Indonesia. He is responsible for building and maintaining all the IT system behind the startup.
