What is this madness – surely the course was for just 20 days?

Yes, but hopefully you’ll go on learning, so here are a few suggestions for directions that you might take.

Play with your server

You’re familiar with the server you used during the course, so keep working with it. Maybe uninstall Apache2 and install NGINX, a competing webserver. Keep a running tally of ssh “attackers”. Whatever. A free AWS instance will last a year, and a $5/mo server should be something you can easily justify.

Add services that you’ll use

You should now be capable of following tutorials on installing and running your own instance of Minecraft, WordPress, WireGuard VPN, or MediaWiki. Expect to have some problems – it's all good experience!

Take a look at Server World for some inspiration.

Extend your learning

Stop browsing articles on Gnome, KDE or i3 – and start checking out articles like “20 Linux commands every sysadmin should know”. Try these out and delve into the options. Like learning a foreign vocabulary, you will only be able to use these “words” if you know them!

Check out Linux Journey if you haven't already, especially if you are still pretty new to Linux and would like to see a different learning approach. Linux 101 Hacks is also a good resource.

Practice what you've learned with some challenges at SadServers.com. There you'll find a collection of scenarios where you have to do, fix or hack something on a Linux server. It's great for exercising your troubleshooting skills without messing with your own server.

To get crazy fast in the command line, try Command Line Challenge, practicelinux.com, learnshell.org and commandlinefu.com.

If your next level goal is to get into DevOps, take a look at the DevOps Roadmap.

Certifications

If you’re looking to do Linux professionally, and you don’t have an impressive CV or resume already, then you should be aiming at getting a Linux certification. There are really just three certs/tracks that count:

  • CompTIA Linux+ - a one-and-done exam, distro-independent, but it doesn't hold much value in the market. Do this if you don't want to get too deep into Linux, or you have other CompTIA tracks going on and an employer is paying for them.
  • LPI LPIC-1: Linux Administrator – Very extensive description of the coverage of their various certs/courses. You can go very deep with these exams; they cover everything you can think of in pure Linux. Not so popular with employers, but the knowledge certainly holds its value.
  • Red Hat – You could spend a lot of time and money here, but it might well pay off! Geared to the Red Hat Enterprise Linux distribution and its particularities, it is a practical exam (the others are multiple choice), and it's well known in Enterprise circles - it really stands out on a resume.

Even if you don’t want/need certs, the outline of the topics in these references can give you a good idea of areas to focus on in your self-learning.

Affordable professional training

Show your appreciation!

Steve Brorens (@snori74) was a collector of postcards and greatly enjoyed all the "Snail Mail" he received from the students.

But since his passing there's nowhere to send postcards anymore. You can show your appreciation for the course by letting everyone else know how awesome it was! Recommend the course to other people, invite your friends to do the challenge together, have fun! Show the world you finished the challenge by posting about it on social media.

Contribute

Livia Lima is the one currently maintaining the material. But she's only one person and appreciates any help to keep this challenge running consistently every month, and available to everyone.

If you'd like to contribute, here are a few things you can do:

  • Answer other students' questions in our channels. Help a friend through the challenge.
  • Correct typos, dead links, etc by submitting a correction request to the source material.
  • Suggest improvements by submitting a feature request to the source material.
  • Help moderate Lemmy, Reddit or Discord. Are you a whiz in one (or more) of those platforms? Help admin them.
  • Support the infrastructure by donating or sponsoring. The challenge is free, but the website servers and the domains cost money, so we appreciate it if you can spare a buck.

Thanks for all and happy linuxing!

 

INTRO

Today is the final session for the course. Pat yourself on the back if you worked your way through all lessons!

You’ve seen that a continual emphasis for a sysadmin is to automate as much as possible, and also how in Linux the system is very “transparent” - once you know where to look!

In this final session, we’ll cover how to write small programs or “shell scripts” to help manage your system.

When typing at the Linux command-line you're directly communicating with "the command interpreter", also known as "the shell". Normally this shell is bash, so when you string commands together to make a script the result can be called either a "shell script" or a "bash script".

Why make a script rather than just typing commands in manually?

  • It saves typing. Remember when we searched through the logs with a long string of grep, cut and sort commands? If you need to do something like that more than a few times then turning it into a script saves typing - and typos!
  • Parameters. One script can be used to do several things depending on what parameters you provide
  • Automation. Pop your script in /etc/cron.daily and it will run each day, or install a symlink to it in the appropriate /etc/rc.d folder and you can have it run each time the system is shut down or booted up.

YOUR TASKS TODAY

  • Write a short script that lists the top 3 IP addresses that tried to log in to your server

START WITH A SHEBANG!

Scripts are just simple text files, but if you set the "execute" permissions on them then the system will look for a special line starting with the two characters “#” and “!” - referred to as the "shebang" (or "crunchbang") at the top of the file.

This line typically looks like this:

 #!/bin/bash

Normally anything starting with a "#" character would be treated as a comment, but in the first line and followed by a "!", it's interpreted as: "please feed the rest of this to the /bin/bash program, which will interpret it as a script". All of our scripts will be written in the bash language - the same as you’ve been typing at the command line throughout this course - but scripts can also be written in many other "scripting languages", so a script in the Perl language might start with #!/usr/bin/perl and one in Python #!/usr/bin/env python3

YOUR FIRST SCRIPT

You'll write a small script to list out who's most recently been unsuccessfully trying to log in to your server, using the entries in /var/log/auth.log.

Use vim to create a file, attacker, in your home directory with this content:

 #!/bin/bash
 #
 #   attacker - prints out the last failed login attempt
 #
 echo "The last failed login attempt came from IP address:"
 grep -i "disconnected from" /var/log/auth.log | tail -1 | cut -d: -f4 | cut -d" " -f7

Putting comments at the top of the script like this isn't strictly necessary (the computer ignores them), but it's a good professional habit to get into.

To make it executable type:

chmod +x attacker

Now to run this script, you just need to refer to it by name - but the current directory is (deliberately) not in your $PATH, so you need to do this in either of two ways:

 /home/support/attacker
 ./attacker

Once you're happy with a script, and want to have it easily available, you'll probably want to move it somewhere on your $PATH - and /usr/local/bin is normally the appropriate place, so try this:

sudo mv attacker /usr/local/bin/attacker

...and now it will Just Work whenever you type attacker

EXTENDING THE SCRIPT

You can expand this script so that it requires a parameter and prints out some syntax help when you don't give one. There are a few new tricks in this, so it's worth studying:

 #!/bin/bash
 #
 ##   topattack - list the most persistent attackers
 #
 if [ -z "$1" ]; then
 echo -e "\nUsage: `basename $0` <num> - Lists the top <num> attackers by IP"
 exit 0
 fi
 echo " "
 echo "Persistent recent attackers"
 echo " "
 echo "Attempts      IP "
 echo "-----------------------"
 grep "Disconnected from authenticating user root" /var/log/auth.log | cut -d: -f4 | cut -d" " -f7 | sort | uniq -c | sort -nr | head -n "$1"

Again, use vim to create "topattack", chmod to make it executable and mv to move it into /usr/local/bin once you have it working correctly.

(BTW, you can use whois to find details on any of these IPs - just be aware that the system that is "attacking" you may be an innocent party that's been hacked into).
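For example, a quick lookup might go like this (a sketch - the IP is a documentation placeholder, so substitute one from your own logs):

 sudo apt install whois        # if it's not already installed
 whois 203.0.113.42 | less     # look for the network owner and abuse-contact details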

A collection of simple scripts like this is something that you can easily create to make your sysadmin tasks simpler, quicker and less error prone.

If automating and scripting many of your daily tasks sounds like something you really like doing, you might also want to script the setup of your machines and services. Even though you can do this using bash scripting as shown in this lesson, there are some benefits in choosing an orchestration framework like Ansible, cloud-init or Terraform. Those frameworks are outside the scope of this course, but might be worth reading about.

And yes, this is the last lesson - so please, feel free to write a review on how the course went for you and what you plan to do with your new knowledge and skills!

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

Today's topic gives a peek “under the covers” at the technical detail of how files are stored.

Linux supports a large number of different “filesystems” - although on a server you’ll typically be dealing with just ext3 or ext4, and perhaps btrfs. But today we won’t be dealing with any of these directly; instead we'll look at the layer of Linux that sits above all of them - the Linux Virtual Filesystem.

The VFS is a key part of Linux, and an overview of it and some of the surrounding concepts is very useful in confidently administering a system.

THE NEXT LAYER DOWN

Linux has an extra layer between the filename and the file's actual data on the disk - this is the inode. This has a numerical value which you can see most easily in two ways:

The -i switch on the ls command:

 ls -li /etc/hosts
 35356766 -rw------- 1 root root 260 Nov 25 04:59 /etc/hosts

The stat command:

 stat /etc/hosts
 File: `/etc/hosts'
 Size: 260           Blocks: 8           IO Block: 4096   regular file
 Device: 2ch/44d     Inode: 35356766     Links: 1
 Access: (0600/-rw-------)  Uid: (    0/    root)   Gid: (    0/    root)
 Access: 2012-11-28 13:09:10.000000000 +0400
 Modify: 2012-11-25 04:59:55.000000000 +0400
 Change: 2012-11-25 04:59:55.000000000 +0400

Every file name "points" to an inode, which in turn points to the actual data on the disk. This means that several filenames could point to the same inode - and hence have exactly the same contents. In fact this is a standard technique - called a "hard link". The other important thing to note is that when we view the permissions, ownership and dates of filenames, these attributes are actually kept at the inode level, not the filename. Much of the time this distinction is just theoretical, but it can be very important.

TWO SORTS OF LINKS

Work through the steps below to get familiar with hard and soft linking:

First move to your home directory with:

cd

Then use the ln ("link") command to create a “hard link”, like this:

ln /etc/passwd link1

and now a "symbolic link" (or “symlink”), like this:

ln -s /etc/passwd link2

Now use ls -li to view the resulting files, and less or cat to view them.

Note that the permissions on a symlink generally show as allowing everything - but what matters is the permission of the file it points to.

Both hard and symlinks are widely used in Linux, but symlinks are especially common - for example:

ls -ltr /etc/rc2.d/*

This directory holds all the scripts that start when your machine changes to “runlevel 2” (its normal running state) - but you'll see that in fact most of them are symlinks to the real scripts in /etc/init.d

It's also very common to have something like :

 prog
 prog-v3
 prog-v4

where the program "prog" is a symlink - originally to v3, but now pointing to v4 (and it could be pointed back if required)

Read up in the resources provided, and test on your server to gain a better understanding. In particular, see how permissions and file sizes work with symbolic links versus hard links or simple files.

The Differences

Hard links:

  • Only link to a file, not a directory
  • Can't reference a file on a different disk/volume
  • Links will reference a file even if it is moved
  • Links reference inode/physical locations on the disk

Symbolic (soft) links:

  • Can link to directories
  • Can reference a file/folder on a different hard disk/volume
  • Links remain if the original file is deleted
  • Links will NOT reference the file anymore if it is moved
  • Links reference abstract filenames/directories and NOT physical locations.
  • They have their own inode
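A quick way to see these differences in action is with a throwaway file in your home directory - a sketch:

 echo "hello" > original
 ln original hard             # hard link: same inode, link count rises to 2
 ln -s original soft          # symlink: its own inode, pointing at the *name* "original"
 ls -li original hard soft    # compare the inode numbers and sizes
 rm original
 cat hard                     # still prints "hello" - the data survives while any hard link remains
 cat soft                     # fails - the symlink now points at a name that no longer exists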

EXTENSION

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

When you’re administering a remote server, logs are your best friend, but disk space problems can be your worst enemy - so while Linux applications are generally very good at generating logs, they need to be controlled.

The logrotate application keeps your logs in check. Using this, you can define how many days of logs you wish to keep; split them into manageable files; compress them to save space, or even keep them on a totally separate server.

Good sysadmins love automation - having the computer automatically do the boring repetitive stuff Just Makes Sense.

YOUR TASKS TODAY

  • Check the logs for apache2 that are Severity 3
  • Edit logrotate configuration for apache2 to rotate daily

ARE YOUR LOGS ROTATING?

Look into your logs directories - /var/log, and subdirectories like /var/log/apache2. Can you see that your logs are already being rotated? You should see a /var/log/syslog file, but also a series of older compressed versions with names like /var/log/syslog.1.gz

WHEN DO THEY ROTATE?

You will recall that cron is generally set up to run scripts in /etc/cron.daily - so look in there and you should see a script called logrotate - or possibly 00logrotate to force it to be the first task to run.

CONFIGURING LOGROTATE

The overall configuration is set in /etc/logrotate.conf - have a look at that, but then also look at the files under the directory /etc/logrotate.d, as the contents of these are merged in to create the full configuration. You will probably see one called apache2, with contents like this:

 /var/log/apache2/*.log {
         weekly
         missingok
         rotate 52
         compress
         delaycompress
         notifempty
         create 640 root adm
 }

Much of this is fairly clear: any apache2 .log file will be rotated each week, with 52 compressed copies being kept.

Typically when you install an application a suitable logrotate “recipe” is installed for you, so you’ll not normally be creating these from scratch. However, the default settings won’t always match your requirements, so it’s perfectly reasonable for you as the sysadmin to edit these - for example, the default apache2 recipe above creates 52 weekly logs, but you might find it more useful to have logs rotated daily, a copy automatically emailed to an auditor, and just 30 days worth kept on the server.
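A sketch of how that edited recipe might look - the address is a placeholder, and the mail directive (which emails each log as it ages out of the rotation) assumes a working mail setup on your server:

 /var/log/apache2/*.log {
         daily
         missingok
         rotate 30
         compress
         delaycompress
         notifempty
         mail auditor@example.com
         create 640 root adm
 }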

RESOURCES

TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!

Practice what you've learned with some challenges at SadServers.com:

Some rights reserved. Check the license terms here

 

INTRO

A few days ago we saw how to authorise extra repositories for apt-cache to search when we need unusual applications, or perhaps more recent versions than those in the standard repositories.

Today we're going one step further - literally going to "go to the source". This is not something to be done lightly - the whole reason for package managers is to make your life easy - but occasionally it is justified, and it is something you need to be aware of and comfortable with.

The applications we've been installing up to this point have come from repositories. The files there are "binaries" - pre-compiled, and often customised by your distro. What might not be clear is that your distro gets these applications from a diverse range of un-coordinated development projects (the "upstream"), and these developers are continuously working on new versions. We’ll go to one of these, download the source, compile and install it.

(Another big part of what package managers like apt do is to identify and install any required "dependencies". In the Linux world many open source apps take advantage of existing infrastructure in this way, but it can be a very tricky thing to resolve manually. However, the app we're installing today from source is relatively unusual in being completely standalone).

FIRST WE NEED THE ESSENTIALS

Projects normally provide their applications as "source files", written in the C, C++ or other computer languages. We're going to pull down such a source file, but it won't be any use to us until we compile it into an "executable" - a program that our server can execute. So, we'll need to first install a standard bundle of common compilers and similar tools. On Ubuntu, the package of such tools is called “build-essential". Install it like this:

sudo apt install build-essential

GETTING THE SOURCE

First, test that you already have nmap installed, and type nmap -V to see what version you have. This is the version installed from your standard repositories. Next, type: which nmap - to see where the executable is stored.

Now let’s go to the "Project Page" for the developers http://nmap.org/ and grab the very latest cutting-edge version. Look for the download page, then the section “Source Code Distribution” and the link for the "Latest development nmap release tarball" and note the URL for it - something like:

 https://nmap.org/dist/nmap-7.70.tar.bz2

This is version 7.70, the latest development release when these notes were written, but it may be different now. So now we'll pull this down to your server. The first question is where to put it - we'll put it in your home directory, so change to your home directory with:

cd

then simply use wget ("web get") to download the file like this:

wget -v https://nmap.org/dist/nmap-7.70.tar.bz2

The -v (for verbose), gives some feedback so that you can see what's happening. Once it's finished, check by listing your directory contents:

ls -ltr

As we’ve learnt, the end of the filename is typically a clue to the file’s format - in this case ".bz2" signals that it's a tarball compressed with the bz2 algorithm. While we could uncompress this then un-combine the files in two steps, it can be done with one command - like this:

tar -j -x -v -f nmap-7.70.tar.bz2

....where the -j means "uncompress a bz2 file first", -x is extract, -v is verbose - and -f says "the filename comes next". Normally we'd actually do this more concisely as:

tar -jxvf nmap-7.70.tar.bz2

So, let's see the results:

ls -ltr

Remembering that directories have a leading "d" in the listing, you'll see that a directory has been created:

 -rw-r--r--  1 steve  steve  21633731    2011-10-01 06:46 nmap-7.70.tar.bz2
 drwxr-xr-x 20 steve  steve  4096        2011-10-01 06:06 nmap-7.70

Now explore the contents of this with mc or simply cd nmap-7.70 - you should be able to use ls and less to find and read the actual source code. Even if you know no programming, the comments can be entertaining reading.

By convention, source files will typically include in their root directory a series of text files in uppercase such as: README and INSTALLATION. Look for these, and read them using more or less. It's important to realise that the programmers of the "upstream" project are not writing for Ubuntu, CentOS - or even Linux. They have written a correct working program in C or C++ etc and made it available, but it's up to us to figure out how to compile it for our operating system, chip type etc. (This hopefully gives a little insight into the value that distributions such as CentOS, Ubuntu and utilities such as apt, yum etc add, and how tough it would be to create your own Linux From Scratch)

So, in this case we see an INSTALL file that says something terse like:

 Ideally, you should be able to just type:

 ./configure
 make
 make install

 For far more in-depth compilation, installation, and removal notes
 read the Nmap Install Guide at http://nmap.org/install/ .

In fact, this is fairly standard for many packages. Here's what each of the steps does:

  • ./configure - is a script which checks your server (ie to see whether it's ARM or Intel based, 32 or 64-bit, which compiler you have etc). It can also be given parameters to tailor the compilation of the software, such as to not include any extra support for running in a GUI environment - something that would make sense on a "headless" (remote text-only server), or to optimize for minimum memory use at the expense of speed - as might make sense if your server has very little RAM. If asked any questions, just take the defaults - and don't panic if you get some WARNING messages, chances are that all will be well.
  • make - compiles the software, typically calling the GNU compiler gcc. This may generate lots of scary looking text, and take a minute or two - or as much as an hour or two for very large packages like LibreOffice.
  • make install - this step takes the compiled files, and installs that plus documentation to your system and in some cases will setup services and scheduled tasks etc. Until now you've just been working in your home directory, but this step installs to the system for all users, so requires root privileges. Because of this, you'll need to actually run: sudo make install. If asked any questions, just take the defaults.
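Putting the whole sequence together for nmap, it looks something like this:

 cd ~/nmap-7.70
 ./configure          # inspect the system and generate a tailored Makefile
 make                 # compile the source - this can take a few minutes
 sudo make install    # copy the results into the system - needs root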

Now, potentially this last step will have overwritten the nmap you already had, but more likely this new one has been installed into a different place.

In general /bin is for key parts of the operating system, /usr/bin for less critical utilities and /usr/local/bin for software you've chosen to manually install yourself. When you type a command the system will search through each of the directories given in your PATH environment variable, and start the first match. So, if /bin/nmap exists, it will run instead of /usr/local/bin/nmap - but if you give the "full path" to the version you want - such as /usr/local/bin/nmap - it will run that version instead.

The “locate” command allows very fast searching for files, but because these files have only just been added, we'll need to manually update the index of files:

sudo updatedb

Then to search the index:

locate bin/nmap

This should find both your old and new copies of nmap.

Now try running each, for example:

/usr/bin/nmap -V

/usr/local/bin/nmap -V

The nmap utility relies on no other package or library, so is very easy to install from source. Most other packages have many "dependencies", so installing them from source by hand can be pretty challenging even when well explained (look at: http://oss.oetiker.ch/smokeping/doc/smokeping_install.en.html for a good example).

NOTE: Because you've done all this outside of the apt system, this binary won't get updates when you run apt update. Not a big issue with a utility like nmap probably, but for anything that runs as an exposed service it's important that you understand that you now have to track security alerts for the application (and all of its dependencies), and install the later fixed versions when they're available. This is a significant pain/risk for a production server.

POSTING YOUR PROGRESS

Pat yourself on the back if you succeeded today - and let us know in the forum.

EXTENSION

Research some distributions where “from source” is normal - Gentoo and Linux From Scratch are two well-known examples.

None of these is typically used in production servers, but investigating any of them will certainly increase your knowledge of how Linux works "under the covers" - asking you to make many choices that the production-ready distros such as RHEL and Ubuntu do on your behalf by choosing what they see as sensible defaults.

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

As a system administrator, you need to be able to confidently work with compressed “archives” of files. In particular, two of your key responsibilities - installing new software and managing backups - often require this.

CREATING ARCHIVES

On other operating systems, applications like WinZip, and pkzip before it, have long been used to gather a series of files and folders into one compressed file - with a .zip extension. Linux takes a slightly different approach, with the "gathering" of files and folders done in one step, and the compression in another.

So, you could create a "snapshot" of the current files in your /etc/init.d folder like this:

tar -cvf myinits.tar /etc/init.d/

This creates myinits.tar in your current directory.

Note 1: The -v switch (verbose) is included to give some feedback - traditionally many utilities provide no feedback unless they fail.

Note 2: The -f switch specifies that “the output should go to the filename which follows” - so in this case the order of the switches is important.

(The cryptic “tar” name? - originally short for "tape archive")

You could then compress this file with GnuZip like this:

gzip myinits.tar

...which will create myinits.tar.gz. A compressed tar archive like this is known as a "tarball". You will also sometimes see tarballs with a .tgz extension - at the Linux commandline this doesn't have any meaning to the system, but is simply helpful to humans.

In practice you can do the two steps in one with the "-z" switch, like this:

tar -cvzf myinits.tgz /etc/init.d/

This uses the -c switch to say that we're creating an archive; -v to make the command "verbose"; -z to compress the result - and -f to specify the output file.
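You'll find extraction covered in the RESOURCES below, but as a preview, the matching options look like this:

 tar -tzvf myinits.tgz    # -t lists the contents without extracting anything
 tar -xzvf myinits.tgz    # -x extracts - do this in /tmp so you don't overwrite anything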

TASKS FOR TODAY

  • Check the links under "Resources" to better understand this - and to find out how to extract files from an archive!
  • Use tar to create an archive copy of some files and check the resulting size
  • Run the same command, but this time use -z to compress - and check the file size
  • Copy your archives to /tmp (with: cp) and extract each there to test that it works

POSTING YOUR PROGRESS

Nothing to post today - but make sure you understand this stuff, because we'll be using it for real in the next day's session!

EXTENSION

  • What is a .bz2 file - and how would you extract the files from it?
  • Research how absolute and relative paths are handled in tar - and why you need to be careful extracting from archives when logged in as root
  • You might notice that some tutorials write "tar cvf" rather than "tar -cvf" with the switch character - do you know why?

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

Early on you installed some software packages to your server using apt install. That was fairly painless, and we explained how the Linux model of software installation is very similar to how "app stores" work on Android, iPhone, and increasingly in MacOS and Windows.

Today however, you'll be looking "under the covers" to see how this works; better understand the advantages (and disadvantages!) - and to see how you can safely extend the system beyond the main official sources.

REPOSITORIES AND VERSIONS

Any particular Linux installation has a number of important characteristics:

  • Version - e.g. Ubuntu 20.04, CentOS 5, RHEL 6
  • "Bit size" - 32-bit or 64-bit
  • Chip - Intel, AMD, PowerPC, ARM

The version number is particularly important because it controls the versions of application that you can install. When Ubuntu 18.04 was released (in April 2018 - hence the version number!), it came out with Apache 2.4.29. So, if your server runs 18.04, then even if you installed Apache with apt five years later that is still the version you would receive. This provides stability, but at an obvious cost for web designers who hanker after some feature which later versions provide. (Security patches are made to the repositories, but by "backporting" security fixes from later versions into the old stable version that was first shipped).

WHERE IS ALL THIS SETUP?

We'll be discussing the "package manager" used by the Debian and Ubuntu distributions, and dozens of derivatives. This uses the apt command, but for most purposes the competing yum and dnf commands used by Fedora, RHEL, CentOS and Scientific Linux work in a very similar way - as do the equivalent utilities in other versions.

The configuration is done with files under the /etc/apt directory, and to see where the packages you install are coming from, use less to view /etc/apt/sources.list where you'll see lines that are clearly specifying URLs to a “repository” for your specific version:

 deb http://archive.ubuntu.com/ubuntu precise-security main restricted universe

There's no need to be concerned with the exact syntax of this for now, but what’s fairly common is to want to add extra repositories - and this is what we'll deal with next.

EXTRA REPOSITORIES

While there's an amazing amount of software available in the "standard" repositories (more than 3,000 for CentOS and ten times that number for Ubuntu), there are often packages not available - typically for one of two reasons:

  • Stability - CentOS is based on RHEL (Red Hat Enterprise Linux), which is firmly focussed on stability in large commercial server installations, so games and many minor packages are not included
  • Ideology - Ubuntu and Debian have a strong "software freedom" ethic (this refers to freedom, not price), which means that certain packages you may need are unavailable by default

So, next you’ll add an extra repository to your system, and install software from it.

ENABLING EXTRA REPOSITORIES

First do a quick check to see how many packages you could already install. You can get the full list and details by running:

apt-cache dump

...but you'll want to press Ctrl-c a few times to stop that, as it's far too long-winded.

Instead, filter out just the package names using grep, and count them using: wc -l (wc is "word count", and the "-l" makes it count lines rather than words) - like this:

apt-cache dump | grep "Package:" | wc -l

These are all the packages you could now install. Sometimes there are extra packages available if you enable extra repositories. Most Linux distros have a similar concept, but in Ubuntu, often the "Universe" and "Multiverse" repositories are disabled by default. These are hosted at Ubuntu, but with less support, and Multiverse: "contains software which has been classified as non-free ...may not include security updates". Examples of useful tools in Multiverse might include the compression utilities rar and lha, and the network performance tool netperf.

To enable the "Multiverse" repository, follow the guide at:
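On recent Ubuntu releases this usually comes down to a single command - a sketch, so check the guide for the details on your version:

 sudo add-apt-repository multiverse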

After adding this, update your local cache of available applications:

sudo apt update

Once done, you should be able to install netperf like this:

sudo apt install netperf

...and the output will show that it's coming from Multiverse.

EXTENSION - Ubuntu PPAs

Ubuntu also allows users to register an account and setup software in a Personal Package Archive (PPA) - typically these are setup by enthusiastic developers, and allow you to install the latest "cutting edge" software.

As an example, install and run the neofetch utility. When run, this prints out a summary of your configuration and hardware. This is in the standard repositories, and neofetch --version will show the version. If for some reason you wanted to have a later version, you could install a developer's Neofetch PPA to your software sources by:

sudo add-apt-repository ppa:ubuntusway-dev/dev

As always, after adding a repository, update your local cache of available applications:

sudo apt update

Then install the package with:

sudo apt install neofetch

Check with neofetch --version to see what version you have now.

Check with apt-cache show neofetch to see the details of the package.

When you next run "sudo apt upgrade" you'll likely be prompted to install a new version of neofetch - because the developers are sometimes literally making changes every day. (And if it's not obvious, when the developers have a bad day your software will stop working until they make a fix - that's the real "cutting edge"!)

SUMMARY

Installing only from the default repositories is clearly the safest, but there are often good reasons for going beyond them. As a sysadmin you need to judge the risks, but in the example we came up with a realistic scenario where connecting to a developer's unstable working version made sense.

As a general rule, however, you:

  • Will seldom have good reasons for hooking into more than one or two extra repositories
  • Need to read up about a repository first, to understand any potential disadvantages.

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

Files on a Linux system always have associated "permissions" - controlling who has access and what sort of access. You'll have bumped into this in various ways already - as an example, yesterday while logged in as your "ordinary" user, you could not upload files directly into /var/www or create a new folder at /.

The Linux permission system is quite simple, but it does have some quirky and subtle aspects, so today is simply an introduction to some of the basic concepts.

This time you really do need to work your way through the material in the RESOURCES section!

YOUR TASKS TODAY

  • Change the ownership of a file to root
  • Change file permissions

OWNERSHIP

First let's look at "ownership". All files are tagged with both the name of the user and the group that owns them, so if we type ls -l and see a file listing like this:

-rw-------  1 steve  staff  4478979  6 Feb  2011 private.txt
-rw-rw-r--  1 steve  staff  4478979  6 Feb  2011 press.txt
-rwxr-xr-x  1 steve  staff  4478979  6 Feb  2011 upload.bin

Then these files are owned by user "steve", and the group "staff". Anyone that is not "steve" or is not part of the group "staff" is considered "other". Others may still have permissions to handle these files, but they do not have any ownership.

If you want to change the ownership of a file, use the chown utility. This will change the user owner of a file to a new user:

sudo chown user file

You can also change user and group at the same time:

sudo chown user:group file

If you only need to change the group owner, you can use the chgrp command instead:

sudo chgrp group file

Since you created new users in the previous lesson, switch logins and create a few files in their home directories for testing. See how they show up with ls -l.

PERMISSIONS (SYMBOLIC NOTATION)

Look at the -rw-r--r-- at the start of a directory listing line (ignore the first "-" for now), and see it as three groups of "rwx": the permissions granted to the "user" who owns the file, the "group", and "other people" - we like to call that UGO.

For the example list above:

  • private.txt - Steve has rw (ie Read and Write) permission, but neither the group "staff" nor "other people" have any permission at all
  • press.txt - Steve can Read and Write to this file too, but so can any member of the group "staff" and anyone, i.e. "other people", can read it
  • upload.bin - Steve has rwx, he can read, write and execute - i.e. run this program - but the group and others can only read and execute it

You can change the permissions on any file with the chmod utility. Create a simple text file in your home directory with vim (e.g. tuesday.txt) and check that you can list its contents by typing: cat tuesday.txt or less tuesday.txt.

Now look at its permissions by doing: ls -ltr tuesday.txt

-rw-rw-r-- 1 ubuntu ubuntu   12 Nov 19 14:48 tuesday.txt

So, the file is owned by the user "ubuntu", and group "ubuntu", who are the only ones that can write to the file - but any other user can only read it.

CHANGING PERMISSIONS

Now let’s remove the permission of the user and the "ubuntu" group to write to the file:

chmod u-w tuesday.txt

chmod g-w tuesday.txt

...and remove the permission for "others" to read the file:

chmod o-r tuesday.txt

Do a listing to check the result:

-r--r----- 1 ubuntu ubuntu   12 Nov 19 14:48 tuesday.txt

...and confirm by trying to edit the file with nano or vim. You'll find that you appear to be able to edit it - but can't save any changes. (In this case, as the owner, you have "permission to override permissions", so can write with :w!). You can of course easily give yourself back the permission to write to the file by:

chmod u+w tuesday.txt

POSTING YOUR PROGRESS

Just for fun, create a file: secret.txt in your home folder, take away all permissions from it for the user, group and others - and see what happens when you try to edit it with vim.
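One way to set that up, as a sketch:

 touch secret.txt
 chmod ugo-rwx secret.txt    # strip read, write and execute from user, group and others
 ls -l secret.txt            # should now show: ----------
 vim secret.txt              # ...so what happens now?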

EXTENSION

If all of this is old news to you, you may want to look into Linux ACLs:

Also, SELinux and AppArmor:

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

Today you're going to set up another user on your system. You're going to imagine that this is a help-desk person that you trust to do just a few simple tasks:

  • check that the system is running
  • check disk space with: df -h

...but you also want them to be able to reboot the system, because you believe that "turning it off and on again" resolves most problems :-)

You'll be covering several new areas, so have fun!

YOUR TASKS TODAY

  • Create a new user
  • Create a new group
  • Create a new user and add to an existing group
  • Make a new user a sudoer

Follow this demo

ADDING A NEW USER

Choose a name for your new user - we'll use "helen" in the examples, so to add this new user:

sudo adduser helen

(Names are case-sensitive in Linux, so "Helen" would be a completely different user)

The "adduser" command works very slightly differently in each distro - if it didn't ask you for a password for your new user, then set it manually now by:

sudo passwd helen

You will now have a new entry in the simple text database of users: /etc/passwd (check it out with: less), and a group of the same name in the file: /etc/group. A hash of the password for the user is in: /etc/shadow (you can read this too if you use "sudo" - check the permissions to see how they're set. For obvious reasons it's not readable to just everyone).

If you're used to other operating systems it may be hard to believe, but these simple text files are the whole Linux user database and you could even create your users and groups by directly editing these files - although this isn’t normally recommended.

Additionally, adduser will have created a home directory, /home/helen for example, with the correct permissions.

ATTENTION! useradd is not the same as adduser. They both create a new user, but they interact very differently. Check the link in the EXTENSION section to see those differences.

ADDING A NEW GROUP

Let's say we want all of the developers in our organization to have their own group, so they can have access to the same things.

sudo groupadd developers

On most modern Linux systems there is a group created for each user, so user "ubuntu" is a member of the group "ubuntu". But if you want, you can create a new user directly into an existing group, using the --ingroup flag. So a new user fred would be created like this:

sudo adduser --ingroup developers fred

ADDING A USER TO GROUPS

Users can also be part of more than one group, and groups can be added as required.

To see what groups you're a member of, simply type: groups

On an Ubuntu system the first user created (in your case ubuntu), should be a member of the groups: ubuntu, sudo and admin - and if you list the /var/log folder you'll see your membership of the sudo group is why you can use less to read and view the contents of /var/log/auth.log

The "root" user can add a user to an existing group with the command:

usermod -a -G group user

so your ubuntu user can do the same simply by prefixing the command with sudo.

Because the new user helen is not the first user created in the system, they don't have the power to run sudo - which your user has by being a member of the group sudo.

So, to check which groups helen is a member of, you can "become helen" by switching users like this:

sudo su helen

Then:

groups

If you try to do stuff only a sudo user can do, e.g. read the contents of /var/log/auth.log, even using the prefix sudo won't work. Helen is not a sudoer and has no permission to perform this action.

Now type "exit" to return to your normal user, and you can add helen to this group with:

sudo usermod -a -G sudo helen

Instead of switching users again, simply run groups helen to check. Try that with fred too and check how everything works.

See if any of your new users can sudo reboot.

CLEVER SUDO TRICKS

Your new user is just an ordinary user and so can't use sudo to run commands with elevated privileges - until we set them up. We could simply add them to a group that's pre-defined to be able to use sudo to do anything as root (like we did with helen) - but we don't want to give fred quite that same amount of power.

Use ls -l to look at the permissions for the file /etc/sudoers. This is where the magic is defined, and you'll see that it's tightly controlled - but you should be able to view it with: sudo less /etc/sudoers. You want to add a new entry in there for your new user, and for this you need to run a special utility: visudo.

To run this, you can temporarily "become root" by running:

sudo -i

Notice that your prompt has changed to a #

Now simply run visudo to begin editing /etc/sudoers - typically this will use nano.

All lines in /etc/sudoers beginning with "#" are optional comments. You'll want to add some lines like this:

# Allow user "fred" to run "sudo reboot"
# ...and don't prompt for a password
#
fred ALL = NOPASSWD:/sbin/reboot

You can add these lines in wherever seems reasonable. The visudo command will automatically check your syntax, and won't allow you to save if there are mistakes - because a corrupt sudoers file could lock you out of your server!

Type exit to remove your magic hat and become your normal user again - and notice that your prompt reverts to: $

TESTING

Test by logging in as your test user and typing: sudo reboot. Note that you can "become" helen by:

sudo su helen

If your ssh config allows login only with public keys, you'll need to setup /home/helen/.ssh/authorized_keys - including getting the owner and permissions correct. A little challenge of your understanding of this area!
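If you get stuck, the steps look roughly like this - a sketch, assuming you want helen to accept the same public key as your main user:

 sudo mkdir /home/helen/.ssh
 sudo cp ~/.ssh/authorized_keys /home/helen/.ssh/
 sudo chown -R helen:helen /home/helen/.ssh
 sudo chmod 700 /home/helen/.ssh
 sudo chmod 600 /home/helen/.ssh/authorized_keys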

EXTENSION

If you find this all pretty familiar, then you might like to check and update your knowledge on a couple of related areas:

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

You've now had a working Internet server of your own for some time, and seen how you can create and edit small files there. You've created a web server where you've been able to edit a simple web page.

Today we'll be looking at how you can move files between your other systems and this server - tasks like:

  • Taking a copy of some files from your server onto your desktop machine
  • Copying up some text to your server to put on your webpage
  • Uploading some photos and logos for your webpage

YOUR TASKS TODAY

  • Upload a file to the server
  • Download a file from the server
  • Synchronize a backup

PROTOCOLS

There are a wide range of ways a Linux server can share files, including:

  • SMB: Microsoft's file sharing, useful on a local network of Windows machines
  • AFP: Apple’s file sharing, useful on a local network of Apple machines
  • WebDAV: Sharing over web (http) protocols
  • FTP: Traditional Internet sharing protocol
  • scp: Simple support for copying files
  • rsync: Fast, very efficient file copying
  • SFTP: file access and copying over the SSH protocol (Despite the name, the SFTP protocol at a technical level is completely unrelated to traditional FTP)

Each of these have their place, but for copying files back and forth from your local desktop to your server, SFTP has a number of key advantages:

  • No extra setup is required on your server
  • Top quality security
  • Allows browsing through the directory structure
  • You can create and delete folders

If you’re successfully logging in via ssh from your home, work or a cybercafe then you'll also be able to use SFTP from this same location because the same underlying protocol is being used.

By contrast, setting up your server for any of the other protocols will require extra work. Not only that, enabling extra protocols also increases the "attack surface" - and there's always a chance that you’ll mis-configure something in a way that allows an attacker in. It's also very likely that restrictive firewall policies at a workplace will interfere with or block these protocols. Finally, while old-style FTP is still very commonly used, it sends login credentials "in clear", so that your flatmates, cafe buddies or employer may be able to grab them off the network by "packet sniffing". Not a big issue with your "classroom" server - but it's an unacceptable risk if you're remotely administering production servers.

SFTP client software

What’s required to use SFTP is some client software. A command-line client (unsurprisingly called sftp) comes standard on every Apple OSX or Linux system. If you're using a Linux desktop, you also have a built-in GUI client via your file manager. This will allow you to easily attach to remote servers via SFTP. (For the Nautilus file manager for example, press ctrl + L to bring up the 'location window" and type: sftp://username@myserver-address).
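A minimal command-line session might look like this - the address and filenames are placeholders:

 sftp support@203.0.113.10    # connects with the same credentials as ssh
 sftp> ls                     # list the remote home directory
 sftp> get report.txt         # download a file to your local machine
 sftp> put logo.png images/   # upload a local file into the remote "images" folder
 sftp> quit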

Although Windows and Apple macOS have no built-in GUI client there are a wide range of third-party options available, both free and commercial. If you don't already have such a client installed, then choose one such as:

  • WinSCP or FileZilla - for Windows users
  • CyberDuck or FileZilla - for macOS users

Download locations are under the RESOURCES section.

Configuring and using your choice of these should be straightforward. The only real potential for confusion is that these clients generally support a wide range of protocols such as scp and FTP that we're not going to use. When you're asked for SERVER, give your server's IP address, PORT will be 22, and PROTOCOL will be SFTP or SSH.

INSTRUCTIONS

  • Configure your chosen SFTP client to login to your server as your username
  • Copy some files from your server down to your local desktop (try files from your "home" folder, and from /var/log)
  • Create an "images" folder under your "home" folder on the server, and upload some images to it from your desktop machine
  • Go up to the root directory. You should see /etc, /bin and other folders. Try to create an "images" folder here too - this should fail, because you are logged in as an ordinary user, so you won't have permission to create new files or folders. In your own "home" directory you of course have full permission.

Once the files are uploaded you can login via ssh and use sudo to give yourself the necessary power to move files about.
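For the "synchronize a backup" task, rsync (which also runs over ssh) is the usual tool. A sketch, with the address and paths as placeholders:

 rsync -av support@203.0.113.10:/home/support/ ~/server-backup/

The -a switch preserves permissions and timestamps, and on a second run only changed files are transferred - which is what makes rsync so efficient.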

RESOURCES

Some rights reserved. Check the license terms here

 

INTRO

Today we’ll look at how you find files, and text inside these files, quickly and efficiently.

It can be very frustrating to know that a file or setting exists, but not be able to track it down! Master today’s commands and you’ll be much more confident as you administer your systems.

Today you’ll look at some useful tools:

  • locate
  • find
  • grep
  • which

YOUR TASKS TODAY

  • Find all files that have the word "Permission" in them

INSTRUCTIONS

locate

If you're looking for a file called access.log then the quickest approach is to use "locate" like this:

$ locate access.log
/var/log/apache2/access.log
/var/log/apache2/access.log.1
/var/log/apache2/access.log.2.gz

(If locate is not installed, do so with sudo apt install mlocate)

As you can see, by default it treats a search for "something" as a search for "*something*". It’s very fast because it searches an index, but if this index is out of date or missing it may not give you the answer you’re looking for. This is because the index is created by the updatedb command - typically run only nightly by cron. It may therefore be out of date for recently added files, so it can be worthwhile updating the index by manually running: sudo updatedb.

find

The find command searches down through a directory structure looking for files which match some criteria - which could be name, but also size, or when last updated etc. Try these examples:

find /var -name access.log
find /home -mtime -3

The first searches for files with the name "access.log", the second for any file under /home with a last-modified date in the last 3 days.

These will take longer than locate did because they search through the filesystem directly rather than from an index. Also, because find uses the permissions of the logged-in user you’ll get “permission denied” messages for many directories if you search the whole system. Starting the command with sudo of course will run it as root - or you could filter the errors with grep like this: find /var -name access.log 2>&1 | grep -vi "Permission denied".

These examples are just the tip of a very large iceberg, check the articles in the RESOURCES section and work through as many examples as you can - time spent getting really comfortable with find is not wasted.

grep -R

Rather than asking "grep" to search for text within a specific file, you can give it a whole directory structure, and ask it to recursively search down through it, including following all symbolic links (which -r does not). This trick is particularly handy when you "just know" that an item appears "somewhere" - but are not sure where.

As an example, you know that “PermitRootLogin” is an ssh parameter in a config file somewhere under /etc, but can’t recall exactly where it is kept:

grep -R -i "PermitRootLogin" /etc/*

Because this only works on plain text files, it's most useful for the /etc and /var/log folders. (Notice the -i which makes the search “case insensitive”, finding the setting even if it’s been entered as “Permitrootlogin”.)

You may now have logs like /var/log/access.log.2.gz - these are older logs that have been compressed to save disk space - so you can't read them with less, or search them with grep. However, there are zless and zgrep, which do work, and on ordinary as well as compressed files.
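For example:

 zless /var/log/syslog.2.gz                  # page through a compressed log
 zgrep -i "permission" /var/log/auth.log*    # searches compressed and uncompressed logs alike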

which

It's sometimes useful to know where a command is being run from. If you type nano, and it starts, where is the nano binary coming from? The general rule is that the system will search through the locations setup in your "path". To see this type:

echo $PATH

To see where nano comes from, type:

which nano

Try this for grep, vi, service and reboot. You'll notice that they’re typically always in subfolders named bin, but that there are several different ones.

EXTENSION

The -exec feature of the find command is extremely powerful.
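For example, a couple of sketches:

 find /var/log -name "*.log" -size +1M -exec ls -lh {} \;    # run "ls -lh" on every match
 find ~ -name "*.bak" -exec rm -i {} \;                      # interactively delete stray backup files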

But "finding things" can go so much further than that! You can not only track down the content of a file, but also its usage with commands like lsof and fuser.

Test some examples of this from the RESOURCES links.

RESOURCES

TROUBLESHOOT AND MAKE A SAD SERVER HAPPY!

Practice what you've learned with some challenges at SadServers.com:

Some rights reserved. Check the license terms here

 

INTRO

Linux has a rich set of features for running scheduled tasks. One of the key attributes of a good sysadmin is getting the computer to do your work for you (sometimes misrepresented as laziness!) - and a well configured set of scheduled tasks is key to keeping your server running well.

YOUR TASKS TODAY

  • Schedule a job to apt update and apt upgrade every day

CRON

Each user potentially has their own set of scheduled tasks, which can be listed with the crontab command (list out your user crontab entry with crontab -l and then that for root with sudo crontab -l).

However, there’s also a system-wide crontab defined in /etc/crontab - use less to look at this. Here's an example, along with an explanation:

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user  command
17 *	* * *   root	cd / && run-parts --report /etc/cron.hourly
25 6	* * *   root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6	* * 7   root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6	1 * *   root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

Lines beginning with "#" are comments, so # m h dom mon dow user command defines the meanings of the columns.

Although the detail is a bit complex, it's pretty clear what this does. The first line says that at 17mins after every hour, on every day, the credential for "root" will be used to run any scripts in the /etc/cron.hourly folder - and similar logic kicks off daily, weekly and monthly scripts. This is a tidy way to organise things, and many Linux distributions use this approach. It does mean we have to look in those /etc/cron.* folders to see what’s actually scheduled.

On your system type: ls /etc/cron.daily - you'll see something like this:

$ ls /etc/cron.daily
apache2  apt  aptitude  bsdmainutils  locate  logrotate  man-db  mlocate  standard  sysklog

Each of these files is a script or a shortcut to a script to do some regular task, and they're run in alphabetic order by run-parts. So in this case apache2 will run first. Use less to view some of the scripts on your system - many will look very complex and are best left well alone, but others may be just a few lines of simple commands.
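For today's task, you could drop a small script of your own in there - a sketch (note that apt-get is generally preferred over apt inside scripts, and that run-parts skips filenames containing a dot):

 #!/bin/bash
 # /etc/cron.daily/apt-refresh - keep the package list and packages up to date
 apt-get update && apt-get -y upgrade

Make it executable with chmod +x, and it will run along with the other daily scripts.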

Look at the articles in the resources section - you should be aware of at and anacron but are not likely to use them in a server.

Google for "logrotate", and then look at the logs in your own server to see how they've been "rotated".

SYSTEMD TIMERS

All major Linux distributions now include "systemd". As well as starting and stopping services, this can also be used to run tasks at specific times via "timers". See which ones are already configured on your server with:

systemctl list-timers

Use the links in the RESOURCES section to read up about how these timers work.
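As a taste of the format, a timer is a pair of unit files - a sketch with hypothetical names:

 # /etc/systemd/system/backup.timer
 [Unit]
 Description=Run backup.service every night

 [Timer]
 OnCalendar=daily
 Persistent=true

 [Install]
 WantedBy=timers.target

The matching backup.service defines the command to run, and sudo systemctl enable --now backup.timer switches the timer on.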

RESOURCES

Some rights reserved. Check the license terms here
