Ubuntu Hardening Guide – Basic (QGR)

In the past few weeks we have gone through setting up a LEMP stack on Ubuntu to run our WordPress site.

As this is a web server that will be exposed to the internet, we need to do some additional configuration regardless of whether it sits behind a next-gen firewall or web application firewall. Perimeter security is no longer sufficient; we need to harden the operating system to provide defence in depth.

Now you shouldn’t look at this as an all-or-nothing situation, or think that hardening the operating system is something you do once and that’s it. You can start with the basics, which greatly reduces your exposure, but hardening needs to be monitored and maintained over time.

I start with the basics and then, depending on the resources available and the value of the server/data, make improvements over time. This is a great way to learn.

Below are some resources for further information on hardening Ubuntu.

Ubuntu Hardening Wiki


NCSC Hardening Guide


UFW guide


As this is a quick guide we won’t be going into too much detail for each setting, but as always I encourage you to look into this yourself so you understand what we are doing.

In-depth SSH keys guide


Bitvise download


The steps we will take are;

  • Enable ufw. This is a built-in host-based firewall.
  • Install fail2ban. This is an intrusion prevention system (IPS) which looks for patterns to identify attacks and block the offending IP addresses.
  • Secure shared memory. Shared memory can be used to attack running services so we need to secure it.
  • Create a non-root user, and grant sudo privileges.
  • Enable key pair SSH login.
  • Disable SSH password authentication and root ssh login.
  • Disable any graphical User interface. (X11 Forwarding)
  • Disconnect idle sessions.
  • Allow/Deny users

Make a backup

Make a backup, now we can continue.

Enable ufw

This configuration is based on the assumption we are using port 22 for SSH, and have ports 80 and 443 open for web services.

ufw is installed by default, so you can just enable it without needing to install anything.

sudo ufw enable

And open the three ports we need for SSH and web traffic

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

You can check status using the below

sudo ufw status


Install fail2ban

First up, let’s install.

sudo apt-get install fail2ban

fail2ban will work right out of the box, but we can make some small adjustments. Rather than making changes directly to the default config file located at “/etc/fail2ban/jail.conf”, we can create a new file named jail.local using the following command.

sudo nano /etc/fail2ban/jail.local

Then add the following to the new file.

[sshd]
enabled = true
port = 22
filter = sshd
logpath = /var/log/auth.log
maxretry = 5

This monitors for brute-force login attempts on port 22. Now just save and close the file, then restart fail2ban

sudo systemctl restart fail2ban

Remember that this will also block YOUR IP if you fail 5 login attempts. You can adjust the time out settings in the “/etc/fail2ban/jail.conf” file.
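The options that control the timings are bantime (how long an offender is blocked, in seconds) and findtime (the window in which maxretry failures count). A sketch of a tuned jail.local might look like the below; the option names are standard fail2ban ones, but the values here are just examples to adjust to taste.

```ini
[sshd]
enabled  = true
port     = 22
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
# Example tuning: ban for 1 hour if maxretry failures
# occur within a 10 minute window.
bantime  = 3600
findtime = 600
```

Keeping the overrides in jail.local rather than jail.conf means they survive package updates.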

Protect shared memory space

We need to edit the /etc/fstab file and make a settings change.

sudo nano /etc/fstab

Now we add the following line to the bottom of the file, before saving and closing.

tmpfs /run/shm tmpfs defaults,noexec,nosuid 0 0

This will require a restart which is a simple command.

sudo reboot

Create non-root user with sudo privileges

Logging into your Linux box as root and using it as your everyday account is the same as just using the domain admin or local administrator account as a normal user account. If you don’t understand how bad this is you should go find out!

We will create a standard user account but grant it sudo privileges so you do not have to log out each time you need root. You will be able to use the ‘sudo’ command and enter a password to elevate to the needed permission level. Let’s get started.

Log in as root and run the following commands picking your own username

adduser newuser

Choose a strong password; answering the other questions is optional. Next we grant sudo to the new user.

usermod -aG sudo newuser

That’s it. If you want to change user just use (replace newuser with the desired username)

 su newuser

Enable key pair ssh login.

This is much more secure than password authentication, and again if you do not understand why, I really encourage you to go and find out. As most of you will be connecting to your server via Windows machine I’ll cover setting this up using an ssh gui client, however if you are connecting from a Linux box follow the link in the resources section for a quick way to perform this process via command line.
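For Linux clients, a minimal sketch of that command-line route looks like the below. The key file name and passphrase are placeholders; pick your own.

```shell
# Create the .ssh directory if needed, then generate a 4096-bit RSA
# key pair protected by a passphrase (both values are placeholders).
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_example -N 'a-strong-passphrase' -q
# The public half is what ends up in authorized_keys on the server:
cat ~/.ssh/id_rsa_example.pub
```

You can then copy the public key to the server with ssh-copy-id, or paste it into authorized_keys manually as the Windows walkthrough describes.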

I’m a fan of Bitvise ssh (link in resource section above) and from here we can easily create our key pair from the login tab and selecting “Client key manager”.

Generate new as shown

Make note of the profile number, and choose a strong passphrase. You can choose a larger key size if you wish but do not go lower.

Now below we can see our new key, and the export button to create a file of our public key.

Export as shown below and remember where you have saved it.

Now log in to your Ubuntu machine using your new account and check for the .ssh directory; if it doesn’t exist we need to create it. The following command will create the directory if it does not exist and do nothing if it does.

mkdir -p ~/.ssh

Now browse here using cd and locate the “authorized_keys” file.

cd ~/.ssh

If the “authorized_keys” file is not there, create it (no sudo needed, it lives in your own home directory)

nano authorized_keys

Now go back to your exported file on your local machine and paste the contents into the new file on your Ubuntu machine. This file should start with “ssh-rsa”. Then save and close the file.

Now we set permissions, making sure you replace “newuser” with your own username

chmod -R go= ~/.ssh
chown -R newuser:newuser ~/.ssh
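As a quick sanity check, the directory should now be accessible by your user only. A small sketch (re-running the permission change so it is self-contained):

```shell
# Remove group/other access, then confirm the mode.
# 700 = rwx for the owner, nothing for anyone else.
mkdir -p ~/.ssh
chmod -R go= ~/.ssh
stat -c '%a' ~/.ssh    # expect 700, assuming your user owns the directory
```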

Now we need to test that this works, so disconnect from the remote session and attempt to login using public key authentication instead of a password. In Bitvise select the correct profile, and pubkey as shown. Obviously you will need the username and host address to connect.

The first time you connect you will receive a warning about key verification; as long as you are sure you are connecting to the correct host, you can accept it the first time you connect. Should you ever see this warning again when connecting to this machine, you should verify the keys to ensure you are not a victim of malicious activity.

You will then be prompted for the key pair passphrase (not your login password); enter this and you should be logged in. Well done. Now we need to disable password authentication and root ssh login.

Disable password authentication and root ssh login

This is a quick and simple one, we just need to edit the sshd_config file and set the options to no.

sudo nano /etc/ssh/sshd_config
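The two directives to find (or add, if they are commented out) are the standard OpenSSH ones below; set both to no.

```text
PasswordAuthentication no
PermitRootLogin no
```

Make sure your key login works before restarting sshd with these set, or you will lock yourself out.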

Disable any graphical User interface. (X11 Forwarding)

This is set in the same file, find the option and set to no
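The line to look for is:

```text
X11Forwarding no
```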

Disconnect idle sessions

Again, in the same file change the following settings

This setting will check once after 15 minutes of inactivity and close the connection. If you want a longer interval just increase the setting which is in seconds.
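A common way to get that behaviour is with the two OpenSSH keepalive directives below; the values are examples matching the 15-minute description above (900 seconds, a single check).

```text
ClientAliveInterval 900
ClientAliveCountMax 0
```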

Allow/Deny users

Again in the same file we can provide an allow list of users who are permitted to remotely connect over ssh to the machine. You will need to add this manually to the bottom of the file.
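The directive is AllowUsers, followed by a space-separated list of usernames; the names here are placeholders for your own.

```text
AllowUsers newuser anotheradmin
```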

Make sure there are no typos and you have added everyone you need to as once this is set if not on the list you will not be able to login via ssh and may be completely locked out of your server. Double check before loading the new config.

To load the new settings we need to run the following commands. First, check the config for syntax errors:

sudo sshd -t

If you receive an error, go back and check your changes as you have made a mistake somewhere. If not, restart the service to apply the changes.

sudo service sshd restart

Well done, in the future we will look at more advanced hardening techniques.

Update nginx to latest version

Believe it or not, if you install nginx on Ubuntu 18.04 using the default repositories you get nginx version 1.14! This version is no longer maintained; it’s like installing Office 2003 on your new Windows 10 machine, crazy right?

If you have used our quick guide for installing WordPress on the LEMP stack then this is the version you will have installed.

Let’s show how to fix it then….

First up, let’s add the repository by adding a new .list file

sudo nano /etc/apt/sources.list.d/nginx.list

Then add the following lines to tell our install where to go to get the latest version

deb [arch=amd64] http://nginx.org/packages/mainline/ubuntu/ bionic nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ bionic nginx

CTRL X to exit, then Y to confirm changes and hit enter.

Next we need the nginx public key, so run the following to download it

wget http://nginx.org/keys/nginx_signing.key

Then add the key

sudo apt-key add nginx_signing.key

You should get an “OK” message in the console.

This is where it gets scary, so if you haven’t backed up, do it now. NO… do it now!

We remove all the current version components, but first we make a copy of our config file, just in case.

sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.old

If you are particularly paranoid you can check it’s there

cd /etc/nginx/
ls

Now run “cd” on its own to get back to your home directory, and we can remove the old version of nginx, but first let’s update apt.

sudo apt-get update
sudo apt remove nginx nginx-common nginx-full nginx-core

‘Y’ to continue, and away it goes

We do not even need to reboot, how cool is that? (This ain’t Windows!) so let’s install the latest version

sudo apt-get install nginx -y

You should get no errors as shown below

Run the next two commands individually

sudo systemctl start nginx
sudo systemctl enable nginx

Then check your version

If you have reinstalled version 1.14, then you did not run “sudo apt-get update” after adding the key and the new sources list, doh! (Not that I’ve ever done that!)

We are not quite there yet, especially if you have a website already running on the server.

There are 2 additions we need to make if we are using the standard config. (If you have used guides on this site then you will need to do this)

We need to make two changes in the nginx.conf file

sudo nano /etc/nginx/nginx.conf

in this file we need to change “user nginx;” to “user www-data;” which is right at the top.

Then we need to add the following near the bottom of the file, inside the http block: “include /etc/nginx/sites-enabled/*;”

Both shown below
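Sketching the two edits (line positions assume the stock nginx.org nginx.conf layout; the http block runs to the bottom of the file):

```nginx
user www-data;    # was: user nginx;

http {
    # ... existing settings unchanged ...

    include /etc/nginx/sites-enabled/*;   # added inside the http block
}
```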

Then we reload nginx and we’re done.

sudo systemctl reload nginx

Well done. If it all breaks, then I really encourage you to retrace your steps and try to resolve the issue. It’s already broken, so who cares if you break it more, right? And if it’s that bad, that’s why we made a backup. You did make a backup… right?

How to install WordPress on LEMP. (QGR)

This is gonna be the first of a “quick guide with resource links” series where I’ll pick a subject and just post quick links to guides, warn of pitfalls you may encounter, or things you still need to do. (this is for my own future reference as well!)

Hello there… so… recently I had to migrate this site to the cloud after years of being hosted in my home lab, and thought I’d do a quick write-up while I’m at it. Previously I have posted step-by-step guides for installing my fave stack, but there are so many really good guides that I’m just gonna direct you to the best ones out there for the initial install.

Digital Ocean have some great guides online and are generally where I look first for anything LEMP related. I did have a look at Apache, but I do still prefer nginx as my webserver. There is a good article here; https://kinsta.com/blog/nginx-vs-apache/ if you want to delve into comparisons.

Don’t just follow these word for word; you will need a little bit of knowledge to tailor them to your instance if not using Digital Ocean hosting. If you have followed previous guides on my site you will have enough knowledge to follow these. Not to say these aren’t good enough for absolute beginners, but if you follow blindly with no basic concept of LEMP, Ubuntu or Linux you will come unstuck.

Links To Resources

The list of articles below are in order of install;

Initial Server Set-up post-install


Installing the LEMP stack


Installing WordPress on the LEMP stack


Setting up your SSH keys



  • Make sure you note any user account names or passwords you create.
  • Double check, then check again before you disable SSH root login, or password authentication. (You’ll only do it once if you lock yourself out of your own server!)
  • Make a backup before each step. You will mess up, or something will go wrong, just make sure you do not have to start from scratch!

Be aware

  • If you publish a WordPress site on the internet, then within minutes it will be scanned and you will see brute force login attempts. (Even if it is just to blog about the progress of mold on your shower curtain)
  • There is still more to do to ensure your site has a decent level of protection
  • If you create a private key, (for anything) make, make, make sure you keep this private. Don’t make loads of copies and forget to delete them.
  • Don’t assume because you ran “apt update” that you have the latest version of any software. Check manually – We will cover this in the future regarding nginx. (Now up) https://info.2code-monte.co.uk/2020/05/02/update-nginx-to-latest-version/

That’s it for this, I may come back and add or update if I find something else which helps.

Don’t ignore download warnings

In the next of our short videos we show why download warnings should not be ignored. We are using a Windows 7 machine just for ease, this will also work in Windows 10 (I haven’t gotten around to updating all my test victim machines yet!)

When you are browsing the internet and trying to find what you are looking for; one thing you can guarantee is that there will be thousands of malicious sites pretending to be the website you need.

Here our user is looking for some free software to play a video file, and after a google search goes to a site they think will have what they want. The site prompts that they need to update their browser; how responsible of them to make sure I am up to date. The update is downloaded, and the browser warns there is an issue with it. However, the user is impatient and just wants the software, so they ignore the warning and install it. That’s why they have anti-virus, right? If it is malicious, that’s what it’s there for.

They continue with the download and run the file, and on the left-hand machine you will see (as you have seen in previous videos) just how quickly this happens, and just how quickly the cyber criminal can take screenshots, pop messages up on the screen, and control the machine, which we show by launching Windows programs such as calculator and notepad.

The importance of updating

Another quick video to show just how quickly a server can be compromised and taken over completely by an attacker.

In this video we have a server running an out-of-date, un-patched application, which gives the attacker a way onto the server. The attacker then dumps and cracks the password hashes, which gives them persistent remote access (using ssh) to the system. The attacker can then continue to access the server for whatever purpose they wish.

Then the attacker changes the root (admin) password, potentially resulting in no one else having admin access to the system, allowing them to hold the system to ransom, threaten to take it offline to disrupt the business function, or continue to search and remove data unhindered.

This all happens in under 4 minutes. Always stay as up to date with versions and patches as possible.

The Importance of Encryption. Simple Demo

This is a quick video which shows in a very basic way how important encryption is. It is important to practice defense in depth, so even if an attacker manages to gain persistence on your network and is able to “man-in-the-middle” your network connections, encryption gives another layer of protection, meaning communication is not in clear text, preventing login credentials from being captured.

It’s important to remember that just because an attacker has gained a foothold does not mean they can stay there or actually do anything. If standard user permissions are well controlled, then the attacker will need to elevate their privileges. One way of doing this is capturing passwords.

The more steps an attacker needs to take to carry out their intended actions, the more chance you will have to detect them on the network.

Here we simulate an IT engineer logging in to a server terminal session, showing how encryption protects the connection compared to telnet, which communicates in clear text.

Windows Event Forwarding Additional Configuration and Fine Tuning. (Free SIEM part 5)

We are going to quickly touch on something which frustrated me for a short while and it is related to the default configuration used by “WECUTIL” when setting up WEF (Windows Event Forwarding).

Previously I had always forwarded logs from my endpoints into Graylog using either nxlog/syslog or OSSEC so had never had this issue before. I noticed after setting up WEF that my logs in Graylog did not contain the full message field which it always had previously. At this point I’d like to point out that I do not need this field in all my logs however it is nice to have in some cases so I wanted to look at why.

It was due to the default setting of WECUTIL when setting up WEF. It is set to “RenderedText”. When this is set the messages for our test domain appear as below.

To get the full message we need to run the following command on the Event Log Server from an elevated Powershell window. (Make sure to replace “name of subscription” with the name of your own subscription.) You can run the command without specifying a subscription name, but I don’t recommend doing this as it may create a hell of a lot of traffic and crash your network. Do a test first if you want to enable this: create a new subscription for a single event ID, then apply this change and monitor. Only if you are happy should you roll this out to all machines.

wecutil ss "name of subscription" /cf:Events

If this causes issues you can roll back using the command below;

 wecutil ss "name of subscription" /cf:RenderedText

Let’s assume all is OK after it is enabled and take a look at the differences in the forwarded messages in Graylog.

As I said previously, this is useful in some cases depending on your setup, and if you are sending them to a SIEM or not. I just thought I’d show that this can be configured natively in Windows if required. It was something I did not know about so it might help someone else.

Set Up Windows Event Forwarding with Sysmon using Group Policy. (Free SIEM Part 3)

This is the third tutorial in the “Free SIEM” series.

Today the aim is to set up log forwarding to a central log Server from all our end points with Group Policy, and as an added bonus we are going to forward all Sysmon logs as well.

For the topology we have a Domain Controller (DC), and separate Event Log collector server (EL), and other Windows Desktops on the domain (WD).

First we open Group Policy Management Console on our DC, to create a new GPO for our forwarding rules. For the purpose of this tutorial our test domain is named “glitchcorp.co.uk”, wherever you see this you should replace with your own FQDN.

Our new Policy is named “Event Forwarding”

Go to “Computer Configuration/Policies/Administrative Templates/Windows Components/Event Forwarding” to create our Target Subscription – basically the log server which will be collecting all the forwarded logs (EL). Right-click the highlighted option.

Enable the setting and then copy the highlighted text and add your server details and set the final option (Refresh=) to 60 as shown.

Save the configuration. Now we set permissions for the Security log to ensure it can be read. Go to; “Computer Configuration/Policies/Administrative Templates/Windows Components/Event Log Service/Security” Right-click and “edit”

Enable the setting, but we need the permission string for the “Log Access” box. For this we need to open Powershell.

We use “wevtutil.exe” to get the existing permissions and add the new account to the end. Run the command below, then copy the string that is returned. Paste this into your “Log Access” box, but add either (A;;0x1;;;NS) or (A;;0x1;;;S-1-5-20) at the end. This will give “NETWORK SERVICE” read access to the logs. (NOTE: Due to the way Sysmon works this will not grant access to Sysmon logs. We will set this in the Registry using a different method.)
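The exact query is in the screenshot, but the usual way to read the Security log’s configuration with wevtutil is the command below; the channelAccess line in its output holds the SDDL string to copy.

```powershell
# "gl" = get-log: dumps the Security channel's settings,
# including the channelAccess SDDL string.
wevtutil gl Security
```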

Save your settings. Next we go to “Computer Configuration/Policies/Windows Settings/Security Settings/Restricted Groups” Right-click and Add Group as shown.

Then add the members as shown. (You only need one entry for the NETWORK SERVICE but I had some issues so added both ways here then saved. If it identifies both without issues, then keep “NT AUTHORITY\NETWORK SERVICE”, and remove the other). Save your settings.

Now we make the Registry change for Sysmon log permissions.

Go to; “Computer Configuration/Preferences/Windows Settings/Registry” and Right-click to add a new Registry Item.

Complete as shown. The full path is shown below, and the Value data is the same as we used earlier.

This is the “key path”

This is a reminder of the Powershell query

Paste this into your “Value Data” box but at the end add either (A;;0x1;;;NS) or (A;;0x1;;;S-1-5-20) as before. This will give the “NETWORK SERVICE” read access to the logs. Save your settings.

NOTE: Don’t run the below command. This is just to show what the Registry entry is doing and give you some understanding. You could run this command if you were forwarding logs from a single machine, but in a large environment you should use Group Policy to avoid using lots of scripts, or running the same thing over and over on each individual machine.

OK, so we have now set up the log forwarding location and the permissions required; now we need to ensure the required services are running on the source computers on the Domain so they can forward the logs to our collecting server.

Browse to; Computer Configuration/Preferences/Control Panel Settings/Services/

Right-click and select New Service

Complete as shown

Save your settings then do the same for Sysmon.

You should have 2 entries as shown.

Now we need to configure the Firewalls to listen, and allow them to “push” the Event Logs to the EL server.

Go to; “Computer Configuration/Policies/Administrative Templates/Windows Components/Windows Remote Management/WinRM Service” and right-click the highlighted options.

Configure as shown.

Save your settings. Now we go to “Computer Configuration/Policies/Windows Settings/Security Settings/Windows Firewall with Advanced Security”

Right-click “Inbound Rules” New Rule

Complete each box as shown

That’s the GP work completed, so let’s open Powershell on the DC and update the domain policy. You can also run this command on any endpoints you want to ensure are up to date with these settings, ready for testing.
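The command itself sits in the screenshot; forcing a policy refresh is normally done with:

```powershell
# Re-apply all computer and user policies immediately,
# rather than waiting for the next refresh interval.
gpupdate /force
```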

If you want to test that an endpoint is receiving the new policy you can use the command below. You can see under “Applied Group Policy Objects” our “Event Forwarding” policy is there.
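Again the exact command is in the screenshot, but a typical way to list the applied GPOs on an endpoint is:

```powershell
# Summarises Resultant Set of Policy, including the
# "Applied Group Policy Objects" section mentioned above.
gpresult /r
```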

NOTE: Each of the endpoints you will be sending logs from may need to have the following command run from an elevated Powershell window “WinRM quickconfig”.

It all depends on what OS you are running, but if it is already running, this command will not do any harm. If running Win 10/Server 2012 R2 it should already be running.

We head over to our EL server now and start to complete the set up on the collector. Run the below from an elevated powershell window.
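The command shown in the screenshot is not reproduced here, but the standard way to quick-configure a machine as an event collector is:

```powershell
# "qc" = quickconfig: enables the Windows Event Collector
# service and sets it to start automatically.
wecutil qc
```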

Then open Event Viewer

Let’s create our first subscription. Right-click and create a new Subscription.

That’s the Sysmon Subscription sorted, now we need one for the other Windows logs.

Right-click and repeat with different settings this time as shown.

Enable both Subscriptions so they have the green tick.

You can right-click each one and check “Runtime status” this will show a list of connected machines.

Now go to “Forwarded Events” and watch all your logs come through. Make sure you are seeing entries for “Sysmon”, “Application”, “Security”, “Setup” and “System”. (Although in my screen shot all you can see is Sysmon lol!)

Congratulations! Yes it’s a bit of a slog but it is worth it. Make sure you come back for part 4.

Important Event IDs you should be monitoring in Windows.

4756 – A member was added to a security-enabled universal group
4740 – A user account was locked out
4735 – A security-enabled local group was changed
4732 – A member was added to a security-enabled local group
4728 – A member was added to a security-enabled global group
4724 – An attempt was made to reset an account's password
4648 – A logon was attempted using explicit credentials
4625 – An account failed to log on
1102 – The audit log was cleared
4624 – An account was successfully logged on
4634 – An account was logged off
5038 – Detected an invalid image hash of a file
6281 – Detected an invalid page hash of an image file
1000 – Application error
1002 – Application hang/crash
1001 – Application error, fault bucket
104 – Event log cleared
1102 – The audit log was cleared
4719 – System audit policy was changed
6005 – Event log service stopped
7022 – 7026 – Windows service fails or crashes
7045 – A service was installed in the system
4697 – A service was installed in the system
104 – Event log was cleared
6 – New kernel filter driver
2005 – A rule has been modified in the Windows Firewall exception list
2004 – Firewall rule added
Firewall rules deleted
23 – Session logoff succeeded
24 – Session has been disconnected
25 – Session reconnection succeeded
1102 – Client has initiated a multi-transport connection

Install Graylog 3 on Ubuntu 18.04 (Free SIEM Part 2)

Hello all, this is part of a new series of posts which will show you how to set up a free centralised logging solution for any environment.

After much trial and error I think I’m set on using Graylog, Windows Event forwarding, Sysmon, and OSSEC/Wazuh.

All the official documentation for Graylog can be found here: Graylog Docs

Ubuntu is still my favourite flavour of Linux so we will be starting with the base install of Server version 18.04.

Let’s get started, as always we start by updating the repository

sudo apt-get update

And if required upgrade your install. (If you are starting with a fresh install  but didn’t tick “download updates from the internet” you will need to do this)

sudo apt-get upgrade

Now we are up to date, let’s start by installing the dependencies. First up are these four packages; make sure you do all these steps in order or it will not work.

sudo apt-get install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen

If you get no errors when installing we move on to installing mongodb from the official repository.

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org

If again you receive no errors, we move on to enabling it on start up.

sudo systemctl daemon-reload
sudo systemctl enable mongod.service
sudo systemctl restart mongod.service

Graylog recommends using Elasticsearch version 6. You can find the installation guide here if you need to refer to it, but you can install it using the following. (This is not the latest version of Elasticsearch; newer versions are not supported by Graylog, so don’t be tempted to try them)

 wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - 
 echo "deb https://artifacts.elastic.co/packages/oss-6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list 
 sudo apt-get update && sudo apt-get install elasticsearch-oss 

Before we can configure and start Elasticsearch we need to edit the configuration file which is located at “/etc/elasticsearch/elasticsearch.yml”

We cd to the correct directory

cd /etc/elasticsearch

Then open the file

sudo nano elasticsearch.yml

then find the following line, remove the ‘#’ to uncomment the line and set the cluster.name property to “graylog” as shown below.

cluster.name: graylog

You also need to add the below to the config file.

 action.auto_create_index: false 

Now start Elasticsearch, and enable it at startup.

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl restart elasticsearch.service

Now we are ready to install Graylog, cd into your download or tmp directory and download the latest repo config.

 wget https://packages.graylog2.org/repo/packages/graylog-3.0-repository_latest.deb

First we install the repository package, then install graylog using apt.

sudo dpkg -i graylog-3.0-repository_latest.deb
sudo apt-get update && sudo apt-get install graylog-server

Now don’t get carried away, because there is still a bit of work to do before graylog will start.

All the instructions are contained in the following file: “/etc/graylog/server/server.conf”

we can open it directly using the following;

sudo nano /etc/graylog/server/server.conf

Take the time to read through the instructions; it will help you understand a little of what you are doing. With that in mind, let’s continue. Exit nano using CTRL and X.

First we create our “password_secret” from the command line, using the command below to generate a random 96-character secret.

pwgen -N 1 -s 96

Then open the config again and paste the resulting secret into the file after “password_secret = ”

sudo nano /etc/graylog/server/server.conf

Save and exit. Next we create our “root_password_sha2” (remember the password, as you will need it to log in to graylog later on) in a similar way from the command line.

You could run “echo -n yourpasswordhere | shasum -a 256” as suggested in the config file, however the online guidance is to use the below.

echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1

Copy and paste this new hash value into the server.conf file after “root_password_sha2”
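If you’d rather not type the password interactively, a non-interactive variant works too; the password here is a placeholder for your own.

```shell
# Hash a placeholder password with SHA-256; the 64-character hex
# digest is what goes after root_password_sha2 in server.conf.
printf '%s' 'changeme-placeholder' | sha256sum | cut -d' ' -f1
```

Be aware this variant leaves the password in your shell history, so clear it afterwards.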

OK, so now we will be connecting to graylog over http. To be able to use https we need to configure a proxy server, which won’t be covered here, so always connect over a VPN if in production and you are not using https. Don’t make the web interface externally available. To configure https have a look at the docs here

You should also enable the host firewall to only allow ports 22, 9000, and 8514; however, don’t enable it yet. Get everything set up and confirmed as working, then enable your firewall, as we will show later.

To configure the web interface we need to set the bind address in the same server.conf file. (In older Graylog versions this was split across the “rest_listen_uri” and “web_listen_uri” options; in Graylog 3 the single “http_bind_address” option covers both.)

Get the IP of your server with the ifconfig command, then paste it into the location shown below, and make sure the line doesn’t have a ‘#’ at the start meaning it is commented out. If the ‘#’ is there, remove it. This sets both the Web interface and REST API options.

http_bind_address = yourIPaddress:9000

Save and close the file. If you want more information on configuring the web interface see the documentation here

All that’s left to do is start and configure graylog to enable at startup

sudo systemctl daemon-reload
sudo systemctl enable graylog-server.service
sudo systemctl start graylog-server.service

That’s it, give your server a restart with the following

sudo shutdown now -r

Browse to “http://yourIPaddress:9000/” and you should be greeted with the following login box. If not, try manually restarting all the services (mongod, graylog and elasticsearch) using the steps in this guide and see if that resolves it. If not, you’ve done something else wrong!

Now we know we can connect, let’s enable the firewall

sudo ufw enable

And open the 2 ports we need for connecting to it

sudo ufw allow 22
sudo ufw allow 9000

You can check status as below

sudo ufw status

You can also check the status of graylog as shown below

sudo systemctl status graylog-server.service

If you have any issues you can use the following command to view the logs and look for clues.

 sudo tail -f /var/log/graylog-server/server.log 

Come back for the next part as we setup a complete SIEM and logging system.