The Conscience of a Hacker

==Phrack Inc.==

Volume One, Issue 7, Phile 3 of 10

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The following was written shortly after my arrest...

\/\The Conscience of a Hacker/\/

by

+++The Mentor+++

Written on January 8, 1986
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

Another one got caught today, it's all over the papers. "Teenager
Arrested in Computer Crime Scandal", "Hacker Arrested after Bank Tampering"...
 Damn kids. They're all alike.

But did you, in your three-piece psychology and 1950's technobrain,
ever take a look behind the eyes of the hacker? Did you ever wonder what
made him tick, what forces shaped him, what may have molded him?
 I am a hacker, enter my world...
 Mine is a world that begins with school... I'm smarter than most of
the other kids, this crap they teach us bores me...
 Damn underachiever. They're all alike.

I'm in junior high or high school. I've listened to teachers explain
for the fifteenth time how to reduce a fraction. I understand it. "No, Ms.
Smith, I didn't show my work. I did it in my head..."
 Damn kid. Probably copied it. They're all alike.

I made a discovery today. I found a computer. Wait a second, this is
cool. It does what I want it to. If it makes a mistake, it's because I
screwed it up. Not because it doesn't like me...
 Or feels threatened by me...
 Or thinks I'm a smart ass...
 Or doesn't like teaching and shouldn't be here...
 Damn kid. All he does is play games. They're all alike.

And then it happened... a door opened to a world... rushing through
the phone line like heroin through an addict's veins, an electronic pulse is
sent out, a refuge from the day-to-day incompetencies is sought... a board is
found.
 "This is it... this is where I belong..."
 I know everyone here... even if I've never met them, never talked to
them, may never hear from them again... I know you all...
 Damn kid. Tying up the phone line again. They're all alike...

You bet your ass we're all alike... we've been spoon-fed baby food at
school when we hungered for steak... the bits of meat that you did let slip
through were pre-chewed and tasteless. We've been dominated by sadists, or
ignored by the apathetic. The few that had something to teach found us will-
ing pupils, but those few are like drops of water in the desert.

This is our world now... the world of the electron and the switch, the
beauty of the baud. We make use of a service already existing without paying
for what could be dirt-cheap if it wasn't run by profiteering gluttons, and
you call us criminals. We explore... and you call us criminals. We seek
after knowledge... and you call us criminals. We exist without skin color,
without nationality, without religious bias... and you call us criminals.
You build atomic bombs, you wage wars, you murder, cheat, and lie to us
and try to make us believe it's for our own good, yet we're the criminals.

Yes, I am a criminal. My crime is that of curiosity. My crime is
that of judging people by what they say and think, not what they look like.
My crime is that of outsmarting you, something that you will never forgive me
for.

I am a hacker, and this is my manifesto. You may stop this individual,
but you can't stop us all... after all, we're all alike.

+++The Mentor+++

 

DynDNS Pt.II – Gandi LiveDNS REST API

This is the follow-up post to my previous post dynamic-dns-update-domain-record-yourself.

My registrar Gandi.net is updating their website and has also created a new domain record system which can be accessed through a new API. So the old post is already outdated if you have switched your domain to the new DNS system.

So I had to write a new update script, gandi-live-dns, to keep my various subdomains at home up to date whenever my ISP decides the IP needs to change.

The API is described here in the docs and works well enough, regardless of its beta state.

API Key

Start by retrieving your API key from the “Security” section in the new account admin panel to be able to make authenticated requests to the API.

Then make sure you replace “XXX” with your API Key in all the following examples.

API curl example – find your UUID / zone file ID

The zone file logic has been changed to UUID-based zone files. The UUID can easily be queried with curl.

Info on domain “DOMAIN”

  • GET /domains/<DOMAIN>:

    curl -H 'X-Api-Key: XXX' https://dns.beta.gandi.net/api/v5/domains/<DOMAIN>
    

    return:

    {"zonedata": "/zones/<UUID>/records",
     "fqdn": "<DOMAIN>",
     "zone": "/zones/<UUID>"}
    

New Update Script

The script is named gandi-live-dns and is available on GitHub under the GPLv3 license. It’s written in Python and uses requests. Subdomain records that do not exist yet are created automatically. If the WAN IP of my home router differs from the IP in my subdomain records, the script updates all configured subdomains via the REST API.
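
For completeness, here is roughly what such an update call looks like against the beta endpoint. This is a sketch based on the LiveDNS docs, not the script's exact code; the zone UUID, the subdomain name "dynamic", the TTL and the IP are placeholders:

    curl -X PUT -H 'X-Api-Key: XXX' \
         -H 'Content-Type: application/json' \
         -d '{"rrset_ttl": 300, "rrset_values": ["203.0.113.42"]}' \
         https://dns.beta.gandi.net/api/v5/zones/<UUID>/records/dynamic/A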

Detailed descriptions and install/usage instructions are available in the README. Feel free to open an issue if you have questions, or a pull request for improvements.

 

systemd service file – start service on boot

There’s a lot of discussion going on about systemd, ignoring the fact that it’s already here, so let’s face it and use it.

Running a Node.js server as an init service

In this case I’d like to show how to run CryptPad (install howto in the previous post) as an init process with systemd.

Create a cryptpad.service file in the systemd folder, for example /etc/systemd/system/cryptpad.service

root@cryptpad:/etc/systemd/system# vi cryptpad.service
[Unit]
Description=Cryptpad Server
After=network.target

[Service]
Type=simple
User=cryptpad
WorkingDirectory=/home/cryptpad/cryptpad
ExecStart=/usr/bin/node /home/cryptpad/cryptpad/server.js
Restart=always
# Restart service after 10 seconds if node service crashes
RestartSec=10
# Output to syslog
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=nodejs-cryptpad
Environment=NODE_ENV=production 

[Install]
WantedBy=multi-user.target

Reload the systemd service files.

root@cryptpad:/etc/systemd/system# systemctl daemon-reload

Start the server now with systemctl (the old service command still works).

root@cryptpad:/etc/systemd/system# systemctl restart cryptpad
root@cryptpad:/etc/systemd/system# systemctl status cryptpad 
* cryptpad.service - Cryptpad Server
 Loaded: loaded (/etc/systemd/system/cryptpad.service; enabled; vendor preset: enabled)
 Active: active (running) since Sun 2017-07-30 13:11:21 CEST; 16s ago
 Main PID: 341 (node)
 Tasks: 10 (limit: 4915)
 CGroup: /system.slice/cryptpad.service
 `-341 /usr/bin/node /home/cryptpad/cryptpad/server.js

Jul 30 13:11:21 cryptpad systemd[1]: Started Cryptpad Server.
Jul 30 13:11:22 cryptpad nodejs-cryptpad[341]: loading rpc module...
Jul 30 13:11:22 cryptpad nodejs-cryptpad[341]: [2017-07-30T11:11:22.105Z] server available http://[::]:3000
Jul 30 13:11:22 cryptpad nodejs-cryptpad[341]: Cryptpad is customizable, see customize.dist/readme.md for details

To enable starting at boot:

root@cryptpad:/etc/systemd/system# systemctl enable cryptpad
Created symlink /etc/systemd/system/multi-user.target.wants/cryptpad.service -> /etc/systemd/system/cryptpad.service.

 

Test the restart functionality:

Find out the PID of your CryptPad process:

root@cryptpad:/etc/systemd/system# netstat -tulpn | grep 'Active\|Proto\|node' 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name 
tcp6 0 0 :::3000 :::* LISTEN 463/node

or

root@cryptpad:/etc/systemd/system# ps -ef | grep server.js 
cryptpad 463 1 0 Jul22 ? 00:00:01 /usr/bin/node /home/cryptpad/cryptpad/server.js

 

Kill the process by its PID reported above:

root@cryptpad:~# kill 463

Then wait a few seconds and the process will reappear with a new PID.

root@cryptpad:~# ps -ef | grep server.js 
cryptpad 510 1 0 Jul22 ? 00:00:01 /usr/bin/node /home/cryptpad/cryptpad/server.js

It should also reappear after a reboot without problems.
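
To watch the service during such a restart, you can follow its log output with journalctl; the unit name is the one from the service file created above.

root@cryptpad:~# journalctl -u cryptpad.service -f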

https://scottlinux.com/2014/12/08/how-to-create-a-systemd-service-in-linux-centos-7/

https://github.com/xwiki-labs/cryptpad/issues/62

https://askubuntu.com/questions/676007/how-do-i-make-my-systemd-service-run-via-specific-user-and-start-on-boot

http://www.devdungeon.com/content/creating-systemd-service-files

https://www.axllent.org/docs/view/nodejs-service-with-systemd/

cryptpad installation on Debian Stretch

CryptPad is a zero-knowledge realtime collaborative editor. Encryption carried out in your web browser protects the data from the server, the cloud, and the NSA. The project uses the CKEditor visual editor and the ChainPad realtime engine. The secret key is stored in the URL fragment identifier, which is never sent to the server but is available to JavaScript, so by sharing the URL you give authorization to others who want to participate.

The code is hosted on GitHub and the license is AGPLv3.

Collaborate in Confidence. Grow your ideas together with shared documents while Zero Knowledge technology secures your privacy; even from us.

CryptPad combines various functions in a single application:

  • pad
  • paste
  • drive
  • poll
  • whiteboard
  • presentations
  • contacts

 

CryptPad can be used with a registered username and password, or without registering at all.

Install NodeJS v6.x on Debian 9 Stretch

Log in to your server as root to install the missing packages (Node.js install steps from linuxconfig.org).

root@cryptpad:~# apt-get install curl git-core
root@cryptpad:~# curl -sL https://deb.nodesource.com/setup_6.x | bash -
root@cryptpad:~# apt install nodejs
root@cryptpad:~# node -v
v6.11.1

Now the Node.js v6.x requirement from the official install instructions should be met.

Install Cryptpad

First create a dedicated user for your CryptPad server and add your public SSH key to its authorized_keys.

root@cryptpad:~# useradd cryptpad -d /home/cryptpad/ -s /bin/bash
root@cryptpad:~# mkdir /home/cryptpad
root@cryptpad:~# chown cryptpad:cryptpad /home/cryptpad
root@cryptpad:~# passwd cryptpad
root@cryptpad:~# su cryptpad
cryptpad@cryptpad:/root$ cd
cryptpad@cryptpad:/home/cryptpad$ 
cryptpad@cryptpad:/home/cryptpad$ mkdir .ssh
cryptpad@cryptpad:/home/cryptpad$ chmod 0700 .ssh
cryptpad@cryptpad:/home/cryptpad$ cd .ssh
cryptpad@cryptpad:/home/cryptpad/.ssh$ vi authorized_keys

Now log in with your newly created user and install CryptPad.

cryptpad@cryptpad:~$ git clone https://github.com/xwiki-labs/cryptpad.git
cryptpad@cryptpad:~$ cd cryptpad
cryptpad@cryptpad:~/cryptpad$ npm install
cryptpad@cryptpad:~/cryptpad$ npm install -g bower ## if necessary
cryptpad@cryptpad:~/cryptpad$ bower install

cryptpad@cryptpad:~/cryptpad$ cp config.example.js config.js

CryptPad should work with an unmodified configuration file, though there are many things which you may want to customize. Attributes in the config should have comments indicating how they are used.

cryptpad@cryptpad:~/cryptpad$ vi config.js

Run your node

cryptpad@cryptpad:~/cryptpad$ node ./server.js 
loading rpc module...

[2017-07-30T08:58:36.817Z] server available http://[::]:3000
Cryptpad is customizable, see customize.dist/readme.md for details

That's it, your CryptPad should now be available on port 3000 on your server.
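
A quick sanity check from the server itself (a sketch, assuming curl is installed) is to request the port locally and look at the response headers:

cryptpad@cryptpad:~/cryptpad$ curl -sI http://localhost:3000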

Do not forget to use HTTPS, for example with a reverse proxy: an Apache SSL vHost, nginx, or HAProxy SSL termination.

In future posts I'll show how to use a separate storage backend, put the file storage into different directories, and run the Node.js application as a systemd service.

The Laughing Man

J.D. Salinger – The Catcher in the Rye

I figured I could get a job at a filling station somewhere, putting gas and oil in people's cars. I didn't care what kind of job it was, though. Just so people didn't know me and I didn't know anybody. I thought what I'd do was, I'd pretend I was one of those deaf-mutes. That way I wouldn't have to have any goddamn stupid useless conversations with anybody. If anybody wanted to tell me something, they'd have to write it on a piece of paper and shove it over to me. They'd get bored as hell doing that after a while, and then I'd be through with having conversations for the rest of my life. Everybody'd think I was just a poor deaf-mute bastard and they'd leave me alone. (from Chapter 25))

 

Install Wallabag 2.x on Debian Stretch

Wallabag is a self-hostable application that lets you stop missing content: click, save, and read it when you can. It extracts the article content so you can read it when you have time, without ads and nagging JavaScript popups.

There are just a few steps necessary to run wallabag on your own Linux box. If you do not want to, or do not have the time or skill to run your own wallabag instance, they provide a hosted service with reasonable pricing at wallabag.it.

This application was already really good with v1.x and has improved again with v2.x, so feel free to spread the word about it. Its direct (and proprietary) competitor has been bought and integrated into Firefox. As the development is funded from the hosted service, show your appreciation for this great piece of free software and leave a tip on Liberapay or Bountysource if you run your own wallabag.

Self-hosted Installation

I’ll cover the install on Debian Stretch with an external MariaDB and Apache 2.4, as this is my most-used setup and the traditional LAMP stack; I just separate it into different LXC containers, which run smoothly on my Proxmox host. SQLite and PostgreSQL are also available as database backends, and nginx or lighttpd settings are covered in the documentation.

The wallabag documentation lists some dependencies which need to be installed for wallabag to work properly.

The PHP version needs to be at least 5.6; PHP 7 also works and is available in Debian Stretch as the package php7.0 (7.0.19-1).

Composer is now also available in the repository and does not need to be installed manually.

root@bag:~# apt-get install apache2 libapache2-mod-php php composer git-core php-bcmath php-gd php-dompdf php-json php-mbstring php-xml php-tidy php-symfony-polyfill-iconv php-curl php-gettext php-mysql make zip unzip

If you think you are missing one of the dependencies, run “php -m” to list all loaded PHP modules.

The next step is downloading wallabag from git and installing it.

root@bag:~# cd /var/www/
root@bag:/var/www# git clone https://github.com/wallabag/wallabag.git
root@bag:/var/www# cd wallabag/
root@bag:/var/www/wallabag# make install

composer will download and install a lot of PHP dependencies automatically.

When this task is finished, wallabag asks for parameters for the initial setup.

Settings

These parameters can also be changed later and are described in the parameter documentation.

I already have a MariaDB instance running on my network and prepared a wallabag database and user instead of using root. SQLite or PostgreSQL can also be chosen as the database instead of MariaDB (MySQL); see the detailed settings for the database options.
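
For reference, preparing such a database and user on the MariaDB host looks roughly like this. The database name, user, password and charset match the values used below; the client host pattern '192.168.100.%' is an assumption about my LAN and should be adapted.

mysql -u root -p -e "CREATE DATABASE wallabag CHARACTER SET utf8;
CREATE USER 'wallabag'@'192.168.100.%' IDENTIFIED BY 'trustno1';
GRANT ALL PRIVILEGES ON wallabag.* TO 'wallabag'@'192.168.100.%';
FLUSH PRIVILEGES;"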

I also suggest changing the secret to a randomly generated one.
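
One way to generate such a secret (a sketch, any sufficiently long random string will do):

root@bag:/var/www/wallabag# openssl rand -hex 16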

As my instance is a private one, I do not want random users to create their own accounts, so I set fosuser_registration to false.

Creating the "app/config/parameters.yml" file
Some parameters are missing. Please provide them.
database_driver (pdo_sqlite): pdo_mysql
database_host (127.0.0.1): 192.168.100.201
database_port (null): 3306
database_name (symfony): wallabag
database_user (root): wallabag
database_password (null): trustno1
database_path ('%kernel.root_dir%/../data/db/wallabag.sqlite'): 
database_table_prefix (wallabag_): 
database_socket (null): 
database_charset (utf8): 
mailer_transport (smtp): 
mailer_host (127.0.0.1): 
mailer_user (null): 
mailer_password (null): 
locale (en): 
secret (ovmpmAWXRCabNlMgzlzFXDYmCFfzGv): random-generated-secret-instead
twofactor_auth (true): 
twofactor_sender (no-reply@wallabag.org): 
fosuser_registration (true): false
fosuser_confirmation (true): 
from_email (no-reply@wallabag.org): 
rss_limit (50): 
rabbitmq_host (localhost): 
rabbitmq_port (5672): 
rabbitmq_user (guest): 
rabbitmq_password (guest): 
rabbitmq_prefetch_count (10): 
redis_scheme (tcp): tcp
redis_host (localhost): 192.168.100.213
redis_port (6379): 
redis_path (null): 
redis_password (null): redis-requirepass-password
sites_credentials ({ }):

I use Redis instead of RabbitMQ, but it is not necessary for daily operation; it becomes handy when you need asynchronous imports from other read-it-later apps.

Step 1 of 4. Checking system requirements.
+------------------------+--------+----------------+
| Checked | Status | Recommendation |
+------------------------+--------+----------------+
| PDO Driver (pdo_mysql) | OK! | |
| Database connection | OK! | |
| Database version | OK! | |
| curl_exec | OK! | |
| curl_multi_init | OK! | |
+------------------------+--------+----------------+
Success! Your system can run wallabag properly.

Step 2 of 4. Setting up database.
It appears that your database already exists. Would you like to reset it? (y/N)
Seems like your database contains schema. Do you want to reset it? (y/N)y
Droping schema and creating schema
Clearing the cache

Step 3 of 4. Administration setup.
Would you like to create a new admin user (recommended) ? (Y/n)Y
Username (default: wallabag) :admin
Password (default: wallabag) : trustno1
Email:wallabag@mydomain.tld

Step 4 of 4. Config setup.

wallabag has been successfully installed.

 

root@bag:/var/www/wallabag# php bin/console server:run --env=prod
 [ERROR] Running PHP built-in server in production environment is NOT recommended! 

 [OK] Server running on http://127.0.0.1:8000 

// Quit the server with CONTROL-C.

 

 

So let's connect from the client PC to the server where wallabag is running and forward the port to the client, to test the new wallabag installation for the first time.

cave@laptop:~# ssh -L 8080:localhost:8000 root@bag

Open your browser and navigate to localhost:8080 to see the wallabag login page.

Apache vHost

Apache needs access rights to the wallabag folder in /var/www, as described in the documentation.

root@bag:/var/www# chown -R www-data:www-data wallabag/

 

The documentation also provides a ready-to-use vHost for Apache (nginx and lighttpd configs are available as well) to place in your sites-available directory. Update the ServerName and ServerAlias directives to your needs, disable the default vHost, and enable the wallabag vHost. Restart Apache afterwards.

root@bag:/var/www# cd /etc/apache2/sites-available/
root@bag:/etc/apache2/sites-available# vi 100-wallabag.conf
root@bag:/etc/apache2/sites-available# a2dissite 000-default.conf 
Site 000-default disabled.
To activate the new configuration, you need to run:
 systemctl reload apache2
root@bag:/etc/apache2/sites-available# a2ensite 100-wallabag.conf 
Enabling site 100-wallabag.
To activate the new configuration, you need to run:
 systemctl reload apache2
root@bag:/etc/apache2/sites-available# service apache2 restart

 

vHost Example for Apache 2.4

<VirtualHost *:80>
    ServerName bag.mydomain.tld
    ServerAlias bag.mydomain.lan

    DocumentRoot /var/www/wallabag/web
    <Directory /var/www/wallabag/web>
        Require all granted

        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ app.php [QSA,L]
        </IfModule>
    </Directory>

    # uncomment the following lines if you install assets as symlinks
    # or run into problems when compiling LESS/Sass/CoffeeScript assets
    # <Directory /var/www/wallabag>
    #     Options FollowSymlinks
    # </Directory>

    # optionally disable the RewriteEngine for the asset directories
    # which will allow apache to simply reply with a 404 when files are
    # not found instead of passing the request into the full symfony stack
    <Directory /var/www/wallabag/web/bundles>
        <IfModule mod_rewrite.c>
            RewriteEngine Off
        </IfModule>
    </Directory>

    ErrorLog /var/log/apache2/wallabag_error.log
    CustomLog /var/log/apache2/wallabag_access.log combined
</VirtualHost>

Apache Module ReWrite

Before we can use our fresh new wallabag instance, we need to enable mod_rewrite for Apache.

root@bag:/etc/apache2/sites-available# a2enmod rewrite 
Enabling module rewrite.
To activate the new configuration, you need to run:
 systemctl restart apache2
root@bag:/etc/apache2/sites-available# service apache2 restart

 

Don’t forget to use HTTPS, either with an SSL-enabled vHost or with HAProxy SSL termination in front of your web server.

 

There are add-ons available for Firefox and Chrome.

And for your mobile phone, the wallabag app is available on F-Droid and the Google Play Store.

 

https://freedif.org/want-to-read-it-later-save-it-with-wallabag/

local apt-mirror for debian stretch

If you have several Debian systems in your LAN, it makes sense to use apt-mirror or apt-cacher-ng to run your own repository. This helps speed up your installs and upgrades and saves bandwidth when you need it.

apt-mirror downloads all specified repositories and mirrors them locally, so you have them available at greater speed and even in case of problems with your ISP connection. Disadvantage: it takes a lot of space to store all packages, and you probably only need a fraction of the packages downloaded.

Apt-Cacher NG is a caching proxy for downloading packages from Debian-style software repositories.

Often ISPs or hosting providers already do this for their customers. For example, Hetzner has instructions in their wiki on using their apt mirror.

I went for apt-mirror in this tutorial and also use it for my home lab.

Only two packages are needed, without complicated configuration: apt-mirror and apache2.

root@apt:~# apt-get install apt-mirror

Then edit the mirror.list config file.

root@apt:~# vi /etc/apt/mirror.list

 

In my case I wanted to store the repository on an external disk mounted at /mnt/apt-mirror, so it's necessary to uncomment and edit the path-related settings to use a different path.

root@apt:~# cat /etc/apt/mirror.list 
############# config ##################
#
# set base_path /var/spool/apt-mirror
set base_path /mnt/apt-mirror
#
 set mirror_path $base_path/mirror
 set skel_path $base_path/skel
 set var_path $base_path/var
 set cleanscript $var_path/clean.sh
# set defaultarch <running host architecture>
 set postmirror_script $var_path/postmirror.sh
# set run_postmirror 0
set nthreads 20
set limit_rate 100k
set _tilde 0
#
############# end config ##############

deb-amd64 http://ftp.at.debian.org/debian stretch main contrib non-free
deb-amd64 http://ftp.at.debian.org/debian stretch-updates main contrib non-free

clean http://ftp.at.debian.org/debian

The settings for limit_rate and nthreads multiply and define the maximum bandwidth apt-mirror can use to download packages.

20 threads x 100 kByte/s => 2000 kB/s => 16 Mbit/s

I only want to mirror the stretch distribution: the stretch (stable) and stretch-updates repositories with the components main, contrib and non-free. It's a common recommendation not to mirror security updates; they can be mirrored, but should rather be fetched directly.

If you do apt-pinning, you need to skip the clean step afterwards.

Run apt-mirror to initially download the archive.

root@apt:~# apt-mirror
Downloading 60 index files using 20 threads...
Begin time: Fri Jul 21 20:36:55 2017
[20]... [19]... [18]... [17]... [16]... [15]...

This will take some hours until everything is fetched from the source repository.
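
Once a run has finished, obsolete packages can be removed with the clean script apt-mirror generates at the cleanscript path configured above. Skip this if you keep packages around that were removed upstream, e.g. because of apt-pinning.

root@apt:~# bash /mnt/apt-mirror/var/clean.sh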

When the download is finished, we will make the repository available via http to our other Debian hosts.

root@apt:~# apt-get install apache2

Apache should create a default vhost in /etc/apache2/sites-available with a DocumentRoot at /var/www/html

root@apt:/etc/apache2/sites-available# ls
000-default.conf default-ssl.conf

So delete the index.html and place a symlink which points to your local repository.

ln -s /mnt/apt-mirror/mirror/ftp.at.debian.org/debian /var/www/html/debian

Cron the apt-mirror run at least once a day to fetch all updates into your repository, for example at night when your internet connection is less in use.
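
A root crontab entry for a nightly run could look like this (the time is just an example):

root@apt:~# crontab -l | tail -n 1
0 3 * * * /usr/bin/apt-mirror >/dev/null 2>&1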

That’s it. Now you can use your own repository in your LAN. Just update the sources.list on your hosts and point it at your mirror's hostname or IP.
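
On the clients, the sources.list entries then point at the mirror host; the hostname apt.lan is only an example.

deb http://apt.lan/debian stretch main contrib non-free
deb http://apt.lan/debian stretch-updates main contrib non-free
# security updates still come directly from debian.org
deb http://security.debian.org stretch/updates main contrib non-free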

As there are no published numbers for the size of a single release with amd64 only, I share the size of my mirror here.

root@apt:/mnt/apt-mirror# du -hd1
61G ./mirror
145M ./var
138M ./skel
61G .

 

upgrade Debian Jessie to Stretch

Hooray, Debian 9, codename Stretch, has been released.

Here is how to upgrade from Debian 8 Jessie to Debian 9 Stretch safely. If you haven't used backports or apt-pinning, this should be a no-brainer.

Have a look at your Debian Version before and after the upgrade.

~# cat /etc/debian_version 
8.8
~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

 

Update your system before doing anything.

apt-get update
apt-get upgrade
apt-get dist-upgrade

 

Replace jessie with stretch in /etc/apt/sources.list

deb http://ftp.at.debian.org/debian jessie main contrib

deb http://ftp.at.debian.org/debian jessie-updates main contrib

deb http://security.debian.org jessie/updates main contrib

to

deb http://ftp.at.debian.org/debian stretch main contrib

deb http://ftp.at.debian.org/debian stretch-updates main contrib

deb http://security.debian.org stretch/updates main contrib
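
If you prefer a one-liner, the same replacement can be done with sed. This is a sketch: back up the file first and check the result, especially if your sources.list contains other occurrences of "jessie" you want to keep.

cp /etc/apt/sources.list /etc/apt/sources.list.bak
sed -i 's/jessie/stretch/g' /etc/apt/sources.list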

 

Do the upgrade!

apt-get update
apt-get upgrade
apt-get dist-upgrade

 

Check your Debian version again after the upgrade.

~# cat /etc/debian_version 
9.0
~# cat /proc/version
Linux version 4.4.67-1-pve (root@nora) (gcc version 4.9.2 (Debian 4.9.2-10) ) #1 SMP PVE 4.4.67-89 (Thu, 8 Jun 2017 15:12:11 +0200)
root@cdn:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Voilà, looks fine to me.

https://linuxconfig.org/how-to-upgrade-debian-8-jessie-to-debian-9-stretch

https://linuxconfig.org/check-what-debian-version-you-are-running-on-your-linux-system

Dynamic DNS – update domain record yourself

For some silly 'reasons', ISPs tend to hand out only dynamic IPv4 addresses to their customers:

  • to discourage the user to run a server at home
  • to sell overpriced business contracts with symmetric bandwidth and static IPv4 and other additional useless goodies which are missing in the non-business contract

But the dynamic IPv4 is not a big deal. There are lots of DynDNS providers which have focused on solving this problem of a floating IP address at home.

Some offer this service for free, with caveats such as:

  • nagging its users to take out a paid subscription
  • annoying its users with advertisements or spam
  • cancelling hosts or accounts after a short period of non-usage
  • hiding the few free features almost undiscoverably between a ton of commercial-only features

Two examples:

  • no-ip.com – works well, nags monthly
  • dyn.com – one of the bigger commercial DynDNS providers

The more interesting providers:

  • he.net – Hurricane Electric – untested by me, though they run a good IPv6 tunnel broker service
  • freedns.afraid.org – this is the one I have used until now. It just works without any hassle; I'd recommend it.
  • nsupdate.info – they run an open DynDNS service and also develop the software under a BSD-3 license, so anyone can self-host a dynamic DNS provider with the full stack of services. Their documentation is nice, and the project is available here: github.com/nsupdate-info/nsupdate.info. Though it's tempting to self-host the service, I guess it's too much just for our use case here. Maybe I'll cover this topic in the future.

The major services are also supported by DD-WRT / LEDE / OpenWrt out of the box to update the domain record when the WAN IP changes.

The disadvantage of the DynDNS providers above is that they give you a subdomain which points to your home IP address, but you cannot choose the domain. So the domain is mostly already known as a DynDNS domain and often filtered or blocked by proxies.

Gandi DNS API Update

As all of my domains are hosted at Gandi, I wanted to keep my own domain name for my home equipment too, e.g. dynamic.cave_at_home.tld, which is not so obviously a dynamically updated domain record.

Gandi provides remote APIs using the XML-RPC protocol, making it easy to build third-party applications to manage your Gandi resources (domains, contacts, hosting, etc.). So we can use this API to update our domain record ourselves whenever the IP has changed, without any third party.

I have the following setup at home: my modem is connected to my ISP and provides an IPv4 address via DHCP to my router's WAN port. I want to access my webserver from the outside via webserver.domain.tld. To achieve this, I run a separate Linux container with only a Python stack and an update script which accesses the Gandi API.

The Gandi API endpoint is https://rpc.gandi.net/xmlrpc/ and the complete documentation is available at http://doc.rpc.gandi.net/index.html, but we are only interested in updating a single domain record in our zone file: http://doc.rpc.gandi.net/domain/reference.html#domain.zone.update

A quick search on GitHub for “gandi” and “dyndns” shows three repositories which have already done this. Happily, they are developed and released under open source licenses, so we can reuse and improve them.

I decided to use the one from jasontbradshaw because it supports multiple subdomains in a single config file and seems to do what I need. I forked it and fixed a problem with the TTL, which got overwritten every time with 10800 seconds (3 hours). The project is obviously pretty active on GitHub (see the gandi-dyndns network graph) with heavy forking, which is also a plus.

To use it, we first need to enable the API at Gandi for our user. Visit https://www.gandi.net/admin/api_key and apply for the production API key by following their directions.

Create a new domain record in your zone file by adding a new line:

  1. Click on “Edit the Zone” under “Zone files”.
  2. Click “Create a new version”.
  3. Click “Add”.
  4. Change the values to: Type: A, TTL: 600, Name: dynamic, Value: 127.0.0.1
  5. Click “Submit”.
  6. Click “Use this version”.
  7. Click “Submit”.

Edit the config files as described here: script-configuration.

My config looks like this:

root@dyndns:~/gandi-dyndns-master# cat config.json 
{
 "api_key": "cP9eAqobtvxkTn6hz4wCuRLE",
 "domains": {
 "domain.tld": [ "dynamic", "sub1", "sub2"]
 }
}

After some testing, it is cron'ed every 5 minutes and checks for updates.

 

root@dyndns:~/gandi-dyndns-master# crontab -l | tail -n 2
*/5 * * * * /root/gandi-dyndns-master/gandi_dyndns.py >/dev/null 2>&1

A check back at Gandi shows that a new zone file version has been created and the defined records were updated correctly. The small TTL ensures the stale IP is flushed quickly from DNS caches.

Now all my subdomains point to the WAN IP of my home router at all times, and the router forwards ports 80 and 443 to my webserver.

HAProxy multi domain SSL termination

HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP- and HTTP-based applications. It is also well suited to handle SSL termination for other services, so the webserver (Apache, nginx, whatever) can focus on the content while the crypto work is offloaded to HAProxy.

HAProxy binds to ports 80 and 443 and routes the traffic to the webserver backends depending on the requested hostname.

For Debian Stretch there is a package with the latest stable version 1.7 available for install.

apt-get update && apt-get install haproxy

The settings for HAProxy are edited in a single file.

root@haproxy:/etc/haproxy# vi haproxy.cfg

In the global section, some settings should be adjusted to improve security. The Applied Crypto Hardening handbook from https://bettercrypto.org/ has a section on HAProxy settings, though they could be stricter: we do not need IE 6.0 support, as that browser version should not be used anyway.

Here are the updated ciphers, which were also used in my earlier Apache post.

 ssl-default-bind-ciphers DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA 
 ssl-default-bind-options no-sslv3 no-tls-tickets #disable SSLv3
 tune.ssl.default-dh-param 4096 #tune DH to 4096

This disables cipher/protocol combinations without perfect forward secrecy; clients that do not support forward secrecy are excluded.

The tune.ssl.default-dh-param setting defines the maximum size of the Diffie-Hellman parameters used for generating the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange. The final size will try to match the size of the server's RSA key (e.g. a 2048-bit temporary DH key for a 2048-bit RSA key), but will not exceed this maximum value. This value is not used if static Diffie-Hellman parameters are supplied, either directly in the certificate file or by using the ssl-dh-param-file parameter.

Custom static parameters are known to be more secure and therefore their use is recommended. Custom DH parameters may be generated by using the OpenSSL command “openssl dhparam <size>”, where size should be at least 2048, as 1024-bit DH parameters should not be considered secure anymore.

For the DH params we will generate a 4096-bit file. This may take some time depending on CPU power.

openssl dhparam -out /etc/haproxy/certs/dhparams.pem 4096

The setting that points HAProxy at this file is ssl-dh-param-file:

ssl-dh-param-file /etc/haproxy/certs/dhparams.pem

HAProxy needs the SSL certificate and key combined in a single PEM file. A self-signed certificate works fine for testing/production and can be replaced with a Let's Encrypt signed certificate later on.

root@haproxy:/etc/ssl# mkdir haproxy
root@haproxy:/etc/ssl# openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:4096 -keyout /etc/ssl/private/domain.tld.key -out /etc/ssl/certs/domain.tld.pem 
root@haproxy:/etc/ssl# cat /etc/ssl/certs/domain.tld.pem /etc/ssl/private/domain.tld.key > /etc/ssl/haproxy/domain.tld.pem

The frontend needs to be added to the config file.

frontend frontend_public
 bind *:80
#Add multiple certificates, one for each domain.tld
 bind *:443 ssl crt /etc/ssl/haproxy/sub1.domain.tld.pem crt /etc/ssl/haproxy/sub2.domain.tld.pem crt /etc/ssl/haproxy/sub3.domain2.tld.pem 
 
 mode http
#redirect http to https
 redirect scheme https code 301 if !{ ssl_fc }

#:::ACL:::Define an ACL for each subdomain to terminate
 acl webserver1-acl ssl_fc_sni -i sub1.domain.tld
 acl webserver2-acl ssl_fc_sni -i sub2.domain.tld
 acl webserver3-acl ssl_fc_sni -i sub3.domain2.tld

#:::BACKEND:::Use Backend Section
 use_backend webserver1-backend if webserver1-acl
 use_backend webserver2-backend if webserver2-acl
 use_backend webserver3-backend if webserver3-acl

To enable SNI functionality, the crt setting must be specified for each certificate/domain combination, and the matching certificate is presented to the client depending on the requested hostname.

An access control list (ACL) is created for each subdomain that is terminated at HAProxy, matching on ssl_fc_sni, the SNI of the terminated TLS connection (req_ssl_sni only works for unterminated TCP-mode passthrough). For each ACL a backend is selected.

A backend section also needs to be added for each webserver/domain combination.

backend webserver1-backend
 mode http
 server webserver1 192.168.1.101:80 check
 http-request set-header X-Forwarded-Port %[dst_port]
 http-request add-header X-Forwarded-Proto https if { ssl_fc }

backend webserver2-backend
 mode http
 server webserver2 192.168.1.102:80 check
 http-request set-header X-Forwarded-Port %[dst_port]
 http-request add-header X-Forwarded-Proto https if { ssl_fc }

backend webserver3-backend
 mode http
 server webserver3 192.168.1.103:80 check
 http-request set-header X-Forwarded-Port %[dst_port]
 http-request add-header X-Forwarded-Proto https if { ssl_fc }

With these settings, an HAProxy setup with three different backend webservers should work without any problems. SSL termination is done at HAProxy, and the webservers focus on serving content only.
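
Before applying changes, the configuration can be checked for syntax errors and then reloaded without dropping existing connections:

root@haproxy:/etc/haproxy# haproxy -c -f /etc/haproxy/haproxy.cfg
root@haproxy:/etc/haproxy# systemctl reload haproxy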

Next time I'll show how to install Let's Encrypt certificates without any HAProxy downtime.