Docker – First steps and basic commands

As I have to use Docker, I decided to write this (mostly for myself) so I can look up useful Docker things.

I got most of them from https://docker-curriculum.com/ – visit that site for more details!

Definitions:

Images: The blueprints for containers – they contain the filesystem and environment a container is started from.

Containers: Created from images with docker run – they are the running instances (the processes) of those images, so to speak.

Commands:

#download an image
docker pull $imagename

#run a container from an image
docker run $imagename
docker run -d $imagename #detached (runs in the background)
docker run -P $imagename #publish the exposed container ports on random host ports

##mostly used:
docker run -d -P --name $customname $imagename

#run a container with an interactive shell
docker run -it $imagename sh

#show all containers (running and stopped)
docker ps -a

#show the port mappings of a container
docker port $customname

#stop a container
docker stop $customname #or the container id

#delete exited containers (caution)
docker container prune

#show local images
docker images
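
To tie these together, here is what a full cycle could look like with the official nginx image from Docker Hub (the container name "webtest" and the host port are just examples):

docker pull nginx
docker run -d -P --name webtest nginx
docker port webtest #shows something like 80/tcp -> 0.0.0.0:32768
docker stop webtest
docker container prune #removes the stopped container again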

X-forwarding with XPRA

I have a server, as some of you may know. I use mosh for my SSH connections, and like a “normal” admin I was using VNC for remote graphical sessions.

The problem: It's shit.

Maybe I was using it wrong all those years, but my client informed me quite often that an 8-character password was the best the server could do.

Anyway – I had been running a graphical program for quite some time when I was notified that I had been disconnected because of too many connection attempts.

I had to kill the VNC server – and with it the running program – and restart everything.

Apparently I am on some kind of list now, because a short while later I had too many failed attempts again.

That's when I thought about SSH with X forwarding.

Problem: With plain X forwarding, the program stops when the pipe (the SSH connection) is broken.

Solution: Xpra ( https://xpra.org/ )

It's a program that forwards an X display and lets you detach and reattach whenever you want.

From the site:

xpra start ssh:SERVERHOSTNAME --start=xterm

This starts a terminal (for test purposes – you can start Firefox as well, if you want).

xpra attach ssh:serverhostname

This reattaches the running session.
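
Two more things I find useful (take this with a grain of salt and check the xpra man page, as the exact syntax differs between versions): simply closing or interrupting the attached client detaches without killing the remote program, and when you are really done you can end the session from the client side:

xpra stop ssh:serverhostname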

Quite handy!

How to use aurutils

I don't want to have to go to Reddit and hope every time I need this, so:

Search for packages:

aur search $yourpackagename

Install packages:

aur sync $yourpackagename
sudo pacman -S $yourpackagename

Update AUR packages:

aur sync -u
sudo pacman -Syu (Updates all packages)
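
Put together, installing one package from the AUR (the name "somepackage" is just a placeholder, and this assumes the local repository is already set up as described in the aurutils man pages) looks like this:

aur search somepackage
aur sync somepackage #build it and add it to the local repo
sudo pacman -S somepackage #install it from that repo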

Easy enough… So far…

Fail2Ban – unban ip

I “secured” my server with fail2ban, a tool that bans IPs after failed SSH logins (in my setup, anyway)…

So… I have friends who have accounts on the server as well.

You can extrapolate for yourself, but the essence is that I have to unban IPs on a regular basis.

So here is the line to unban an IP:

sudo fail2ban-client set sshd unbanip $ipoffriend

sshd is the name of my jail (yours may differ), and $ipoffriend has to be replaced with your friend's IP.
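
To see which IPs are currently banned in the first place, the status command helps (sshd again being my jail name):

sudo fail2ban-client status sshd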

Broken HDD – mount as Readonly in fstab

As mentioned before, I have a semi-broken HDD.

When I try to read some specific files I get read errors and the disk unmounts. I have tried many things, but I am at a loss… for the moment. As it is a fairly big HDD (3 TB) and I can't back up all the files at once, I want to mount it read-only for now, so I can't accidentally save anything on it.

I know it's no long-term solution, but whatever – it's a quick fix.

To make it mount read-only I edit /etc/fstab.

Before that I have to find out the UUID of the HDD with:

sudo blkid

In my case it's:

/dev/sdd1: UUID="46f8ba4c-c330-4b9f-8cd5-1bee9e2961d7" UUID_SUB="dcd7f171-9250-4442-b6a8-0b175475f8c3" TYPE="btrfs" PARTUUID="96b59094-606e-4b2a-875f-7471b69cf066"

Now I edit the fstab and add the following line:

UUID=46f8ba4c-c330-4b9f-8cd5-1bee9e2961d7 /media/my_mount_dir btrfs noauto,ro,users 0 0

As I don't want it automounted, I put in “noauto”.

Finally I run:

sudo mount -a
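
One note for future me: because of the “noauto” option, mount -a skips this entry, so when I actually want to read the disk I mount it by its mount point and fstab supplies the read-only options:

sudo mount /media/my_mount_dir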

Done!

git error: object file $xyz is empty – how to fix it! (tl;dr: use “git-repair”)

I just wanted to update my git repo with a pull and got the following error:

error: object file .git/objects/bf/75a18abe956a50b9ffbbbaed11d896a42dc278 is empty

I don't know why I got this error, but the fix is described on Stack Overflow:

https://stackoverflow.com/questions/11706215/how-to-fix-git-error-object-file-is-empty

The solution wasn't in the first answer – or it was, but more complicated than necessary, which is the reason I am writing this blog entry…

The solution for me was:

find .git/objects/ -type f -empty | xargs rm
git fetch -p
git fsck --full

That worked, but then I got a different error:

error: refs/heads/master: invalid sha1 pointer bf75a18abe956a50b9ffbbbaed11d896a42dc278

Nice!

So after an extended Google search (about 20 seconds) I found the (real) solution:

git-repair

So I installed it with sudo apt install git-repair and ran it in the terminal, inside my git folder.

After waiting for an eternity (about 20 minutes on my notebook), git-repair did its magic – and my problem was still there.

Well… it said I should run it with the --force option. I wish it had suggested that before I waited 20 minutes…

The message was:

Gogs: Repository owner does not exist
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Trying to recover missing objects from remote origin.
Gogs: Repository owner does not exist
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
5 missing objects could not be recovered!
To force a recovery to a usable state, retry with the --force parameter.

I made sure that the repo still exists (I'm not the only contributor), that my user still exists, that I have internet, that the server is running and that I can connect to the repo.

After that I typed git-repair --force and waited another eternity (which is now defined as 20 minutes, btw…).

After THAT… it worked… kind of. The repo is fixed now, but I still have to figure out why I got the error in the first place and why git had problems connecting to the git server…

So… if you have any problem with git: give git-repair a chance (and back up your git folder before you do it – or don't… I didn't, because I'm lazy and could have cloned the repo from my git server anytime – but that's effort…).
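
If you want that cheap insurance anyway, a plain copy of the .git directory before running git-repair is enough – something like:

cp -a .git ../git-backup-before-repair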

TL;DR: sudo apt install git-repair && git-repair --force in your git folder, and be prepared to do something else during the eternity it takes.

(Installing is obviously only needed if you haven't installed it already – but if you have, you have probably already used it and wouldn't be reading this…)

SSL on Apache2 (while using WordPress)

Sooo… I fixed it and I know what went wrong with SSL – and it was actually not that hard to fix.

To cut things short: if you are using WordPress and change the domain (or subdomain) in your vhosts, change it in the WordPress settings menu BEFORE. Otherwise everything will break, and if you are trying to deploy SSL at the same time it will break even more, and you will get the most interesting error messages in your browser plus a lot of other ambiguous stuff…

But let's begin at the start.

As I am hosting several websites on my server with one IP, I have to use vhosts. If they are properly configured, that's no problem – but my friend bought a new domain and wanted to migrate the WordPress sites to it and get HTTPS (i.e. SSL encryption) for them, for obvious reasons.

I used Let's Encrypt (https://wiki.debian.org/LetsEncrypt) because it's free and relatively easy (very easy, to be honest). I know there were some fuckups with it in the past, but let's be plain: all I want is that the little icon in your browser shows a green lock (or whatever symbol your browser uses). That's it. It's okay for me that the security isn't perfect, because I only care that the user doesn't get a “this site might be insecure” message – or, even worse, that the browser decides not to show the site at all. I probably have to apologize for my long sentences at this point – but I am German, and as a German I can tell you: these sentences aren't long at all… 😉

Back to topic: I use Let's Encrypt and certbot (“sudo apt install certbot” on Debian) to get my green lock in the browser.

The first thing to note:

Every domain should get its own certificate. (Subdomains excluded, obviously – they go on the same certificate.)

Do not use the same certificate for different domains!

The next thing to note:

Configure your vhosts (and web server) beforehand – certbot needs all the sites you want a certificate for to be reachable.

Once that's done, you can create your certificate with certbot:

sudo certbot --apache -d yourdomain.xyz -d www.yourdomain.xyz

All subdomains should be in there (remember: www is a subdomain).

If you add another subdomain later, you can expand the current certificate by ADDING the new subdomain to your certbot line. Be careful to keep all your existing domains in the line as well.

(You will then be asked whether you want to expand the certificate – that's what you want in this case.)
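
Sticking with the example above, adding a hypothetical subdomain "new" would look like this – certbot then offers to expand the existing certificate:

sudo certbot --apache -d yourdomain.xyz -d www.yourdomain.xyz -d new.yourdomain.xyz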

Anyway – you will probably be asked whether you want certbot to change the settings of your web server. I don't recommend letting it, because it broke things – at least for me.

After you have run certbot, your vhosts will have been updated as well.

If you encounter a problem, check your vhosts – and if you change something there, remember to reload or restart the server.

Here is my (slightly modified) conf for this site.

<VirtualHost *:80>
        ServerAdmin me@me.best
        ServerName diemo.best
        ServerAlias www.diemo.best
        DocumentRoot /var/www/mylocation
        <Directory /var/www/mylocation>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Require all granted
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        RewriteEngine on
        RewriteCond %{SERVER_NAME} =www.diemo.best [OR]
        RewriteCond %{SERVER_NAME} =diemo.best
        RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
<VirtualHost *:443>
        ServerAdmin me@me.best
        ServerName diemo.best
        ServerAlias www.diemo.best
        DocumentRoot /var/www/mylocation
        <Directory /var/www/mylocation>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Require all granted
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        # The following three lines were added by certbot:
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/diemo.best/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/diemo.best/privkey.pem
</VirtualHost>

Notice: the Include line and the two SSLCertificate* lines at the end of the 443 vhost were added by certbot – you should have the rest ready when you run certbot. (You'll probably get an SSL error at first, but that's okay.)

Congratulations! You should now have working SSL.

P.S.: The certificates have to be renewed every 90 days. Maybe I will write a post about that later.
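
Until that post exists, the short version: on Debian the certbot package usually sets up automatic renewal for you (a systemd timer or cron job), and you can test whether a renewal would go through with:

sudo certbot renew --dry-run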

Copying a Website with wget

Today I was asked to download an old website for archiving purposes.

I decided to use wget.

The command is:

wget --recursive --no-clobber --page-requisites --convert-links --html-extension --no-parent  http://www.domain.xyz

The function of the arguments is as follows:

--recursive -> kind of obvious… follow links on the website to download more than just the index page

--no-clobber -> do not download files that are already there

--page-requisites -> download everything needed to display the page (CSS, images, …)

--convert-links -> convert the links in the copy so they point to the local files (if you don't do that, clicking a link will take you back to the original site on the server…)

--html-extension -> save pages that don't end in .html (for example ones generated by server-side scripts such as a visitor counter) with an .html extension, so they open properly in the local copy

--no-parent -> don't ascend above the starting directory, i.e. only download subpaths of the given URL (other domains, e.g. Facebook buttons, are not followed anyway unless you tell wget to span hosts)
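
One more thing the list above doesn't mention: with wget's default directory layout the copy lands in a folder named after the host, inside your current working directory, so afterwards you can check it with something like:

ls www.domain.xyz
firefox www.domain.xyz/index.html #or any other browser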

That's it, basically… Easy… 😉

Doing a Backup to a Remote Server with Borgbackup

As I am paranoid, I wanted to do a backup of my server – as implied in the WordPress setup howtos, I have some WordPress instances on the server, as well as some files and game servers.

Naturally I want to have a backup. The catch: I don't have local space, so I have to do it remotely.

I have known borgbackup for some years now, so I used it. (https://borgbackup.readthedocs.io/en/stable/quickstart.html#)

It is mostly straightforward, assuming that you can connect via SSH.

For my backup I used the following commands, which I explain below:

First I had to create a new folder on my backup destination server, in this case $backupfolder. After that I used:

borg init --encryption=repokey myuser@mydestinationserver:/path/to/my/hdd/for/backups/$backupfolder

sudo borg create --stats --compression zstd,22 --progress myuser@mydestinationserver:/path/to/my/hdd/for/backups/$backupfolder::archivename /home/ /var/www/ /etc/apache2/

The init is needed to create the repo on the remote server. You can use any location you want, but MAKE SURE you have enough free space there. You will be asked for a repository passphrase that you shouldn't forget if you ever want to restore the data.

Next I started the backup. The compression is set this high to use less bandwidth. The --stats are shown after borgbackup has finished, while --progress is updated in real time.

The archivename is just a name for this archive inside the repository. Mine was “june”, for example – because it's June. My next backup will be in July, and the name will be “july”. Archive names have to be unique within the repo, though, so next year I'll either delete the old “june” first (borg delete) or pick a different name – otherwise old archives just sit on the server forever.

The three folders at the end are the local folders that are being backed up.
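
To check the backup or get files back later, the standard borg commands are list and extract (the repo path and archive name are the ones from above, the file path is just an example); borg extract restores into the current directory, so change into an empty folder first:

borg list myuser@mydestinationserver:/path/to/my/hdd/for/backups/$backupfolder
borg extract myuser@mydestinationserver:/path/to/my/hdd/for/backups/$backupfolder::june home/myuser/somefile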

That's it!

Changing the maximum upload filesize in WordPress

A friend of mine (and I) wanted to upload some files – mainly images… and the maximum file size was 2 MB. That's a bummer…

So I went to https://www.wpbeginner.com/wp-tutorials/how-to-increase-the-maximum-file-upload-size-in-wordpress/ to learn how to change the upload size.

Foreshadowing: Nothing worked.

The trick was to edit the global php.ini, not to create a local one. At least the tutorial gave me the right idea – thanks!

The global php.ini is (for me at least) in /etc/php/7.3/apache2/php.ini.

Obviously you may have to adjust (at least) the version number if you are reading this in the future.

Another hint for finding your php.ini is typing php --ini in your terminal (note that the CLI may use a different php.ini than Apache).

In your ini you can search for “upload” and get to the right section called “File Uploads”.

There you change “n” to your desired value: upload_max_filesize = nM ; n ∈ ℕ

In my case I changed it to 64M and got… 8M as the maximum. I guess in my theme there is some other max value in place that sits at 8 MB. As 8 MB is plenty at the moment, I don't care, and I will tackle the problem when either I need to upload bigger files or my friend complains again…
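
A note for future me, and it is only an educated guess for this install: PHP's post_max_size defaults to 8M and also caps uploads, so the 8 MB limit probably comes from there. Raising both values in the same php.ini (and restarting Apache afterwards, e.g. with sudo systemctl restart apache2) should lift it:

upload_max_filesize = 64M
post_max_size = 64M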