Wednesday, July 10, 2019

Raspberry Pi 4 heatsink testing

With all the talk of how hot the Raspberry Pi 4 runs, I thought I'd do some testing to see how different heatsinks perform. I had a few heatsinks on hand in my parts collection, ordered one more specifically for testing, and set out to determine how effective they are. According to this page, thermal throttling of the CPU happens between 80-85°C, and above 85°C the GPU is also clocked down, so let's see how far below that point we can stay with various heatsinks.


The Small and Medium heatsinks are this product from Amazon; I cut one of them to different lengths to match the dimensions of other heatsinks available for Raspberry Pis. Dimensions of the two sizes I cut:
Small Heatsink: 22mm x 24mm x 6mm
Medium Heatsink: 22mm x 45mm x 6mm

The heatsink with fan is the larger of the two heatsinks from an Intel quad port gigabit network card that I had sitting in my parts box. It's quite thin, so I coupled it with a 40mm fan using some custom designed and 3D printed brackets: the fan screws to the brackets, and the brackets hold onto the heatsink with friction. For such a hacky job this setup performed the best.
Heatsink with 40mm Fan: 34.5mm x 34.5mm x 6mm (22mm tall with fan)



The large heatsink is one I ordered from RS. It's quite large (relative to a Raspberry Pi) and heavy, and I had to cut the tabs off with a hacksaw as they prevented it from sitting flat on the SoC.
Large Heatsink: 37.5mm x 37.5mm x 33mm

To get the Average Idle Temp I let the Pi settle for at least an hour with the heatsink attached, then recorded the temperature every 1-2 seconds for 60 seconds.

To load up all cores I used a simple one-line script that I found on Stack Overflow. It launches 4 instances of dd copying /dev/zero to /dev/null, which creates enough load to use 100% of the CPU and as a result pushes the temperature up. I know it's not ideal, but it's a quick solution that does what I required. The script is below; simply paste it into a terminal window and press enter. To exit the script and kill the dd instances, just press enter again.

fulload() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fulload; read; killall dd

The Time till cap at 100% is the time from when I launched the dd instances until the first time the command "vcgencmd get_throttled" reported that the CPU had been frequency capped. I was logging the CPU clock speed from both /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq and vcgencmd measure_clock arm, and interestingly, when frequency capping was reported neither value dropped below 1500MHz.
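The get_throttled value is a bitfield, so it helps to decode it rather than eyeball the hex. A small sketch is below; the bit positions follow the Raspberry Pi firmware documentation, and the 0x20000 value is a made-up sample rather than live output from my Pi.

```shell
# Decode a sample `vcgencmd get_throttled` value; bit meanings are from the
# Raspberry Pi firmware docs, and 0x20000 is a made-up example (bit 17 set).
flags=$((0x20000))
[ $((flags & 0x1)) -ne 0 ]     && echo "under-voltage now"
[ $((flags & 0x2)) -ne 0 ]     && echo "ARM frequency capped now"
[ $((flags & 0x4)) -ne 0 ]     && echo "throttled now"
[ $((flags & 0x10000)) -ne 0 ] && echo "under-voltage has occurred"
[ $((flags & 0x20000)) -ne 0 ] && echo "ARM frequency capping has occurred"
[ $((flags & 0x40000)) -ne 0 ] && echo "throttling has occurred"
echo "raw value: $(printf '0x%x' "$flags")"
```

Substituting the real value reported by vcgencmd for the sample shows exactly which events have occurred since boot.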

The below tests were done at an ambient room temperature of 18-20° Celsius.

Description              Average Idle Temp  Max Temp  Time till cap at 100% load
No Heatsink              59.95°C            82°C      3 minutes and 22 seconds
Small Heatsink           55.94°C            82°C      14 minutes and 15 seconds
Medium Heatsink          54.41°C            81°C      15 minutes and 39 seconds
Large Heatsink           45.55°C            71°C      Never; ran for 2.5 hours averaging 67.97°C
Heatsink with 40mm Fan   34.75°C            46°C      Never; ran for 2.5 hours averaging 43.03°C

From the above results it's quite obvious the heatsink and fan combination performed the best. I think I'll be going with the fan option, as even the large heatsink gets close to the 80°C mark.

I'm keen to test some other readily available and preferably cheap heatsinks; if anyone has suggestions, feel free to hit me up.


Tuesday, August 28, 2018

Fail2ban and Home Assistant

Following on from my earlier post, I wanted to set up fail2ban on my Home Assistant server. This server is internet facing so I can do things like turn my air conditioner on and off remotely; it's configured for SSL and uses password authentication, but I thought it would be prudent to add an extra layer of security.

First, Home Assistant needs to be configured to log invalid authentications; the logging configuration needs to be modified in /home/homeassistant/.homeassistant/configuration.yaml:
$ nano /home/homeassistant/.homeassistant/configuration.yaml
Add the below lines or modify the existing logger configuration:
logger:
  default: critical
  logs:
    homeassistant.components.http.ban: warning
Restart Home Assistant
$ sudo su -
# systemctl restart home-assistant
Now tail Home Assistant's log file and attempt to log in with an incorrect password, looking for "Login attempt or request with invalid authentication from" entries in the log file:
# tail -f /home/homeassistant/.homeassistant/home-assistant.log
2018-08-29 14:25:00 DEBUG (MainThread) [homeassistant.components.mqtt] Received message on home/outside/humidity: b'59.9'
2018-08-29 14:26:52 DEBUG (MainThread) [homeassistant.components.mqtt] Received message on home/outside/temperature: b'10.9'
2018-08-29 14:27:13 DEBUG (MainThread) [homeassistant.components.mqtt] Received message on home/outside/tanklevel: b'43'
2018-08-29 14:28:15 WARNING (MainThread) [homeassistant.components.http.ban] Login attempt or request with invalid authentication from xxx.xxx.xxx.xxx
If that was successful, then it's time to get fail2ban installed and configured. As with my previous post, EPEL needs to be enabled to install fail2ban:
$ sudo su -
# dnf install epel-release
# dnf install -y fail2ban
Fail2ban doesn't have a predefined configuration for Home Assistant, so a filter needs to be created:
# nano /etc/fail2ban/filter.d/ha.conf
[INCLUDES]
before = common.conf

[Definition]
failregex = ^%(__prefix_line)s.*Login attempt or request with invalid authentication from .*$
ignoreregex =
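Before wiring the filter in, the core of the pattern can be sanity-checked against a sample log line with plain grep; fail2ban also ships fail2ban-regex for testing filters properly. The IP below is a documentation placeholder.

```shell
# Check the failregex core against a sample Home Assistant log line; the
# proper tool is: fail2ban-regex <logfile> /etc/fail2ban/filter.d/ha.conf
line='2018-08-29 14:28:15 WARNING (MainThread) [homeassistant.components.http.ban] Login attempt or request with invalid authentication from 203.0.113.5'
pattern='Login attempt or request with invalid authentication from'
echo "$line" | grep -Eq "$pattern" && echo "filter would match"
```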
Now a jail configuration needs to be created, be sure that logpath is the actual path to your home-assistant.log log file:
# nano /etc/fail2ban/jail.d/ha.conf
[DEFAULT]
# Short bantime for testing; raised to 8 hours once everything works
bantime = 30
maxretry = 3

# Email config
sender = vmmgt01@hipowered.net
destemail = bircoe@gmail.com

# Action "%(action_mwl)s" will ban the IP and send an email notification including whois data and log entries.
action = %(action_mwl)s

[ha]
enabled = true
filter = ha
logpath = /home/homeassistant/.homeassistant/home-assistant.log
Fail2ban can now be started:
# systemctl start fail2ban
Confirm that it's running:
# systemctl status fail2ban
And lastly check that there are active jails:
# fail2ban-client status
Status
|- Number of jail: 1
`- Jail list: ha
Now test that fail2ban detects Home Assistant login attempts: tail the fail2ban log file, then log out of and back into the Home Assistant web interface with an invalid password. It should result in log entries showing the failed attempts:
# tail -f -n 20 /var/log/fail2ban.log
2018-08-29 13:25:37,907 fail2ban.server         [10208]: INFO    Starting Fail2ban v0.10.3.fix1
2018-08-29 13:25:37,916 fail2ban.database       [10208]: INFO    Connected to fail2ban persistent database '/var/lib/fail2ban/fail2ban.sqlite3'
2018-08-29 13:25:37,918 fail2ban.jail           [10208]: INFO    Creating new jail 'ha'
2018-08-29 13:25:37,922 fail2ban.jail           [10208]: INFO    Jail 'ha' uses poller {}
2018-08-29 13:25:37,922 fail2ban.jail           [10208]: INFO    Initiated 'polling' backend
2018-08-29 13:25:37,932 fail2ban.filter         [10208]: INFO    Added logfile: '/opt/homeassistant/.homeassistant/home-assistant.log' (pos = 5873, hash = 02ec3aefc005465a6cd8db91eff2d5e57c45757e)
2018-08-29 13:25:37,932 fail2ban.filter         [10208]: INFO      encoding: UTF-8
2018-08-29 13:25:37,933 fail2ban.filter         [10208]: INFO      maxRetry: 3
2018-08-29 13:25:37,934 fail2ban.filter         [10208]: INFO      findtime: 600
2018-08-29 13:25:37,934 fail2ban.actions        [10208]: INFO      banTime: 30
2018-08-29 13:25:37,938 fail2ban.jail           [10208]: INFO    Jail 'ha' started
2018-08-29 13:27:49,125 fail2ban.filter         [10208]: INFO    [ha] Found xxx.xxx.xxx.xxx - 2018-08-29 13:27:48
2018-08-29 13:27:51,330 fail2ban.filter         [10208]: INFO    [ha] Found xxx.xxx.xxx.xxx - 2018-08-29 13:27:51
2018-08-29 13:27:52,533 fail2ban.filter         [10208]: INFO    [ha] Found xxx.xxx.xxx.xxx - 2018-08-29 13:27:52
2018-08-29 13:27:52,678 fail2ban.actions        [10208]: NOTICE  [ha] Ban xxx.xxx.xxx.xxx
2018-08-29 13:28:23,941 fail2ban.actions        [10208]: NOTICE  [ha] Unban xxx.xxx.xxx.xxx
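Once bans start accumulating, the log can be summarised to spot repeat offenders. The sketch below counts Ban actions per IP using sample NOTICE lines in place of the real /var/log/fail2ban.log:

```shell
# Count Ban actions per IP; the sample lines stand in for the real log,
# which would be fed to the same awk pipeline instead.
log='2018-08-29 13:27:52,678 fail2ban.actions [10208]: NOTICE  [ha] Ban 203.0.113.5
2018-08-30 02:11:09,120 fail2ban.actions [10208]: NOTICE  [ha] Ban 198.51.100.7
2018-08-30 05:44:02,987 fail2ban.actions [10208]: NOTICE  [ha] Ban 203.0.113.5'
summary=$(printf '%s\n' "$log" | awk '/ Ban /{count[$NF]++} END{for (ip in count) print count[ip], ip}' | sort -rn)
echo "$summary"
```

The `/ Ban /` pattern (with surrounding spaces) deliberately skips the matching Unban lines.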
Now that fail2ban is working it can be enabled to start at boot time. We can also raise the bantime; I'm using 8 hours, which should deter people from trying again:
# sed -i 's/bantime = 30/bantime = 28800/g' /etc/fail2ban/jail.d/ha.conf
# systemctl enable fail2ban
# systemctl restart fail2ban
A final note, if you need to unban an IP it can be done with fail2ban-client:
# fail2ban-client set JAILNAME unbanip IPADDRESS
eg:
# fail2ban-client set sshd unbanip 1.xxx.xxx.158

Fail2ban on CentOS 7

I have an internet-facing ssh server that I use to get into my network when away from home. The ssh server is running on port 22, but rather than opening port 22 to the internet I have port 2222 on my firewall forwarded to port 22 on the internal ssh server. This helps to limit the number of brute force ssh attempts I get, but they still happen on a regular basis.

I originally wrote a script, run from cron every minute, which used the lastb command and generated an email with whois info about the IP address attempting the brute force. I then grabbed the IP addresses from those emails and added them to a firewall rule to block them from connecting to my network. Unfortunately for me (in a time zone sense) I live in Australia; most of these attacks happen while I'm sleeping, and I can wake up to 100+ emails alerting me of brute force attempts.

Setting up fail2ban seemed like a good option to help slow down these attacks while I'm sleeping, and it turns out the email alerts can contain whois data, making my script redundant!

Fail2ban is not in the official CentOS repos but is included in EPEL (Extra Packages for Enterprise Linux). Enabling EPEL is trivial, so let's start with that:
$ sudo su -
# yum install epel-release
# yum install -y fail2ban
That's it, fail2ban is installed, but a basic config needs to be created before starting it. Rather than editing /etc/fail2ban/jail.conf, it is safer to create a new file in /etc/fail2ban/jail.d/ in case future fail2ban updates replace /etc/fail2ban/jail.conf:
# nano /etc/fail2ban/jail.d/local.conf
This is a basic configuration to get fail2ban up and running with a jail for ssh:
[DEFAULT]
# Short bantime for testing; raised to 8 hours once everything works
bantime = 30

# Email config
sender = email@address.com
destemail = email@address.com

# Action "%(action_mwl)s" will ban the IP and send an email notification including whois data and log entries.
action = %(action_mwl)s

# Configure fail2ban to use firewalld for banning an IP address
banaction = firewallcmd-ipset

[sshd]
enabled = true
At this point we can start fail2ban:
# systemctl start fail2ban
Confirm that it's running:
# systemctl status fail2ban
And lastly check that there are active jails:
# fail2ban-client status
Status
|- Number of jail: 1
`- Jail list: sshd
At this point you can test whether fail2ban is working. Fail2ban logs to /var/log/fail2ban.log; tailing this file and then attempting a login from an external source with an invalid username and password will result in log entries showing what fail2ban is doing:
# tail -f /var/log/fail2ban.log
2018-08-29 12:14:21,852 fail2ban.server         [5587]: INFO    Changed logging target to /var/log/fail2ban.log for Fail2ban v0.9.7
2018-08-29 12:14:21,853 fail2ban.database       [5587]: INFO    Connected to fail2ban persistent database '/var/lib/fail2ban/fail2ban.sqlite3'
2018-08-29 12:14:21,856 fail2ban.jail           [5587]: INFO    Creating new jail 'sshd'
2018-08-29 12:14:21,887 fail2ban.jail           [5587]: INFO    Jail 'sshd' uses systemd {}
2018-08-29 12:14:21,916 fail2ban.jail           [5587]: INFO    Initiated 'systemd' backend
2018-08-29 12:14:21,918 fail2ban.filter         [5587]: INFO    Set maxRetry = 5
2018-08-29 12:14:21,919 fail2ban.filter         [5587]: INFO    Set jail log file encoding to UTF-8
2018-08-29 12:14:21,920 fail2ban.actions        [5587]: INFO    Set banTime = 30
2018-08-29 12:14:21,921 fail2ban.filter         [5587]: INFO    Set findtime = 600
2018-08-29 12:14:21,921 fail2ban.filter         [5587]: INFO    Set maxlines = 10
2018-08-29 12:14:22,112 fail2ban.filtersystemd  [5587]: INFO    Added journal match for: '_SYSTEMD_UNIT=sshd.service + _COMM=sshd'
2018-08-29 12:14:22,142 fail2ban.jail           [5587]: INFO    Jail 'sshd' started
2018-08-29 12:14:22,206 fail2ban.filter         [5587]: INFO    [sshd] Found 1.xxx.xxx.158
2018-08-29 12:14:22,209 fail2ban.filter         [5587]: INFO    [sshd] Found 1.xxx.xxx.158
2018-08-29 12:14:22,211 fail2ban.filter         [5587]: INFO    [sshd] Found 1.xxx.xxx.158
2018-08-29 12:14:22,214 fail2ban.filter         [5587]: INFO    [sshd] Found 1.xxx.xxx.158
2018-08-29 12:14:22,216 fail2ban.filter         [5587]: INFO    [sshd] Found 1.xxx.xxx.158
2018-08-29 12:14:22,219 fail2ban.filter         [5587]: INFO    [sshd] Found 1.xxx.xxx.158
2018-08-29 12:14:22,931 fail2ban.actions        [5587]: NOTICE  [sshd] Ban 1.xxx.xxx.158
2018-08-29 12:14:53,479 fail2ban.actions        [5587]: NOTICE  [sshd] Unban 1.xxx.xxx.158
I attempted sshing in from my phone with a bogus username/password, and as can be seen in the logs it results in a 30 second ban with corresponding email alerts.

Now that fail2ban is working it can be enabled to start at boot time. We can also raise the bantime; I'm going to go with 8 hours so that I have enough time to get IPs added to the block list:
# sed -i 's/bantime = 30/bantime = 28800/g' /etc/fail2ban/jail.d/local.conf
# systemctl enable fail2ban
# systemctl restart fail2ban
A final note, if you need to unban an IP it can be done with fail2ban-client:
# fail2ban-client set JAILNAME unbanip IPADDRESS
eg:
# fail2ban-client set sshd unbanip 1.xxx.xxx.158
I still plan on using the emails to block IP addresses at the firewall, but this will keep me from waking up to 150 emails telling me an IP from China is trying to brute force my ssh box!

Dig cheat sheet

In a previous job one of my day-to-day tasks was managing customer public DNS records, and as a result my dig cheat sheet became pretty comprehensive. These are some of the more useful commands I had saved in my notes.

Before I go too far it's worth mentioning that dig will look for the file ${HOME}/.digrc; adding options to this file causes them to be used each time dig is run. For instance, adding "+noall +answer" to ${HOME}/.digrc will cause dig to use those options unless overridden with options such as +all.

Of course there are online options such as Google's Dig Toolbox, a nice simple tool that does what you'd expect:
https://toolbox.googleapps.com/apps/dig/

On with the dig commands...
Return only the answer
$ dig +noall +answer youtube.com
youtube.com.  300 IN A 172.217.167.110
Return only IP address
$ dig +short youtube.com
172.217.167.110
Return only the answer from Google's DNS server
$ dig +noall +answer @8.8.8.8 youtube.com
youtube.com.  299 IN A 172.217.167.110
Reverse lookup
$ dig -x 8.8.8.8
8.8.8.8.in-addr.arpa. 86053 IN PTR google-public-dns-a.google.com.
Query multiple domains
$ dig +noall +answer google.com +noall +answer duckduckgo.com
google.com.  299 IN A 172.217.167.110
duckduckgo.com.  60 IN A 52.62.168.95
duckduckgo.com.  60 IN A 13.55.4.72
duckduckgo.com.  60 IN A 54.206.51.242
Find authoritative name servers for the zone and display SOA records
$ dig +nssearch google.com
SOA ns1.google.com. dns-admin.google.com. 210500962 900 900 1800 60 from server 216.239.34.10 in 184 ms.
SOA ns1.google.com. dns-admin.google.com. 210522041 900 900 1800 60 from server 216.239.36.10 in 184 ms.
SOA ns1.google.com. dns-admin.google.com. 210500962 900 900 1800 60 from server 216.239.38.10 in 218 ms.
SOA ns1.google.com. dns-admin.google.com. 210500962 900 900 1800 60 from server 216.239.32.10 in 221 ms.
Ask Google's DNS server for ANY type of record; ANY can be substituted with A, AAAA, CAA, CNAME, MX, NS, PTR, SOA, SRV or TXT
$ dig +noall +answer ANY @8.8.8.8 youtube.com
youtube.com.  299 IN AAAA 2404:6800:4006:801::200e
youtube.com.  299 IN A 216.58.220.110
youtube.com.  599 IN MX 30 alt2.aspmx.l.google.com.
youtube.com.  21599 IN NS ns2.google.com.
youtube.com.  21599 IN CAA 0 issue "pki.goog"
youtube.com.  3599 IN TXT "google-site-verification=OQz60vR-YapmaVrafWCALpPyA8eKJKssRhfIrzM-DJI"
youtube.com.  3599 IN TXT "v=spf1 include:google.com mx -all"
youtube.com.  3599 IN TXT "facebook-domain-verification=64jdes7le4h7e7lfpi22rijygx58j1"
youtube.com.  59 IN SOA ns1.google.com. dns-admin.google.com. 210522041 900 900 1800 60
youtube.com.  21599 IN NS ns3.google.com.
youtube.com.  599 IN MX 50 alt4.aspmx.l.google.com.
youtube.com.  21599 IN NS ns1.google.com.
youtube.com.  599 IN MX 40 alt3.aspmx.l.google.com.
youtube.com.  599 IN MX 20 alt1.aspmx.l.google.com.
youtube.com.  21599 IN NS ns4.google.com.
youtube.com.  599 IN MX 10 aspmx.l.google.com.
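The records in an ANY answer come back in arbitrary order, so sorting the MX lines by preference makes them easier to read. The sketch below uses a few of the sample records in place of live dig output; against a real lookup the same sort applies, e.g. dig +noall +answer MX youtube.com | sort -k5,5n.

```shell
# Sort MX answers by preference (field 5); sample records stand in for
# live dig output.
mx='youtube.com.  599 IN MX 30 alt2.aspmx.l.google.com.
youtube.com.  599 IN MX 10 aspmx.l.google.com.
youtube.com.  599 IN MX 50 alt4.aspmx.l.google.com.'
sorted=$(printf '%s\n' "$mx" | awk '$4 == "MX"' | sort -k5,5n)
echo "$sorted"
```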
Trace the delegation path from the root name servers for the name being looked up.
$ dig +trace @8.8.8.8 google.com
; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> +trace @8.8.8.8 google.com
; (1 server found)
;; global options: +cmd
.   96073 IN NS a.root-servers.net.
.   96073 IN NS c.root-servers.net.
.   96073 IN NS l.root-servers.net.
.   96073 IN NS f.root-servers.net.
.   96073 IN NS k.root-servers.net.
.   96073 IN NS h.root-servers.net.
.   96073 IN NS j.root-servers.net.
.   96073 IN NS b.root-servers.net.
.   96073 IN NS e.root-servers.net.
.   96073 IN NS g.root-servers.net.
.   96073 IN NS m.root-servers.net.
.   96073 IN NS i.root-servers.net.
.   96073 IN NS d.root-servers.net.
.   96073 IN RRSIG NS 8 0 518400 20180908050000 20180826040000 41656 . N2z1m/ifQYQPjsC3gN7mr0b2hJ8NTIBXvjv8I/S201I5DdS0csMQ2Vg0 tXyLwdZOMaFlezWnFFozHntboA4xzb5DNTXlC1WhdlIqC6Ohdn1BgjDK g/4weK6oRt6EC/XJufmjLFQ9jYauiID3emM34omJajaFE7klisvldJLv 79WQy/0lBYng4Ei/s2iMBBa9yJGiPHmwfank3Ku7bP2kv1GT+InNZYa9 K22SFpwCNq4waPDi1SDrmboAVqEoE9IeQZy3ABft4b4hA/hu+Nos6Ral F4Xsa2xwTZJhj0ryrO8Ds7WQw3zJXAWJtOM83vv9IGwyGYtvbalhIPYN r/hmng==
;; Received 525 bytes from 8.8.8.8#53(8.8.8.8) in 51 ms

com.   172800 IN NS a.gtld-servers.net.
com.   172800 IN NS b.gtld-servers.net.
com.   172800 IN NS c.gtld-servers.net.
com.   172800 IN NS d.gtld-servers.net.
com.   172800 IN NS e.gtld-servers.net.
com.   172800 IN NS f.gtld-servers.net.
com.   172800 IN NS g.gtld-servers.net.
com.   172800 IN NS h.gtld-servers.net.
com.   172800 IN NS i.gtld-servers.net.
com.   172800 IN NS j.gtld-servers.net.
com.   172800 IN NS k.gtld-servers.net.
com.   172800 IN NS l.gtld-servers.net.
com.   172800 IN NS m.gtld-servers.net.
com.   86400 IN DS 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766
com.   86400 IN RRSIG DS 8 1 86400 20180910050000 20180828040000 41656 . iUOMS1sDAQHMjI17fp2vDOm+wT6Z6v/iEeVyQ59m7OVPFzVB1cVTG7cy kDcD1yHmqILhnAiFV/CYg13cZ2XTe0+UEvw0mO7jqaPloc+4zWHf0NGM Ep8veQLjOgSmORUQTaRkPQ24OYI3kpF6+AkNCBfkq9IMdwmziq7HhiSo gJEjW7LrtwkWzaR+jHBGz4zHXoXM7bE4tiDXYJXSPHpLe5KjeFKzBimx QNV+2X6Vx7hz9jvbpjyYZqCLafckDW6++UcaS/veCe/80IpUpLffikM4 RUN6v3irTPgk5pRUdVrsPiHYfDsm/ed0wXdaENZbselhhPagGaWSXitD tVfU/Q==
;; Received 1170 bytes from 193.0.14.129#53(k.root-servers.net) in 67 ms

google.com.  172800 IN NS ns2.google.com.
google.com.  172800 IN NS ns1.google.com.
google.com.  172800 IN NS ns3.google.com.
google.com.  172800 IN NS ns4.google.com.
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN NSEC3 1 1 0 - CK0Q1GIN43N1ARRC9OSM6QPQR81H5M9A NS SOA RRSIG DNSKEY NSEC3PARAM
CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN RRSIG NSEC3 8 2 86400 20180901044508 20180825033508 46475 com. lYz9DxGlAM+QMcHa6AjjWj3UHjFRLGHnJ3oN8UG6iTeoxwvPXMK+l+Tt ZJk3lHD/pYmWk4T4xQe2RdFQl9ccdkbLbunYoJVoApa94GVJ/7Nk74zs rB32keLDIklgdG+hhkfFLn8o1hIAAFtRBjIhQBcL9YiVjGY26yt/zlYw 3P8=
S849LHDDSVU9A9N2FIRO5NKMQB321BEP.com. 86400 IN NSEC3 1 1 0 - S84CEFMDU6ABFSN4V0L2VLLOASCD5IV2 NS DS RRSIG
S849LHDDSVU9A9N2FIRO5NKMQB321BEP.com. 86400 IN RRSIG NSEC3 8 2 86400 20180902050340 20180826035340 46475 com. L1lA4etoBOJnRo3qJmMEmaIUFKCT4kYfF1blJnZqirkPjMUcF98lWqab Tnhler0y9KvqSnEWP/IiOAD6IckKXZQefPVYU5xd25JgdxISaI/DM9Qt h9kIHXXNJXislNDrh1u3tNAgprDb0C4dzulPMWYJVJDeVwOLYiPY9DYZ aVQ=
;; Received 772 bytes from 192.42.93.30#53(g.gtld-servers.net) in 246 ms

google.com.  300 IN A 172.217.167.110
;; Received 44 bytes from 216.239.38.10#53(ns4.google.com) in 222 ms

Monday, August 27, 2018

Video to GIF shell script using ffmpeg

This is an old post that I copied from my old blog.

I like to capture my Xbox One gaming moments to share with friends and have been working on a shell script to quickly convert videos to animated GIFs. I've put together this fairly simple little shell script that takes an input video and uses ffmpeg to convert it to an animated GIF.

The GIF file format is limited to 256 colours, so optimisation needs to be done to ensure a good quality output. The script utilises ffmpeg's palettegen filter to generate a palette of colours, which is applied in a second pass using the bayer dither option. This is what the palette PNG file looks like (upscaled from 16x16):

The resulting animated GIF turns out quite nice. This one was captured from my Xbox One in the Battlefield 1 beta; the file is 480x260 and 4.3MB at 15 frames per second:


The script is available as a gist on my GitHub, or it can be copied and pasted from below. First download the script and set it as executable; the simplest method is wget:
$ wget https://gist.githubusercontent.com/mplinuxgeek/dcbc3a4d0f51f2b445608e3da832ebb5/raw/vid2gif.sh
$ chmod +x vid2gif.sh
To use this script just execute it with the filename of a video appended; the script defaults to a width of 480 pixels and a frame rate of 15fps:
$ ./vid2gif.sh video.mp4
To change the width and fps of the GIF, add -w and -f arguments:
$ ./vid2gif.sh -w 320 -f 10 video.mp4

UPDATE: I've recently done some testing with different bayer dithering levels; results are below. I've also added a dithering argument to the script.

To specify a different dithering level use the -d argument:
$ ./vid2gif.sh -w 320 -f 10 -d 5 video.mp4
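For reference, the two-pass pipeline the script wraps looks roughly like this. It's a sketch based on the pkh.me article the script came from; the commands are printed rather than executed here so the filter graph can be inspected, and the input/output filenames are placeholders.

```shell
# Build the shared filter chain, then print the two ffmpeg passes the
# script would run; video.mp4/palette.png/output.gif are placeholder names.
input=video.mp4; width=480; fps=15; dither=3
filters="fps=${fps},scale=${width}:-1:flags=lanczos"
# Pass 1: scan the video and generate an optimised 256-colour palette
echo ffmpeg -v warning -i "$input" -vf "${filters},palettegen" -y palette.png
# Pass 2: encode the GIF using the palette with bayer dithering
echo ffmpeg -v warning -i "$input" -i palette.png -lavfi "${filters} [x]; [x][1:v] paletteuse=dither=bayer:bayer_scale=${dither}" -y output.gif
```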

Dither  Size    Notes
0       3.91MB  Largest file, worst image quality
1       3.33MB  Noticeable vertical lines
2       3.19MB  Vertical lines still visible but better than 1
3       3.05MB  Vertical lines gone, very good image quality
4       2.90MB  Hard to pick a difference from 3
5       2.82MB  Hard to notice a difference from 4, but some colour banding evident


Bonus: if you want to be able to run the script from anywhere without specifying the full path or ./, simply copy it to /usr/local/bin:
$ sudo cp vid2gif.sh /usr/local/bin/
The script can now be run from any directory:
$ vid2gif.sh
More information on GIF optimisation and bayer dithering can be found here; this was also the basis of my original script:
http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html

Just a warning, keep the input videos short, GIFs can get very large very quickly.

LVM Snapshots

This is a basic overview of how to work with LVM snapshots. I wrote this article to help me remember the procedure and for later reference.

Snapshots provide a point-in-time copy of a volume, allowing you to undertake operations that you may want to roll back later, e.g. testing new software, testing deployment procedures, upgrading a system, etc.

NOTE: To restore a snapshot the volume needs to be reactivated; if the volume is currently mounted this involves unmounting, deactivating, activating and finally mounting the volume again. If a snapshot is taken of the root logical volume, the system will need to be restarted to finish the process, as you can't unmount the root volume.

Setup
This example requires a volume group with some free space; I've set up a CentOS 7 virtual machine with 5GB of free space in the centos volume group. The disk layout on the virtual machine looks like this:
# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   45G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   44G  0 part 
  ├─centos-root 253:0    0 35.1G  0 lvm  /
  └─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  centos lvm2 a--  <44.00g 5.00g
# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   2   0 wz--n- <44.00g 5.00g
# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao---- <35.12g                                                    
  swap centos -wi-ao----  <3.88g                   
To begin let's create a logical volume called datavol1:
# lvcreate -L 1G -n datavol1 centos
  Logical volume "datavol1" created.
Create a filesystem on the new logical volume:
# mkfs.xfs /dev/centos/datavol1
meta-data=/dev/centos/datavol1   isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount the new logical volume:
# mount /dev/centos/datavol1 /mnt/
Create a file in the mount for demonstration purposes:
# touch /mnt/this
Check that the file was created as expected:
# ls -lah /mnt
total 0
drwxr-xr-x.  2 root root  30 Aug  9 22:20 .
dr-xr-xr-x. 17 root root 224 Aug  8 01:05 ..
-rw-r--r--.  1 root root   0 Aug  9 22:20 this
Creating snapshots
Let's create a snapshot of datavol1 using 1GB of free space in the volume group:
# lvcreate -L 1G -s -n snap01 /dev/centos/datavol1
  Logical volume "snap01" created.
1GB was chosen for testing while writing this article; in practice you would use a size that's appropriate for the task. See the note at the bottom about auto extending snapshots.

If more space is required for the snapshot then it can be extended using lvextend:
# lvextend -L +1G /dev/centos/snap01
  Size of logical volume centos/snap01 changed from 1.00 GiB (256 extents) to 2.00 GiB (512 extents).
  Logical volume centos/snap01 successfully resized.
Create a snapshot with 100% of the free space in a volume group
Alternatively, a snapshot can be created using all the available space in the volume group:
# lvcreate -l100%FREE -s -n snap01 /dev/centos/datavol1
Removing snapshots
Now that we have a snapshot, create another file:
# touch /mnt/that
Removing the snapshot retains changes since taking the snapshot:
# lvremove /dev/centos/snap01
Do you really want to remove active logical volume centos/snap01? [y/n]: y
  Logical volume "snap01" successfully removed
List the files in /mnt/, notice they are both still there?
# ls -lah /mnt
total 0
drwxr-xr-x.  2 root root  30 Aug  9 22:20 .
dr-xr-xr-x. 17 root root 224 Aug  8 01:05 ..
-rw-r--r--.  1 root root   0 Aug  9 22:20 that
-rw-r--r--.  1 root root   0 Aug  9 22:20 this
Now let's recreate a snapshot to demonstrate rolling back (merging):
# lvcreate -L 1G -s -n snap01 /dev/centos/datavol1
Create another file to demonstrate the merge process:
# touch /mnt/me
To further demonstrate the process, touch the first file we created to change its modified date:
# touch /mnt/this
List /mnt again to see how it looks now:
# ls -lah /mnt
total 0
drwxr-xr-x.  2 root root  40 Aug  9 22:22 .
dr-xr-xr-x. 17 root root 224 Aug  8 01:05 ..
-rw-r--r--.  1 root root   0 Aug  9 22:22 me
-rw-r--r--.  1 root root   0 Aug  9 22:20 that
-rw-r--r--.  1 root root   0 Aug  9 22:22 this
Merging snapshots
Merging a snapshot will undo all changes made since the snapshot was created, restoring the volume to its pre-snapshot state. Before merging, the snapshot's origin volume should be unmounted and deactivated; if the origin is still open, the merge is simply deferred until the next activation:
# umount /mnt
# lvchange -an /dev/centos/datavol1 
Merge the snapshot to restore to the pre-snapshot state:
# lvconvert --merge /dev/centos/snap01
  Delaying merge since origin is open.
  Merging of snapshot centos/snap01 will occur on next activation of centos/datavol1.
Reactivate and remount the volume:
# lvchange -ay /dev/centos/datavol1 
# mount /dev/centos/datavol1 /mnt/
Do an ls of /mnt/ again; notice that "me" is gone and "this" has reverted to its earlier timestamp?
# ls -lah /mnt/
total 0
drwxr-xr-x.  2 root root  30 Aug  9 22:20 .
dr-xr-xr-x. 17 root root 224 Aug  8 01:05 ..
-rw-r--r--.  1 root root   0 Aug  9 22:20 that
-rw-r--r--.  1 root root   0 Aug  9 22:20 this
We can now remove the datavol1 logical volume:
# umount /mnt
# lvremove /dev/centos/datavol1
Auto Extend Snapshot
Be aware that if a snapshot fills up there is potential for data loss. To prevent this, LVM can be configured to auto extend snapshot volumes by changing values in /etc/lvm/lvm.conf. The default CentOS conf file has snapshot_autoextend_threshold set to 100, which disables the auto extend feature; change it to a value lower than 100.
# grep snapshot_autoextend /etc/lvm/lvm.conf
# Configuration option activation/snapshot_autoextend_threshold.
# Also see snapshot_autoextend_percent.
# snapshot_autoextend_threshold = 70
snapshot_autoextend_threshold = 100
# Configuration option activation/snapshot_autoextend_percent.
# snapshot_autoextend_percent = 20
snapshot_autoextend_percent = 20
After changing the config the lvm monitor service will need to be restarted:
# systemctl restart lvm2-monitor.service
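Between autoextend runs it's worth keeping an eye on how full a snapshot is; lvs reports this in its Data% column (e.g. lvs --noheadings -o lv_name,data_percent centos). The sketch below parses sample output standing in for that command, flagging anything over 80%; the snapshot names and percentages are made up.

```shell
# Warn about snapshots filling up; the sample string stands in for real
# `lvs --noheadings -o lv_name,data_percent` output.
sample='  snap01 12.52
  snap02 91.30'
threshold=80
full=$(printf '%s\n' "$sample" | awk -v t="$threshold" 'NF == 2 && $2 + 0 > t {print $1 " is " $2 "% full"}')
echo "$full"
```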

Friday, August 10, 2018

Installing and configuring Oxidized on CentOS 7

I was recently asked to look at learning a bit about Oxidized for an upcoming project. If you've found this blog you already know what Oxidized is so I won't bother explaining it.

I thought it would be worth documenting what I found as there are a few considerations and issues to deal with on CentOS 7.

First, the version of Ruby available in the official CentOS 7 repositories is Ruby 2.0.0, and Oxidized will fail to install under it, as dependencies of Oxidized require a Ruby version greater than 2.2/2.3.

ERROR: Error installing oxidized-web:
     puma requires Ruby version >= 2.2.

ERROR: Error installing oxidized-web:
     net-telnet requires Ruby version >= 2.3.0.
There are a couple of ways to deal with this: install older versions of puma and net-telnet that work with Ruby 2.0.0, or install a newer version of Ruby via rvm (Ruby Version Manager).

NOTE: The version of rdoc included with Ruby 2.0.0 will throw several errors during documentation parsing; using a newer version of Ruby resolves these errors. Example of the errors:

unable to convert "\x90" from ASCII-8BIT to UTF-8 for lib/oxidized/web/public/fonts/glyphicons-halflings-regular.eot, skipping
unable to convert "\xA1" from ASCII-8BIT to UTF-8 for lib/oxidized/web/public/fonts/glyphicons-halflings-regular.woff, skipping
Install dependencies
At this point the decision needs to be made: either upgrade Ruby or stick with version 2.0.0 from the CentOS repositories.

If using Ruby v2.0.0:
# yum install ruby ruby-devel make cmake which sqlite-devel openssl-devel libssh2-devel gcc libicu-devel gcc-c++
If you've decided to use a newer version of Ruby it will be installed via rvm, so it isn't necessary to install ruby and ruby-devel via yum:
# yum install make cmake which sqlite-devel openssl-devel libssh2-devel gcc libicu-devel gcc-c++
Install older versions of puma and net-telnet gems
Only do this if using Ruby v2.0.0.
Before installing oxidized via gem, install older versions of puma and net-telnet manually.

Install puma
# gem install puma -v 3.11.4
Install net-telnet
# gem install net-telnet -v 0.1.0
Install a newer version of ruby using Ruby Version Manager (rvm)
Skip this if using Ruby v2.0.0.
RVM is a command-line tool which allows you to easily install, manage, and work with multiple Ruby environments, from interpreters to sets of gems.

Install rvm
# curl -sSL https://rvm.io/mpapis.asc | gpg --import -
# curl -L get.rvm.io | bash -s stable
# source /etc/profile.d/rvm.sh
Install Ruby 2.4 via rvm and make it the default interpreter for login shells:
# rvm reload
# rvm requirements run
# rvm install 2.4
# rvm use 2.4 --default
# rvm list
Installing Oxidized as a Ruby "Gem"
RubyGems provides a repository of Ruby libraries; the gem command lets you search for, list, install, and uninstall gems from the RubyGems repository.

The below packages (and their dependencies) will be installed as ruby gems:
oxidized - oxidized core
oxidized-script - oxidized CLI and library
oxidized-web - oxidized web interface and REST API

Install oxidized
# gem install oxidized oxidized-script oxidized-web
Create oxidized user
# useradd oxidized
Configure oxidized
Run oxidized once to create the config dirs; running Oxidized for the first time creates the necessary directory structure and a base configuration file.
# su - oxidized
$ oxidized
Exit Oxidized with Ctrl+C
Device configs directory
This directory stores the configs pulled from devices; it is advisable to keep it on a separate volume, or on an NFS share if the host has limited local disk space.

If using plain file configs, create a directory to store them:
$ mkdir -p /var/lib/oxidized/configs
If using git instead of plain files, create a different directory:
$ mkdir -p /var/lib/oxidized/devices.git
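Note that /var/lib is normally writable only by root, so in practice the directories are best created as root and then handed to the oxidized account. A small sketch of that (the function name is mine, and the chown is skipped if the oxidized user hasn't been created yet):

```shell
# prepare_oxidized_dirs BASE: create both storage directories under BASE
# and, if the oxidized account exists, hand ownership over to it
prepare_oxidized_dirs() {
    mkdir -p "$1/configs" "$1/devices.git"
    if id oxidized >/dev/null 2>&1; then
        chown -R oxidized:oxidized "$1"
    fi
}

# on the real host, run as root:
# prepare_oxidized_dirs /var/lib/oxidized
```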
Edit configuration file
The below covers basic configuration of the web interface, the output location, and an input source.
# su - oxidized
$ nano /home/oxidized/.config/oxidized/config
To allow the web interface to be accessible from computers other than localhost, change the rest entry; if the web interface won't be used, this can be set to false.

Replace:
rest: 127.0.0.1:8888
With:
rest: 0.0.0.0:8888
Add to "source:" section:
  default: csv
  csv:
    file: "/home/oxidized/.config/oxidized/router.db"
    delimiter: !ruby/regexp /:/
    map:
      name: 0
      ip: 1
      model: 2
      username: 3
      password: 4
    vars_map:
      enable: 5
If using plain files add the below to the "output:" section:
  file:
    directory: "/var/lib/oxidized/configs"
Alternatively (and recommended), Oxidized can use a local git repository, providing version control:
  default: git
  git:
    user: oxidized
    email: bircoe@gmail.com
    repo: "/var/lib/oxidized/devices.git"
More information on git output: https://github.com/ytti/oxidized/blob/master/docs/Outputs.md#output-git
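Putting those pieces together, the relevant sections of a finished config (csv source, git output) nest under the top-level keys like this; paths are as created above, while the user/email values are illustrative and everything else from the generated base config should be left in place:

```yaml
rest: 0.0.0.0:8888
source:
  default: csv
  csv:
    file: "/home/oxidized/.config/oxidized/router.db"
    delimiter: !ruby/regexp /:/
    map:
      name: 0
      ip: 1
      model: 2
      username: 3
      password: 4
    vars_map:
      enable: 5
output:
  default: git
  git:
    user: oxidized
    email: oxidized@example.com
    repo: "/var/lib/oxidized/devices.git"
```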

Now we need to create a database of devices
$ nano /home/oxidized/.config/oxidized/router.db
Each line of the db uses the format:
Hostname:IP:OS:username:password
Example:
Heimdall:192.168.1.1:opnsense:username:password
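Each colon-separated field lines up with the map indices in the csv source section of the config. Oxidized does this parsing itself; the sketch below just illustrates how a line splits (the hostname and credentials are made up):

```shell
# split one router.db line on ':' into the fields the csv source maps
# (name=0, ip=1, model=2, username=3, password=4, enable=5 if present)
parse_routerdb_line() {
    IFS=: read -r name ip model username password enable <<EOF
$1
EOF
    echo "name=$name ip=$ip model=$model username=$username"
}

parse_routerdb_line "Heimdall:192.168.1.1:opnsense:backup:s3cret"
```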
More information on CSV Source:
https://github.com/ytti/oxidized/blob/master/docs/Sources.md#source-csv

A list of supported OSes can be found at the below link:
https://github.com/ytti/oxidized/blob/master/docs/Supported-OS-Types.md

Testing Oxidized
At this point Oxidized should start successfully. It can be tested by starting it manually; an example of a successful start:
# su - oxidized
$ oxidized
I, [2018-08-08T19:28:12.697453 #876]  INFO -- : Oxidized starting, running as pid 876
I, [2018-08-08T19:28:12.698238 #876]  INFO -- : lib/oxidized/nodes.rb: Loading nodes
I, [2018-08-08T19:28:12.772161 #876]  INFO -- : lib/oxidized/nodes.rb: Loaded 1 nodes
Puma starting in single mode...
* Version 3.12.0 (ruby 2.4.4-p296), codename: Llamas in Pajamas
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://0.0.0.0:8888
Use Ctrl-C to stop
I, [2018-08-08T19:28:14.165742 #876]  INFO -- : Configuration updated for /192.168.1.1
Configure auto start
The Oxidized gem provides a prebuilt service file for systemd. Pay attention to the path: it changes with the Ruby version and the version of Oxidized.
# cp /usr/local/rvm/gems/ruby-2.4.4/gems/oxidized-0.24.0/extra/oxidized.service /lib/systemd/system/
Copy the wrapper script to the location specified in the oxidized.service file
# cp /usr/local/rvm/gems/ruby-2.4.4/wrappers/oxidized /usr/local/bin/oxidized
Enable Oxidized to start at boot
# systemctl enable oxidized
Start Oxidized
# systemctl start oxidized
Confirm that Oxidized has started correctly
# systemctl status oxidized
Troubleshooting
Run oxidized manually to check for error messages.
# su - oxidized
$ oxidized

I, [2018-08-08T19:23:36.288017 #32734] INFO -- : Oxidized starting, running as pid 32734
F, [2018-08-08T19:23:36.291134 #32734] FATAL -- : Oxidized crashed, crashfile written in /home/oxidized/.config/oxidized/crash
no source csv config, edit ~/.config/oxidized/config
In the above example there is no source configured in the config file; the crashfile referenced in the FATAL line (/home/oxidized/.config/oxidized/crash) contains a full backtrace worth reviewing. Rectify any errors until Oxidized starts without them.

I hope someone finds this helpful, please leave a comment if I helped you or you have a suggestion.
