Helpful git tips

While chatting with a colleague about some git tricks this week I discovered that not everyone was aware you can change the bash prompt to show git status, such as the current branch, and whether you’re in the middle of a merge/am/bisect etc. I’ve had the pieces in my .bashrc for so long I had literally got to the point of assuming it was functionality that everyone has enabled.

The following snippet is what I have in my ~/.bashrc:

# git branch display
source /usr/share/git-core/contrib/completion/git-prompt.sh
export GIT_PS1_SHOWDIRTYSTATE=true
export GIT_PS1_SHOWUNTRACKEDFILES=true
export PS1='[\[\e[0;32m\]\u\[\e[0m\]@\[\e[0;35m\]\h\[\e[0m\] \W\[\e[0;33m\]$(__git_ps1 " (%s)")\[\e[0m\]]\[\e[0;32m\]\$ \[\e[0m\]'

And with that you get a more useful prompt, with added colours, for all git repositories. It looks like the example below, in this case in the middle of a merge:

[peter@localhost linux (master *+|MERGING)]$
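
One caveat: the path to git-prompt.sh above is where Fedora ships it; other distributions put it elsewhere. If you share your .bashrc across machines, a guarded source along the lines of this sketch (assuming the Fedora path) avoids errors on systems where the file is missing:

# only source the git prompt helper if it exists on this system
if [ -f /usr/share/git-core/contrib/completion/git-prompt.sh ]; then
    source /usr/share/git-core/contrib/completion/git-prompt.sh
fi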

Three ways to speed up dnf on arm devices

I have a large bunch of Arm Single Board Computers that I use for a lot of testing. Most of the testing ends up being pretty basic stuff like firmware, kernels, and the various hardware peripherals that people use like storage, network, display and sound output, plus things like sensors and HAT support.

The problem is that these devices often aren’t the fastest in the world, for various reasons, so I want to be able to apply updates to the basic system as quickly as possible to see the results. Over time I’ve worked out that the following three things speed up dnf quite a bit for the sort of testing I do:

  1. Disable modularity:
    sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/fe*mod*
  2. Don’t install weak dependencies:
    echo "install_weak_deps=False" >> /etc/dnf/dnf.conf
  3. Disable dnf makecache. It never seems to be up to date when you need it anyway:
    systemctl disable dnf-makecache; systemctl mask dnf-makecache

You may need to re-do some of these after each major update, as upgrades seem to turn them back on every time.
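
Since they tend to get reset across major upgrades, a small script along these lines (a sketch, run as root, reusing the same commands as above) makes it quick to reapply all three:

#!/bin/bash
# reapply the dnf speed-ups after a major version upgrade (run as root)
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/fe*mod*
# the grep guard just avoids adding the line twice
grep -q '^install_weak_deps=False' /etc/dnf/dnf.conf || \
    echo "install_weak_deps=False" >> /etc/dnf/dnf.conf
systemctl disable dnf-makecache
systemctl mask dnf-makecache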

Increasing a libvirt/KVM virtual machine disk capacity

There’s a bunch of howtos on the internet for increasing the size of a VM’s virtual disk. The best option is of course the very useful libguestfs-tools, but there have been some improvements in tools like sfdisk, so I thought I’d document what I did, for reference, using tools I already had installed.

First shut down the VM. Once it’s shut down you need to work out where the disk is located. As this VM runs on my local machine and just uses a raw disk this is straightforward. You can get the details from the virt-manager GUI or virsh dumpxml VM-Name.
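
For example, virsh can also list a guest’s block devices directly, which is a quick way to find the backing file (VM-Name is a placeholder):

# virsh domblklist VM-Name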

Next up we use qemu-img (it’s installed by default with the libvirt stack) to add the extra space we need. In theory this can be done with the VM online; this is a random test VM so online time doesn’t matter, and of course if the VM matters to you there should be a proper backup done first! The fdisk step isn’t necessary, it just lets you see that the extra space is there.

# qemu-img resize /var/lib/libvirt/images/VM-Name.raw +4G
# fdisk -l /var/lib/libvirt/images/VM-Name.raw
Disk /var/lib/libvirt/images/VM-Name.raw: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe8b201aa

Device                                            Boot   Start     End Sectors Size Id Type
/var/lib/libvirt/images/VM-Name.raw1 *       2048 2099199 2097152   1G 83 Linux
/var/lib/libvirt/images/VM-Name.raw2      2099200 8388607 6289408   3G 83 Linux
#

Now power up the VM and log in as root (or use sudo) for the next bits on the VM. The sfdisk tool has had a bunch of improvements for partitioning over the last few years. If you’ve not used it or looked at it recently I recommend checking the well written man page. Here I’m just expanding the last partition (partition 2) on the disk to the maximum size the disk offers. For all the other possibilities “man sfdisk” is your friend!

# echo ", +" | sfdisk -N 2 /dev/vda --no-reread
# partprobe
# resize2fs /dev/vda2

And with that you should be good to go: df and friends will show you the new space, no reboot needed! The VM here has very basic partitions, no LVM etc., so it’s straightforward; if you have LVM there are lots of docs elsewhere on how to deal with that.

Securing home networks and IoT for family at holiday time

Many people head home to family at some point over the holiday season, whether that be today for Thanksgiving in the US, Christian Christmas at the end of December, or one of the many and varied other holidays. During that time most technical people will be asked to help fix or set up various computer or internet related devices that their less technical family members have acquired or broken since the last time they ventured home. For me it used to be the regular upgrade/replacement of the virus scan and anti-malware software. These days it tends to be patching of phones and tablets and all sorts of other devices.

So what can the average technical person do to help minimise risks to family members, or stop them from being part of a large botnet sometime in the future, without making the technology hard or even impossible for family to use, and to minimise the calls throughout the year?

Router

The first port of call should always be the router. Often these just get stuffed in the corner, on a bookshelf or somewhere out of sight and forgotten. From a security point of view they are the most important device: they are the thing that primarily protects everything else, as they’re the ingress/egress point of the network. So what to do and change on these devices:

  • Upgrade the firmware to the latest supported version, and configure it to auto-upgrade if that’s an option. If the latest available firmware is ancient consider moving to a third party firmware like the LEDE Project or an OpenWRT derivative. Worst case scenario, throw it away and give them a new one as their present.
  • Change the admin password.
  • Change the SSID and set a reasonable password.
  • Ensure that the admin interface isn’t available on the WAN link; do a port scan from outside to check (see the example after this list).
  • Turn off port forwarding and UPnP on the router.
  • Switch it to OpenDNS (208.67.222.222 208.67.220.220), Google Public DNS (8.8.8.8 8.8.4.4), the new Quad9 (9.9.9.9), or even better a combination of them, so if one service goes down or disappears their internet will still work.
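
For the port scan mentioned above, a quick check run from a machine outside the home network might look like the sketch below (nmap needs to be installed, and ROUTER-PUBLIC-IP is a placeholder for the router’s public address):

# check a handful of common admin/service ports from the outside
nmap -Pn -p 22,23,80,443,8080,8443 ROUTER-PUBLIC-IP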

Phones and Tablets

Ensure the phone is set to auto-install new OS/firmware releases, ensure that apps are set to auto-update, and if the provider, such as Google Play, has a malware scan option in their app store, make sure that’s turned on so it’ll clean up any apps that are discovered to be problematic.

TVs, Blu-rays and other Media Players

It’s surprising the number of these devices that have network connections and never get updated. In some cases the network functionality is rarely, if ever, used; I’ve pretty much disconnected all my Blu-ray players from networks, turned off the wireless if they have it, and never had a complaint. Often it’s better to replace old network media devices with ones that are actively maintained such as Google Chromecast, Amazon Fire, Roku etc. It’s also worth checking whether any of these devices can connect via ad-hoc means and disabling that, to limit connections to only those on the standard home network.

Various IoT devices

IoT devices should generally, if at all possible, be isolated on their own network. This is easy if, as part of securing the router above, you moved it to LEDE or something similar, and you can configure that network with a strict deny-by-default policy. Check the existing network for devices that are connected to it. In some cases there may be a device that was connected some time ago, has long been forgotten about and is no longer in use, or whose manufacturer has ceased to exist, making it a compromise waiting to happen masquerading as an expensive paperweight. Devices that are in use might not be using the IoT/network functionality; if so, turn the network off. For those that remain, obviously ensure they’re running the latest firmware, set them to auto-update, and if possible move them to the IoT network. In some cases it may be possible, or better, to replace connected lighting, if it’s some terrible WiFi/Bluetooth globe, with something like the IKEA TRÅDFRI system, as it has reasonable security, is of good quality and is affordable. Also don’t forget to check for things like doorbells, locks, cameras and other such devices.

Conclusion

Securing the router and associated DNS is by far the most important thing to do; it will help mitigate or protect against most of the other problems that loom on the inside. But disconnecting, throwing away or replacing old devices is sometimes the easiest fix too, or else isolating them.

Let me know what else people do, and what I missed.

Configuring HTTP/2 with Apache on Fedora

HTTP/2 is the new version of the well known HTTP protocol, which had sat at the venerable 1.1 since late last century. Version 2 was derived from Google’s SPDY protocol and is a binary protocol, in contrast to the text-based 1.1. It introduces a bunch of improvements including reduced latency, multiplexing, and server push. There are some useful improvements that will be great for things like apps that use WebSockets. The Apache httpd daemon has included complete support for HTTP/2 since the 2.4.17 release, in the form of mod_http2.

First you should configure your site with SSL/TLS; I suggest using Let’s Encrypt/certbot as documented in this Fedora Magazine article.

Then you need to make sure the module is loaded; in Fedora 25 at least, this is enabled in /etc/httpd/conf.modules.d/00-base.conf by default:

LoadModule http2_module modules/mod_http2.so

Then you just need to enable the protocol, either in the general configuration or in the VirtualHost directives for specific sites:

# for a https server
Protocols h2 http/1.1

# for a http server
Protocols h2c http/1.1
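
If you prefer to enable it per site, a VirtualHost stanza might look something like this sketch; the ServerName and certificate paths are placeholders (the paths shown are the usual certbot locations):

<VirtualHost *:443>
    ServerName example.com
    Protocols h2 http/1.1
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>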

Then it’s just a systemctl restart httpd to make the changes take effect.

To check whether you’re serving over HTTP/2 you can use this HTTP/2 testing site, or the OpenSSL client (check for “ALPN protocol: h2” in the output) with the following command:

openssl s_client -alpn h2 -connect HOSTNAME:443
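
Alternatively, a reasonably recent curl can report the negotiated protocol version (assuming your curl was built with HTTP/2 support); this prints “2” if the response came back over HTTP/2:

curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://HOSTNAME/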

Note: HTTP/2 is not currently supported in the httpd shipped in RHEL.

Connect to a wireless network using command line nmcli

I use a lot of minimal installs on various ARM devices. They’re good because they’re quick to download and let you test most of the functionality of the device to ensure it’s working, or quickly test specific functionality, but of course there’s no GUI and hence none of the nice graphical tools that make it easy to quickly connect to a wifi network or do other things.

This is where nmcli comes in handy to quickly do anything you can do with the GUI. To connect to a wireless network I do the following:

Check you can see the wireless NIC and that the radio is enabled (i.e. that “Airplane” mode isn’t on):

# nmcli radio
WIFI-HW  WIFI     WWAN-HW  WWAN    
enabled  enabled  enabled  enabled 
# nmcli device
DEVICE  TYPE      STATE         CONNECTION 
wlan0   wifi      disconnected  --         
eth0    ethernet  unavailable   --         
lo      loopback  unmanaged     --         

Then to actually connect to a wireless AP:

# nmcli device wifi rescan
# nmcli device wifi list
# nmcli device wifi connect SSID-Name --ask

And that should be enough to get you connected. You can list the connections with nmcli connection and use various other options; it’s pretty straightforward.
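
Once the connection profile exists you can also manage it by name, for example (the connection name is whatever NetworkManager created, typically the SSID):

# nmcli connection show
# nmcli connection down SSID-Name
# nmcli connection up SSID-Name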

When creating updates remember to build for rawhide and Fedora 25 (devel)

Whenever we branch for a new release of Fedora, I, and others, end up spending a non-trivial amount of time ensuring that there’s a clean upgrade path for packages. From the moment we branch you need to build new versions and bug fixes of packages for rawhide (currently what will become Fedora 26), for the current stabilising release (what will become Fedora 25), as well as whatever stable releases you need to push the fix for. For rawhide you don’t need to submit it as an update, but for the current release that’s stabilising you do need to submit it as an update, as it won’t just automagically get tagged into the release.

As a packager you should know this, it’s been like this for a VERY LONG TIME! Yet each cycle, from the moment of branching right through to when a new release goes GA, I still end up having to fix packages that “get downgraded” when people upgrade between releases!!

So far this cycle I’ve fixed about 20-odd, with the latest being bash-completion (built but not submitted as an update for F-25) and certmonger (numerous fixes missing from the F-25 and master branches).

The other silly packaging bug I end up having to fix quite a bit is NVR downgrades, where even though it’s a newer package, the way the NVR is handled makes rpm/dnf/yum think the newer package is a lesser version than the current one, and hence your new shiny fix won’t actually make it to end users. I see this a lot where people push a beta/RC package to a devel (F-25/rawhide) release. Just something to be aware of; there are lots of good docs around on the way rpm/dnf/yum handle NEVR upgrades.
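
A quick way to sanity-check the ordering before pushing is rpmdev-vercmp from rpmdevtools; the versions below are made-up examples showing why pre-release builds are normally packaged with a 0.x release field:

# the recommended pre-release style: the final 1.0-1 build sorts higher
rpmdev-vercmp 1.0-0.1.rc1.fc25 1.0-1.fc25
# the broken style: 1.0-1.rc1 sorts higher than the final 1.0-1, so the final build looks like a downgrade
rpmdev-vercmp 1.0-1.rc1.fc25 1.0-1.fc25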

Changing ssh ports on Fedora or RHEL

I always forget the exact commands to change the port ssh runs on (or any default service, in the case of the SELinux bits). It’s nicely simple though!

Edit /etc/ssh/sshd_config to change the port number:

Port 2022

You can add a second Port line if you wish to initially leave it running on port 22 too, in case something goes wrong; obviously don’t forget to remove it once the new port is working!

Then add port 2022 to the ssh port contexts in SELinux:

# semanage port -a -t ssh_port_t -p tcp 2022

You can verify the new setting:

# semanage port -l | grep ssh
ssh_port_t tcp 2022,22

Reload the sshd service to pick up the new config:

# systemctl reload sshd.service

And of course don’t forget to update your firewall to allow the new port through.
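
On a firewalld-based system that could be something along these lines; this opens the new port persistently and then reloads the firewall (assuming the default zone is the one facing the network you connect from):

# firewall-cmd --permanent --add-port=2022/tcp
# firewall-cmd --reload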

network bridge for libvirt host with a single live IP

The default network config for libvirt is simple and works for most basic use cases, but there are a number of use cases where you need a more complex config, like the local bridged config Adam outlined.

I run a number of VMs on a hosted server on the internet and I’ve had adding an IPSEC site-to-site VPN on my ToDo list for some time, but the default network doesn’t make that easy because libvirtd deals with the iptables networking, including the NAT, automagically.

The network config looks like this:

                    *-----*
192.168.100.0/24  --| Hyp |-(eth0)- internet
 (br0) VM net       *-----*

Create non routed network bridge
Initially create a basic network bridge and disable STP (spanning tree protocol). Note we don’t bind it to eth0, which is the public internet facing interface.

nmcli con add type bridge ifname br0
nmcli con modify bridge-br0 bridge.stp no

I then edited the /etc/sysconfig/network-scripts/ifcfg-bridge-br0 file to add the IP network config and adjust a few bits, ending up with the following:

DEVICE=br0
STP=no
TYPE=Bridge
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPADDR=192.168.100.254
NETMASK=255.255.255.0
IPV6INIT=no
IPV6_FAILURE_FATAL=no
NAME=bridge-br0
UUID=
ONBOOT=yes

Once we’ve done that we can bring the bridge online and check the config looks OK:

ifup br0
nmcli c show
nmcli -f bridge con show bridge-br0
ip addr

Now that we have a network bridge with an IP address, you can edit each VM’s configuration to reassign its virtual NICs to the new bridge, adjust the VM’s network config to the new subnet, and assign static IPs to each VM (or configure dhcpd to hand out IPs on the br0 interface). Once that’s done you should be able to ping the gateway (192.168.100.254) and have local network connectivity.

Once you’ve moved everything over you can delete the original libvirtd network config.
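
If that’s the stock network named “default”, removing it might look like the following (check the name with virsh net-list first):

virsh net-list --all
virsh net-destroy default
virsh net-undefine default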

Outbound NATed networking
Using the traditional iptables.service (investigating firewalld is on my todo list) you can add a basic outbound NAT configuration, which restores the last of the missing functionality. The following basic rule set NATs the br0 network out through the public IP on eth0 by masquerading:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state NEW -p tcp -m tcp --dport 22 -j ACCEPT
iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
iptables -A FORWARD -i eth0 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
iptables-save > /etc/sysconfig/iptables
systemctl enable iptables.service
systemctl start iptables.service

With the network now under the complete control of NetworkManager, and an IP firewall/NAT configuration controlled from a single point, it’s now easier to add things like IPSEC connections and IPv6 configuration, both of which are next on the list.

using ssh keys with screen

It always annoyed me that I couldn’t use my ssh key in a screen session. Every now and again I would try to work it out with google and some trial and error. Eventually, with the help of a couple of good bits off the net, I worked out what I think is the easiest way to achieve it consistently.

Firstly the ssh config bits:

Add the following to your ~/.ssh/config file, creating it if you don’t already have one:

host *
  ControlMaster auto
  ControlPath ~/.ssh/master-%r@%h:%p

And create the ~/.ssh/rc file:

#!/bin/bash
if test "$SSH_AUTH_SOCK" ; then
    ln -sfv "$SSH_AUTH_SOCK" ~/.ssh/ssh_auth_sock
fi

And make sure they have the correct permissions for ssh:

chmod 600 ~/.ssh/config ~/.ssh/rc

Finally add the following to your ~/.screenrc file:

setenv SSH_AUTH_SOCK $HOME/.ssh/ssh_auth_sock

I’m not sure it’s the best and most effective way, but it’s nice and simple, and to date it’s been working well for me; I’ve not had issues with it. Any suggestions for improvement, feel free to comment.
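
To check it’s working, reconnect to the host, reattach the screen session, and ask the agent for its keys; if the symlink trick is in place this should list your keys rather than erroring (assuming an agent was forwarded in the first place):

ssh-add -l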