Configuring HTTP/2 with Apache on Fedora

HTTP/2 is the new version of the well known HTTP protocol, which has been at the venerable 1.1 since late last century. Version 2 was derived from Google's SPDY protocol and is a binary protocol, unlike the text-based 1.1. It introduces a bunch of improvements including reduced latency, multiplexing, and server push. There are some useful improvements that will be great for things like apps that use WebSockets. The Apache httpd daemon has included complete support for HTTP/2 since the 2.4.17 release in the form of mod_http2.

First you should configure your site with SSL; I suggest using Let's Encrypt/certbot as documented in this Fedora Magazine article.

Then you need to make sure the module is loaded; in Fedora 25 at least this is enabled by default in /etc/httpd/conf.modules.d/00-base.conf:

LoadModule http2_module modules/mod_http2.so

Then you just need to enable the protocol, either in the general configuration or in the VirtualHost directives of specific sites:

# for a https server
Protocols h2 http/1.1

# for a http server
Protocols h2c http/1.1
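
For example, a complete per-site vhost might look something like this sketch (example.com and the certificate paths are placeholders for your own Let's Encrypt/certbot setup):

<VirtualHost *:443>
    ServerName example.com
    # negotiate HTTP/2 first, fall back to HTTP/1.1 for older clients
    Protocols h2 http/1.1

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>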

Then it’s just a systemctl restart httpd to make the changes take effect.

To check whether you're serving over HTTP/2 you can use this HTTP/2 testing site, or the OpenSSL client with the following command (check for "ALPN protocol: h2" in the output):

openssl s_client -alpn h2 -connect HOSTNAME:443
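
You can also check with curl (assuming your curl build has HTTP/2 support, which Fedora's does); if negotiation succeeds the response status line will start with "HTTP/2":

curl -sI --http2 https://HOSTNAME/ | head -n 1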

Note: HTTP/2 is not currently supported in the httpd shipped in RHEL.

Getting started with Zephyr on Fedora

So while Fedora is great for a lot of IoT use cases, it can't be used everywhere, for example on tiny microcontrollers such as the ARM Cortex-M series or Intel Quark. That doesn't mean Fedora doesn't make a fantastic developer platform for working with these devices.

I have a handful of Zephyr-capable devices (BBC micro:bit, NXP FRDM-K64F, 96Boards Carbon, TI CC3200 LaunchPad), so how can you get a build environment up and running so you can start doing real development as quickly as possible?

In testing this I used a Digital Ocean cloud instance as a build host. Wherever you choose to build, make sure you have at least 2GB of RAM available; from my experience that's the minimum needed to build a Zephyr image.

From there we diverge a little from the upstream notes by installing the Fedora ARM cross compiler (I've only tested ARM, so I'm not sure of the state of other targets) and developer tools:

sudo dnf install git-core gcc gcc-arm-linux-gnu glibc-static libstdc++-static make dfu-util dtc python3-PyYAML

Next up we clone the upstream Zephyr git repository:

git clone https://gerrit.zephyrproject.org/r/zephyr zephyr-project

If we want to use a particular stable branch we switch to it now. I'm using the latest stable release branch:

cd zephyr-project; git checkout v1.7-branch

Set up the cross compiler variables:

export GCCARMEMB_TOOLCHAIN_PATH="/usr"
source zephyr-env.sh
cd $ZEPHYR_BASE/samples/hello_world

Select and build our target:

make CROSS_COMPILE="/usr/bin/arm-linux-gnu-" DTC=/usr/bin/dtc BOARD=96b_carbon
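
The same invocation should work for the other ARM boards mentioned above by swapping the BOARD value, for example the FRDM-K64F (frdm_k64f is the Zephyr board name; note the flashing steps below are specific to the Carbon):

make CROSS_COMPILE="/usr/bin/arm-linux-gnu-" DTC=/usr/bin/dtc BOARD=frdm_k64f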

If we're developing on our local machine we can now flash the new build straight to the device. To do this we connect a micro USB cable to the USB OTG port on the Carbon and to your computer. The board should power on. Force the board into DFU mode by keeping the BOOT0 switch pressed while pressing and releasing the RST switch.

Confirm DFU can see the device:

$ sudo dfu-util -l
dfu-util 0.9

Copyright 2005-2009 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2016 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to http://sourceforge.net/p/dfu-util/tickets/

Found DFU: [0483:df11] ver=2200, devnum=8, cfg=1, intf=0, path="2-1", alt=3, name="@Device Feature/0xFFFF0000/01*004 e", serial="123456789"
Found DFU: [0483:df11] ver=2200, devnum=8, cfg=1, intf=0, path="2-1", alt=2, name="@OTP Memory /0x1FFF7800/01*512 e,01*016 e", serial="123456789"
Found DFU: [0483:df11] ver=2200, devnum=8, cfg=1, intf=0, path="2-1", alt=1, name="@Option Bytes  /0x1FFFC000/01*016 e", serial="123456789"
Found DFU: [0483:df11] ver=2200, devnum=8, cfg=1, intf=0, path="2-1", alt=0, name="@Internal Flash  /0x08000000/04*016Kg,01*064Kg,03*128Kg", serial="123456789"

Flash our build onto the device:

sudo dfu-util -d [0483:df11] -a 0 -D outdir/96b_carbon/zephyr.bin -s 0x08000000

Now connect another micro USB cable to the UART port and run a console:

sudo screen /dev/ttyUSB0 115200

Hit the reset button and you should see the following output:

***** BOOTING ZEPHYR OS v1.7.1 - BUILD: Jun  6 2017 14:07:24 *****
Hello World! arm

Now that we have a basic development environment set up and know we can build, flash and run a release on the 96Boards Carbon, next time we can do something more advanced 😉

Update (2017-06-13): Minor updates to dependency installs and make command

WiFi on Raspberry Pi 3 for Fedora 26 Alpha

So I managed to land just about everything needed for the WiFi on the Raspberry Pi 3 for Fedora 26 Alpha (around 4.11 rc3). There's one thing missing, because we can't currently redistribute it, but it's straightforward for the end user to do themselves once they've done the initial setup:

sudo curl https://raw.githubusercontent.com/RPi-Distro/firmware-nonfree/master/brcm80211/brcm/brcmfmac43430-sdio.txt -o /lib/firmware/brcm/brcmfmac43430-sdio.txt

You can also do it when you're flashing the image if you mount the root filesystem, but the above is likely easier. It's been surprisingly stable in my testing.
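
A quick sanity check after a reboot (just a rough sketch, the exact dmesg lines will vary) is to confirm the brcmfmac driver found its firmware and that NetworkManager can see the interface:

dmesg | grep brcmfmac
nmcli device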

Before you all ask, at the moment I don't plan on pushing this to earlier Fedora releases, as the upgrade path is not trivial. I will also soon publish more details of some of the other new features coming to Fedora 26 for the Raspberry Pi, but I thought you'd all like the WiFi details now. The wiki has also been updated to reflect the status of the WiFi.

PS: No this is not an April Fool’s joke (it’s well past midday in UK).

Updating Raspberry Pi firmware on Fedora

The upstream Raspberry Pi firmware/bootloader gets regular updates and improvements. In Fedora we ship that firmware in a package called bcm283x-firmware. I regularly follow the upstream firmware git repo, and on occasion, when I believe there are reasonable changes that benefit Fedora, I'll prepare a new version and do some brief testing on my devices to make sure it boots and basic functionality hasn't regressed. At that point I'll update the package and send it out to supported releases as an update.

Once the new bcm283x-firmware lands on your Raspberry Pi it doesn't automatically update the firmware though. Why is that, you ask? I don't like to spring surprises on people where they end up with a device that might not boot or that regresses things they care about.

So how do you upgrade the firmware for the Raspberry Pi on Fedora? It's simple! Run the command rpi-firmware-update and it'll update the firmware and U-Boot to the latest versions shipped in the Fedora package. Then you just need to reboot to make it active.

The easiest way to work out which firmware you're currently running is "dmesg | grep raspberrypi-firmware".
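
So the whole cycle, from pulling in a new package to checking what's running afterwards, looks roughly like this:

sudo dnf update bcm283x-firmware
sudo rpi-firmware-update
sudo systemctl reboot
# once it's back up, check which firmware is now running
dmesg | grep raspberrypi-firmware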

I tend to try and push out a new firmware update every month or so but if I see something that’s of interest or that fixes known issues I do it as needed.

Useful commands for manipulating PDFs on Fedora (any Linux really)

So it's not much of a secret that the Red Hat expenses system is truly terrible. Not as well known is that the EMEA accounts team still require what I call "Arts and Crafts sessions" (all receipts attached to bits of paper and scanned as a whole) even though there's no legal requirement for paper receipts to be provided any more in the UK/Ireland/EU!

Anyway, the system regularly routes the emailed PDFs to /dev/null for no apparent reason, and then you have to scratch your head and try to work out what's wrong.

Size: this is the most common issue; basically if the PDF is larger than a few MB it barfs. Thankfully Ghostscript comes to the rescue here.

gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -sOutputFile=new_file.pdf original_file.pdf

The -dPDFSETTINGS setting has a few options:

  • /screen selects low-resolution output and hence the lowest file-size.
  • /ebook selects medium-resolution output with a medium file-size.
  • /printer selects high-resolution output, mainly intended for printing PDFs.
  • /prepress is similar to /printer but gives you the largest files.

Too many pages: Airline bookings are the great ones here; they add pages of ads to a one-page receipt. Two approaches are either to "print" just the page you need, or to open it in oowriter (LibreOffice Writer), delete the extra pages and export as PDF again.
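
If you'd rather stay on the command line, Ghostscript can also pull out a page range; here's a rough sketch that keeps just the first page (adjust -dFirstPage/-dLastPage to suit):

gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dFirstPage=1 -dLastPage=1 -sOutputFile=receipt_only.pdf original_file.pdf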

Multiple PDFs: In theory the system can handle multiple docs. My mileage has varied a LOT here. An easy fix comes from the poppler-utils package:

pdfunite doc-1.pdf doc-2.pdf doc-3.pdf out-doc.pdf

PDF versions: I have found 1.4 to be the most effective here. Ghostscript comes to the rescue again:

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf

Adjust -dCompatibilityLevel to the version you need.

Connect to a wireless network using command line nmcli

I use a lot of minimal installs on various ARM devices. They're good because they're quick to download and you can test most of the functionality of a device to ensure it's working, or quickly test something specific, but of course they don't have a GUI with the nice graphical tools that make it easy to connect to a WiFi network and other things.

This is where nmcli comes in handy to quickly do anything you can do with the GUI. To connect to a wireless network I do:

Check you can see the wireless NIC and that the radio is enabled (basically that "Airplane" mode is off):

# nmcli radio
WIFI-HW  WIFI     WWAN-HW  WWAN    
enabled  enabled  enabled  enabled 
# nmcli device
DEVICE  TYPE      STATE         CONNECTION 
wlan0   wifi      disconnected  --         
eth0    ethernet  unavailable   --         
lo      loopback  unmanaged     --         

Then to actually connect to a wireless AP:

# nmcli device wifi rescan
# nmcli device wifi list
# nmcli device wifi connect SSID-Name password wireless-password

And that should be enough to get you connected. You can list connections with nmcli connection and various other options. It's pretty straightforward. The only complaint I have is that it doesn't prompt for a password if you leave it out, which means the AP password ends up in the command-line history. Not a major issue, given it's quite easy to find where it's stored on the system anyway, but it would be a useful addition.
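
For example, here's a quick sketch of listing and toggling the connection created above (the profile name defaults to the SSID, SSID-Name in this case):

# nmcli connection show
# nmcli connection down SSID-Name
# nmcli connection up SSID-Name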

When creating updates remember to build for rawhide and Fedora 25 (devel)

Whenever we branch for a new release of Fedora I, and others, end up spending a non-trivial amount of time ensuring that there's a clean upgrade path for packages. From the moment we branch you need to build new versions and bug fixes of packages for rawhide (currently what will become Fedora 26), for the current stabilising release (what will become Fedora 25), as well as for whatever stable releases you need to push the fix to. For rawhide you don't need to submit it as an update, but for the current release that's stabilising you do need to submit it as an update, as it won't just automagically get tagged into the release.

As a packager you should know this; it's been like this for a VERY LONG TIME! Yet each cycle, from the moment of branching right through to when a new release goes GA, I still end up having to fix packages that "get downgraded" when people upgrade between releases!!

So far this cycle I've fixed about 20-odd, with the latest being bash-completion (built but not submitted as an update for F-25) and certmonger (numerous fixes missing from the F-25 and master branches).

The other silly packaging bug I end up having to fix quite a bit is NVR downgrades, where even though it's a newer package, the way the NVR is constructed makes rpm/dnf/yum think the newer package is a lesser version than the current one, and hence your new shiny fix won't actually make it to end users. I see this a lot where people push a beta/RC package to a devel (F-25/rawhide) release. Just something to be aware of; there are lots of good docs around the way rpm/dnf/yum handles eNVR upgrades.
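
If you're unsure how rpm will order two versions before you build, rpmdev-vercmp from the rpmdevtools package will tell you which EVR it treats as newer; a quick sketch with made-up versions:

# compare two candidate EVRs; rpmdev-vercmp reports which one rpm considers newer
rpmdev-vercmp 1.0-0.1.rc1.fc25 1.0-1.fc25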

Fedora 24 released on all architectures simultaneously!!

So for the first time ever we’ve released Fedora 24 across both primary and alternate architectures pretty much simultaneously! That’s the three primary architectures, x86_64, ARMv7 and i686, plus the alternate architectures of aarch64, ppc64, ppc64le and s390x. This is the first time we’ve ever released SEVEN architectures on the same day!

From a Release Engineering perspective Fedora 24 has been fairly momentous: we've made the single biggest change to the tooling that composes the release since Fedora Core and Fedora Extras were merged in Fedora 7! The plans for this change had been long discussed and first started to move back in the Fedora 21 cycle with the hope it could be implemented in Fedora 22…. boy were we wrong! But on the flip side, while it's still not perfect, it has massively improved the process for the releases. We now FINALLY have a full compose every single night! No more Test Composes! It means QA can automate tests against any bit of the compose output, and the installer team can see the results of their changes the next day. For the average user this has no real visible impact, but for those much closer to the process it has a hugely positive impact!

From the alternate "secondary" architecture perspective I've ticked off a massive number of the "goal check boxes" that I set for myself when I joined this role almost two years ago.

My big ticket item was "make the release process like primary". Our release process was a lot more manual than "primary" but we also had a lot fewer "features". Well, with this release neither is now true. We're not at 100% of the primary process, but the difference is tiny. We have a full nightly compose now, like primary; where previously we basically had a "newRepo" process, we now have a full installer stack with network and ISO installs produced. This in itself is a massive improvement! To this we add Docker to aarch64 and Power64, cloud to aarch64 (and more automation of cloud for Power64), and initial "tech preview" disk images to aarch64 for Single Board Computers like the Pine64 (I'm sorry, I really did want to nail down this feature, but even I have to sleep!).

Some of the "smaller" tick-box improvements to non-primary arches are that we're now fully 100% Ansible-run for the aarch64 and Power64 infrastructure, and only have a few minor bits to work out for s390x. This means from the back end everything is 100% like primary and orchestrated as such; the hubs all look the same and the experience is consistent. Changes are deployed simultaneously and hence consistently. YAY automation! We've also simplified the way the secondary content gets to the mirrors so it's more consistent and faster!

On the non-x86 architecture front we've got a whole bunch more exciting features planned, and improvements to make, for the Fedora 25 cycle and beyond, some of which will begin happening very soon. Watch this space; the second half of the year looks to me to be just as frantic as the first!!

Getting started with MQTT using the Mosquitto broker on Fedora

MQTT is a lightweight publish/subscribe messaging transport designed for machine-to-machine "Internet of Things" connectivity. It's been used in all sorts of industries, from home automation and the Facebook Messenger mobile app to health care and remote monitoring over satellite links. It's ideal for mobile apps and IoT because of its small footprint, low power usage and small data packets. You can connect to an MQTT broker over standard TCP/IP ports, with or without TLS, or over WebSockets. There are many cloud-based MQTT services such as OpenSensors, CloudMQTT, HiveMQ, AWS IoT or Azure IoT Hub, but where is the fun in that?

There are a few message brokers in Fedora that support MQTT, including RabbitMQ and ActiveMQ (the version in Fedora is old, but it would be nice if someone did update it to the latest version!), but for this I'll be using Mosquitto as it's small and relatively straightforward for home and demo usage.

Initial Install
I've tested this on a Digital Ocean Fedora 23 VM, but here I'll be using a BeagleBone Black with a Fedora 24 Beta minimal image; of course you can run it on any recent Fedora release on any supported device, ARM or otherwise.

After installing Fedora and getting it running on the network, install and configure mosquitto:

  dnf install -y mosquitto screen
  mv /etc/mosquitto/mosquitto.conf.example /etc/mosquitto/mosquitto.conf
  firewall-cmd --permanent --add-port=1883/tcp
  firewall-cmd --permanent --add-port=1883/udp
  firewall-cmd --permanent --add-port=8883/tcp
  firewall-cmd --permanent --add-port=8883/udp
  systemctl enable mosquitto.service
  systemctl start mosquitto.service

The default settings are relatively straightforward: it runs as the mosquitto user, listens on all interfaces, and keeps its config in /etc/mosquitto. In there is a well-documented example config file called mosquitto.conf.example which contains all the default settings; above we renamed it to mosquitto.conf to use as a base config for us to modify. There are details on using the Let's Encrypt service on the mosquitto site. If you're using it on the public internet I'd strongly suggest you configure TLS.
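
As a rough sketch, the relevant mosquitto.conf lines for a TLS listener look something like this (the hostname and certificate paths are placeholders; see the mosquitto and Let's Encrypt docs for the full details, including making the key readable by the mosquitto user):

  listener 8883
  cafile /etc/letsencrypt/live/mqtt.example.com/chain.pem
  certfile /etc/letsencrypt/live/mqtt.example.com/fullchain.pem
  keyfile /etc/letsencrypt/live/mqtt.example.com/privkey.pem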

Testing it works
Now that we have a basic install done, let's make sure it works and get a basic idea of how MQTT works. Included in the mosquitto package are a couple of client utilities useful for testing the MQTT broker's pub/sub functionality. Start up a screen session and in a window run the following command, which subscribes us to the "test/topic" topic on the MQTT broker running on localhost:

  mosquitto_sub -h localhost -t "test/topic" 

Now let's publish some data to that topic. In another screen window run the following command, which will publish a string of text to the topic:

  mosquitto_pub -h localhost -t "test/topic" -m "Hello World!"

You can run multiple subscribers, either from the screen session or remotely from a different host, but remember to change "-h localhost" to the appropriate hostname/IP if you're testing remotely.

MQTT can also use two types of wildcards when subscribing to topic names. So to read all temperature sensors publishing under a common root topic you could use "home/+/temp", which would provide "home/bedroom/temp" and "home/livingroom/temp". The "+" will match one hierarchical level, whereas "#" stands for everything deeper, so "home/garden/#" would provide all sensors from the garden. When using wildcards you'll need to know which topic is reporting, so use mosquitto_sub in verbose mode:

  mosquitto_sub -h localhost -v -t "test/#"
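
To see the wildcard matching in action with the topic names from above, run a subscriber in one screen window and publish from another (a quick sketch; the temperature values are obviously made up):

  # window 1: one subscriber covering every room's temp topic
  mosquitto_sub -h localhost -v -t "home/+/temp"

  # window 2: both of these show up on the single subscriber
  mosquitto_pub -h localhost -t "home/bedroom/temp" -m "21.5"
  mosquitto_pub -h localhost -t "home/livingroom/temp" -m "19.0"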

Quality of Service
One of the killer features of MQTT for low-power IoT sensor networks is that it's a "store and forward" protocol with a couple of options for quality of service (QoS) guarantees. Once the node has registered with the broker, any time it sleeps the broker will queue messages and take care of delivering them to the node as soon as it reconnects. This means the node can sleep as long as needed to preserve battery life and just wake up and connect when it has data to send, or at a predetermined interval, without ever missing a message.

There are three things needed to support QoS: 1) the subscriber needs to be registered with the broker by client ID, 2) the subscriber needs to disable clean-session behavior (the default is enabled), and 3) both the publisher and subscriber need to be using quality-of-service level one or two so the broker knows it needs to store the messages. More details on QoS levels are covered here.

Here's a small demo to play with QoS basics. First connect a subscriber to the broker, supplying an ID, disabling clean session and enabling QoS level 1:

  mosquitto_sub -h localhost -v -t "test/#" -c -q 1 -i "PotPlant"

Then send the following two messages, one each with and without QoS. You should see them both show up on the subscriber:

  mosquitto_pub -h localhost -t "test/topic" -m "foo" -q 1
  mosquitto_pub -h localhost -t "test/topic" -m "bar"

Now stop the subscriber (Ctrl+c) and while it’s stopped send both of the messages again before restarting the subscriber. When it reconnects you should just receive the “foo” message because it’s the only one specifying a QoS level.

Further Reading
There's a lot of information about MQTT on the web. The MQTT Essentials page from HiveMQ is a good place to start for further information, and Awesome-MQTT is a curated list of MQTT-related stuff with everything from language bindings to gateways/plugins for integration with various projects, whether they be home audio or enterprise products.

Fedora 24 Alpha for aarch64 and POWER

So Fedora 24 Alpha is out for aarch64 and POWER. Keen followers will note that we were a couple of days behind the primary architectures' Alpha release, which hasn't been the case for the last few Fedora cycles, where we've generally released on the same day.

The primary reason for the delay was the Pungi Refactor. While the pungi 4 change has been massive for the primary architectures, for the secondary architectures it's the single biggest change to our release process EVER! Basically we've thrown the lot out and started again. When I started in release engineering over 18 months ago, the number one goal that was set for me could be summarised as "Be more like primary. Make the whole secondary architecture process as close to primary as possible!" and we've been continuously moving, albeit not as fast as I would have liked, in that general direction. With the arrival of pungi 4 for Fedora 24 we're almost at that end goal in terms of the current way we do secondary architectures.

With Fedora 24 we're also adding a lot more release engineering focused features and functionality to the secondary architectures. We now have full nightly composes on rawhide and branched, whereas previously we'd just produce an "Everything" repo. This allows ongoing continual testing, so it's easier to know when things regress. On PowerPC we've produced qcow2 cloud images to some degree since Fedora 22, but it was a bit of a manual process. These are now fully integrated into the pungi/koji process and, like on primary, produced nightly; they'll be coming to aarch64 very shortly too. In Fedora 24 we've added Docker base images; they're produced nightly on branched and rawhide for PowerPC now, and will be nightly for aarch64 at the same time the qcow2 cloud images arrive. Finally, aarch64 will also soon have disk images, like ARMv7 on primary, to enable us to easily support the new shiny aarch64 Single Board Computers (SBCs) that are _FINALLY_ becoming available for the architecture. For Fedora 24 it'll be a bit of a hack, but with Fedora 25 both ARMv7 and aarch64 will be able to move to a koji-based live-media-creator image build process; I'll outline more of that in another post.

So the pungi refactor has been big for the secondary architectures. It’s required big changes in our infrastructure which is now mostly complete, there’s a few infrastructure cleanups and final changes that are in process, these will be done in the next few weeks in the lead up to Beta. We have a single host left to migrate to ansible (YAY!!) and some final moving around of resources. We’ll be changing the way we sync content out to mirrors too which will close out one of the final deltas of the rel-eng secondary process. Overall the last few weeks have been challenging getting all the bits in place, but by the time we hit Beta it’ll all be complete! The new processes lay the foundations for the secondary architectures to add functionality quicker than ever before, and by being almost identical to primary the “onboarding” of new people to use that process, or end users be able to consume the output of the rel-eng process is easier than ever before and that makes me happy! 🙂