Flock 2017

[Photo: The conference center]

This summer we were in the States visiting my family, which happened to line up with Flock being held in the States this year (granted, on the wrong side of the country, but still well worth the travel). This was the second Flock I’ve attended, and, compared to the last one, it had far more of a focus on action rather than just listening to talks.

Flock 2017 was held at the Hyannis Resort and Conference Center in Cape Cod, Massachusetts. I arrived fairly late in the evening on Monday, August 28th (meeting up on the bus with some friends from last year’s Flock). The sessions started early in the morning on Tuesday and continued until Friday at noon.

There were loads of excellent sessions, but I want to focus on two important new technologies that were a major focus of a number of the sessions at Flock, and that I believe are going to change the way we deliver Fedora in the coming years.

Atomic Host

The Atomic series of sessions was a great introduction to Fedora’s Atomic Host, a project that aims to create a more Android-like OS that starts with a read-only base and layers applications on top using your container flavor of choice. On a server that flavor might be Docker, while on a workstation it would probably be Flatpak. The first session, Atomic Host 101, was led by Dusty Mabe, who did an excellent job of putting together practice material so we could actually do what was being demonstrated during the session. (This material is available online and the workshop can be done at home, so, if you’re at all interested in Atomic Host, I strongly recommend going through it.)

The beauty of Atomic Host is that updates are, for lack of a better word, atomic. Fedora Atomic guarantees that the update process is either completely applied or not at all. The days of half-applied updates on systems suffering from unexpected power losses are over. There’s also verification that the OS you are running is the OS you installed, complete with diff-like comparisons that show you what configuration files have been changed. And, as an added bonus, if there are problems with your current update, reverting to the previous one is as easy as a single command.
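
If you haven’t seen rpm-ostree in action, the day-to-day workflow looks something like this (a quick sketch from memory, so check the documentation for the details):

    # Show the booted deployment and any pending or previous ones
    rpm-ostree status

    # Pull and stage the latest update; it becomes active on the next boot
    rpm-ostree upgrade

    # Show which files in /etc differ from the shipped defaults
    ostree admin config-diff

    # Something broke? Boot back into the previous deployment
    rpm-ostree rollback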

Atomic Host has a couple of experimental features that greatly expand its flexibility regarding the read-only status of the base. One of its downsides was that, if you wanted a new or updated system-wide tool, you would have to completely regenerate the base image. Now, installing a new system-wide package is as easy as typing rpm-ostree install <package>, which layers that package on top of the base. Of course, this cool feature did require you to reboot the computer to get access to the new package… until they added the livefs feature, which allows you to access newly installed packages immediately, without a reboot.
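
In practice, layering and live application look something like this (htop is just an example package, and livefs still lived under the experimental "ex" namespace when I saw it demonstrated):

    # Layer an extra package on top of the read-only base
    # (normally takes effect after a reboot)
    rpm-ostree install htop

    # Experimental: apply the newly layered packages to the running system
    rpm-ostree ex livefs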

[Photo: Owen and Patrick discuss Atomic Workstation]

I looked at Atomic Host and rpm-ostree a year or so ago for our school workstations (which should be a perfect fit for the concept), and abandoned it because there was no way to run scripts after the RPMs were installed to the image. I have a number of Ansible plays that must be run to get the workstations in shape, and, as I documented here, there’s no way I’m going back to packaging up my configuration as RPMs. The good news is that it appears that rpm-ostree has grown the ability to run a post-system-install script that can call Ansible, so I think I’m going to give this another shot. Anything would be better than the home-grown scripts I’m currently using.
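
I haven’t tried the new hook yet, so here is just one possible way of wiring Ansible into an install, not necessarily the rpm-ostree feature mentioned above: a kickstart %post section that runs ansible-pull at the end of installation (the repository URL is made up):

    %post
    # Pull the workstation playbooks and apply them locally
    # (hypothetical repository; adjust to taste)
    ansible-pull -U https://git.example.org/workstation-config.git site.yml
    %end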

Modularity

The second new technology that had a strong showing at Flock was the (relatively) new Modularity initiative. I think the first I heard about Modularity was at last year’s Flock in Poland, where, if I recall correctly, Matthew Miller compared packages to individual Lego pieces and modules to prebuilt Lego kits. The idea behind it sounded cool, but it wasn’t until this Flock that I finally understood how it’s supposed to work.

The key idea behind Modularity is that you can combine a group of packages into a module and release multiple streams of that module in Fedora. So one might have a LibreOffice module with a 5.3 stream, a 5.4 stream and a stable stream. Each stream may have different lifecycle guarantees: the LibreOffice 5.3 stream would be updated until the last 5.3 stable release, while the 5.4 stream would go all the way to the last 5.4 stable release. The stable stream might track LibreOffice 5.3 until 5.4.0 comes out and then switch. The key limitation of streams is that, while Fedora might make multiple streams available, you can only have one stream of a given module installed on your system at any given time. Streams can be seen as separate DNF mini-repositories containing packages that are designed to work well together.

Each stream may also have different profiles, which, in our LibreOffice example, might be default and full. The default profile might include Writer, Calc and Impress, while full might also include Base and Draw. Individual packages might be added or removed from the stream, so you could install the default profile, and then add LibreOffice Draw or remove LibreOffice Calc. Unlike streams, multiple profiles from the same module can be installed on the same system. In this way, they are most similar to the current package groups we have in DNF.
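
Using the module:stream/profile syntax the DNF tooling was moving toward, the LibreOffice example above might look something like this (the module itself is hypothetical, and remember that only one stream can be installed at a time):

    # See which streams and profiles are available
    dnf module list libreoffice

    # Install the 5.4 stream with the default profile (Writer, Calc, Impress)
    dnf module install libreoffice:5.4/default

    # Profiles, unlike streams, can coexist, so add the full profile later
    dnf module install libreoffice:5.4/full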

At this Flock, there were daily Modularity feedback sessions where we were talked through some simple tasks (install a module, switch to a different stream, add another profile, etc.), and then asked for feedback on the user experience. I found this very effective in building an understanding of how Modularity works, and the Modularity group did an excellent job of improving their code in response to the feedback they received.

I did attend a session on how to build a module, but, unfortunately, because of technical problems and our hotel’s high-quality (cough, cough) internet, they didn’t quite have all the pieces in place in time for us to be able to practice making our own modules. I’d love to make a module for LizardFS, but it’s obvious that there’s still a lot of bootstrapping that has to happen before we can get there. Each library it uses needs to be made into a module, so we’re looking at lots of work before even a reasonably-sized fraction of the packages are available as modules. On the flip side, if done right, Modularity gives us the potential for a lot more flexibility in how we use Fedora.

Other odds and ends

A couple of days before Flock, Kevin Fenzi released a PSA about deltarpms being broken in Fedora 26. I attended the Bodhi Hack session with Randy Barlow and dove right in to try to fix the problem, even though I’d never touched Bodhi before. I came up with a pull request, but was still hitting my head against a problem with our approach when Dennis Gilmore made a small change to the mash configuration that fixed the problem in a far simpler way. I do really appreciate Randy’s help in understanding how Bodhi works, his guidance in pointing out the best way to fix the problem, and his patience with my questions. And I’ve come to appreciate his (and Adam Williamson’s) emphasis on making test cases for his code.

[Photo: Hyannis Beach]

I also had a chat with Patrick Uiterwijk and Kevin Fenzi about the feasibility of using casync for downloading our metadata. The advantage is that casync only downloads the chunks that are actually different, but there are major concerns about how much mirrors will appreciate the file churn inherent in using casync. The reductions in download size definitely make it worth further investigation.
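
The basic casync workflow we were talking about would look roughly like this (the paths are hypothetical; the server publishes an index plus a chunk store, and clients fetch only the chunks they are missing):

    # On the server: chunk the metadata directory and publish an index
    casync make repodata.caidx /srv/repo/repodata/

    # On the client: rebuild the directory, downloading only changed chunks
    casync extract repodata.caidx /var/cache/dnf/repodata/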

All in all, Flock was an excellent place to match faces with names, learn new concepts, meet new friends, and find new ways of contributing back to Fedora. A huge thank you to everyone involved in organizing this conference!

Benchmarking small file performance on distributed filesystems

[Photo: The actual benches]

Update 2018-07-23: There are new benchmarks here

As I mentioned in my last post, I’ve spent the last couple of weeks doing benchmarks on the GlusterFS, CephFS and LizardFS distributed filesystems, focusing on small file performance. I also ran the same tests on NFSv4 to use as a baseline, since most Linux users looking at a distributed filesystem will be moving from NFS.

The benchmark I used was compilebench, which was designed to emulate real-life disk usage by creating a kernel tree, simulating a compile of the tree, reading all the files in the tree, and finally deleting the tree. I chose this benchmark because it does a lot of work with small files, very similar to what most file access looks like in our school. I did modify the benchmark to do only one read rather than the default of three, to match the single creation, compile simulation and deletion performed by each client.
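
For reference, a single-client run with the stock benchmark looks something like this (the mount point is hypothetical, and my modified copy only changes the read count):

    # Create 10 simulated kernel trees under the mount point, simulate
    # a make -j on them, read the results back, and delete them
    compilebench -D /mnt/dfs/test -i 10 --makej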

The benchmarks were run on three i7 servers with 32GB of RAM, connected using a gigabit switch, running CentOS 7. GlusterFS is version 3.8.14, CephFS is version 10.2.9, and LizardFS is version 3.11.2. For GlusterFS, CephFS and LizardFS, the three servers operated as distributed data servers with three replicas per file. I first had one server connect to the distributed filesystem and run the benchmark, giving us the single-client performance. Then, to emulate 30 clients, each server made ten connections to the distributed filesystem and ten copies of the benchmark were run simultaneously on each server.
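
As an example of the setup, the GlusterFS volume was a three-way replica across the servers, created with something like the following (hostnames and brick paths are made up; CephFS and LizardFS were configured for three replicas through their own tools):

    # Three-way replicated volume: every file lives on all three servers
    gluster volume create benchvol replica 3 \
        server1:/bricks/ssd/benchvol \
        server2:/bricks/ssd/benchvol \
        server3:/bricks/ssd/benchvol
    gluster volume start benchvol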

For the NFS server, I had to do things differently because there are apparently some major problems with connecting NFS clients to an NFS server on the same system. For this one, I set up a fourth server that acted solely as an NFS server.

All of the data was stored on XFS partitions on SSDs for speed. After running the benchmarks with one distributed filesystem, it was shut down and its data deleted, so each distributed filesystem had the same disk space available to it.

The NFS server was set up to export its shares async (also for speed). The LizardFS clients used the recommended mount options, while the other clients just used the defaults (I couldn’t find any recommended mount options for GlusterFS or CephFS). CephFS was mounted using the kernel module rather than the FUSE filesystem.
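
The relevant bits of configuration looked roughly like this (paths, hostnames and the secret file are placeholders):

    # /etc/exports on the NFS server: async acknowledges writes before
    # they hit disk, which is faster but less safe
    /srv/export  *(rw,async,no_root_squash)

    # CephFS mounted with the kernel client rather than ceph-fuse
    mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret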

So, first up, let’s look at single-client performance:

[Chart: single-client benchmark results]

Initial creation didn’t really have any surprises, though I was impressed with CephFS’s performance, which came very close to matching the NFS server’s. Compile simulation also didn’t have many surprises, though CephFS seemed to start hitting performance problems here. LizardFS initially surprised me in the read benchmark, though I realized later that the LizardFS client will prioritize a local server if the requested data is on it. I have no idea why NFS was so slow, though; I was expecting NFS reads to be the fastest. LizardFS also did really well with deletions, which didn’t surprise me too much, as LizardFS was designed to make metadata operations very fast. GlusterFS, which did well through the first three benchmarks, ran into trouble with deletions, taking almost ten times longer than LizardFS.

Next, let’s look at multiple-client performance. With these tests, I ran 30 clients simultaneously, and, for the first three tests, summed their speeds to give the total throughput the servers were delivering to the clients. CephFS ran into problems during its test, claiming that it had run out of disk space, even though (at least as far as I could see) it was only using about a quarter of the space on the partition. I went ahead and included the numbers generated before the crash, but I would take them with a grain of salt.
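
The 30-client emulation was just ten concurrent copies of the benchmark per server, along the lines of the sketch below (mount points hypothetical):

    # Run on each of the three servers: ten clients, each with its own
    # mount of the distributed filesystem under test
    for i in $(seq 1 10); do
        compilebench -D /mnt/dfs$i/test -i 10 --makej &
    done
    wait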

Once again, initial creation didn’t have any major surprises, though NFS did really well, giving much better aggregate performance than it did in the earlier single-client test. LizardFS also bettered its single-client speed, while GlusterFS and CephFS were both slower creating files for 30 clients at the same time.

LizardFS started to do very well with the compile benchmark, with an aggregate speed over double that of the other filesystems. LizardFS flew with the read benchmark, though I suspect some of that is due to the client preferring the local data server. GlusterFS managed to beat NFS, while CephFS started running into major trouble.

The delete benchmark seemed to be a continuation of the single-client delete benchmark with LizardFS leading the way, NFS just under five times slower, and GlusterFS over 25 times slower. The CephFS benchmarks had all failed by this point, so there’s no data for it.

I would be happy to re-run these tests if someone has suggestions for optimizations, especially for GlusterFS and CephFS.