sammorrowdrums

Drummer, software engineer and online-learning fanatic.

So, I decided to bolster my security this year and invest in a YubiKey. One of its less obvious features is OpenPGP support; this means that as well as U2F and TOTP authentication, you can use it for all kinds of cryptographic possibilities. Those familiar with smartcards will already know what they can do, and that is the important point: the YubiKey is also an OpenPGP smartcard.

TL;DR

If you're time poor, you might not have time to set up PGP properly, so as a first step consider using Signal and ProtonMail (they are both secure and free). At least then you can be confident you are set up to communicate via encrypted channels.

Your motivation to set up PGP keys will most likely depend on your needs: perhaps you are a journalist, you want to sign your code, or you are just a curious geek. It seems few people today are using PGP-encrypted e-mail, and of those that are, most are probably using services like ProtonMail – which is a good step, but doesn't let you use your own master key without uploading it*, which complicates things if you require the highest level of security.

* It will be stored encrypted with your password though, so they never have access to it. Keeping your certification key off the internet entirely is possibly overkill for most users.

That being said – I think if you are willing to try, the more private communications there are in the wild, the less they stick out – so you help to mitigate mass surveillance operations, and you have secure channels of communication ready, should the need ever arise (which often becomes much more difficult to do safely / legally, if that day comes).

The Basics

PGP allows you to sign, encrypt and authenticate (and certify) basically anything. This covers numerous activities such as:

  • encrypting / decrypting messages such as emails
  • signing git commits & software releases
  • authenticating via SSH
  • proving that all of the above were done by you

To get the full benefit of PGP, you need to keep your master key safe – as if that is compromised, you lose nearly all the benefits of using it in the first place. Obviously the greater your personal need for operations security, the higher the risks of somebody targeting your key. The PGP system was designed to easily protect your information, even from threats like nation states, so much of the advice you will see on PGP is uncompromising – and is focused on how to maximise security.

In practice, few of us are in need of Edward Snowden levels of security – although if your opsec is strong (especially for journalists), you might be more likely to have a high security source trust you with their leaks or tips.

Generating a Master Key

We are already faced with options here, and I think this is what puts most people off right from the start. You really need to use the command line, decide what key length (and algorithm) you are going to use, and decide where you are going to generate your key.

I chose to generate a 4096-bit RSA key.

The main options are:

  • Just using your current machine
    • This is the easiest, but is also the least secure, as a compromised machine could leak your key to the world
  • Directly on SmartCard / YubiKey
    • This is simple and secure (as long as nobody can hack your key)
    • Even you cannot export the master key from the YubiKey, so if it is lost or broken you lose access
  • Using an offline system
    • This involves booting a live operating system from something like a USB drive. You might update packages over the internet or install tools to manage your YubiKey, but you pull the network cable before generating your key, and you shut the machine down without ever reconnecting.
  • Offline system in an air-gapped room
    • This is the Snowden-level option. If you think you are being targeted by the most advanced capabilities, you would literally need to scrub a room (one that functions as a Faraday cage) of all electronics, probably use a write-once medium with no electronic components, like a DVD-R, for the live operating system, and leave your phone outside. You should probably destroy and scrub everything after generating the key, and even then you would need to encrypt your messages in a similarly safe environment if possible, because frankly at this point people are potentially trying to hack you, add keyloggers, and find easier ways of breaking your encryption than actually having to decrypt your messages. You can never 100% guarantee they won't be able to, as the three-letter agencies of the world would not tell you if they could – although the strongest available encryption is the best you've got, and it probably will work.
    • I mostly add this option as a joke, because obviously organisations like the NSA and GCHQ have this sort of practice baked into their offices along with all the physical security, so even if you were to try this at home, without the resources to manage this level of security you would probably fall down somewhere – but we can all dream, right!?

I opted for the offline system, and ran into a few problems, such as my laptop's WiFi card not being supported by the stock Debian live USB, and my other machine (which has cabled LAN) not having USB-C ports (which my YubiKey uses). In the end I bought a USB-C (f) –> USB-A adapter and used the LAN. This worked well, and I was able to get the packages I needed, pull the LAN cable, and then generate my key following these instructions.
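For reference, the core of what such a guide walks you through looks roughly like this – a sketch only, with KEYID standing in for your own key ID:

# Generate the master (certification) key – I chose RSA 4096
gpg --expert --full-generate-key

# Add signing, encryption and authentication subkeys
gpg --expert --edit-key KEYID
# at the gpg> prompt: addkey (once per subkey), then save

# Move the subkeys onto the YubiKey
gpg --edit-key KEYID
# at the gpg> prompt: key 1, keytocard, repeat for each subkey, then save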

The important thing is that the instructions are really precise, and if you get a step wrong you will often need to start from scratch.

I then loaded up my laptop and (having made a USB key with an encrypted partition containing the private keys and revocation certificate, and an unencrypted partition with the public key, as per the guide) tried to load my public key onto my machine so I could start using my YubiKey – which now had the signing, encryption and authentication subkeys loaded onto it. Sadly my public key partition didn't mount, and I had already shut down the offline system, so I had to boot the live disk again, install some packages, pull the LAN cable, load up my encrypted partition (which did mount), and restore my keys – this was the only copy of the master key, revocation certificate and public key, so losing it would have meant starting from scratch. Once I had fixed my issues with the backup USB key, I was able to restore my public key to my laptop and continue.

Backing it up

Now, so far I had neglected to make any backup of my keys other than the encrypted USB stick, and bit-rot is a genuine risk, so I also wanted physical backups. I used a tool called paperkey to print them from my live USB, restoring the keys from the encrypted USB first – and obviously this time I also needed to install printer drivers and things. Technically printers are a risk too, so the ultra-paranoid might prefer to either go without a backup and risk getting locked out of their own keys, or find some more reliable medium. For convenience I would have backed up my master key as a QR code, but the 4096-bit RSA key I generated was too big to fit in one – so reviving the key from paper means OCR-scanning the printout.
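If you want to do the same, the paperkey round trip looks something like this (a sketch – KEYID and the file names are placeholders):

# Reduce the secret key to the minimal data worth printing
gpg --export-secret-key KEYID > secret.gpg
paperkey --secret-key secret.gpg --output to-print.txt

# Later, to restore: combine the printed secrets with the public key
paperkey --pubring public.gpg --secrets typed-or-ocr.txt --output restored-secret.gpg
gpg --import restored-secret.gpg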

Using the key

The first thing I did was submit my public key to a few keyservers which enables other users to trivially discover my public key, which is all they need to send me encrypted messages – and importantly means if I for some reason lose it, I can download it again myself. I then followed some examples of encrypting messages to myself, and decrypting them on the command line – just to try it. That all worked well.
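The commands involved are short enough to show here (a sketch – the key ID, keyserver and e-mail address are placeholders for your own):

# Publish the public key to a keyserver
gpg --keyserver keys.openpgp.org --send-keys KEYID

# Encrypt a message to yourself, then decrypt it again
echo "hello, me" | gpg --armor --encrypt -r you@example.com > message.asc
gpg --decrypt message.asc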

I was initially excited to try and use my keys with ProtonMail, which has full PGP support and is my current e-mail provider, only to discover that the way it currently works would require me to upload my private key – so I'm left with different public keys for my e-mail accounts in ProtonMail. I don't really want to leave the service, but I was frustrated that I couldn't use my own keys with it.

I then also got excited about Social Proofs (which are a mixture between actually verifying ownership of various accounts, and stamp collecting). I tried keybase.io – it worked, and while it did ask me to optionally upload my private key, it was able to do all the “proofs” with me signing the actions on my local machine. The purchase of Keybase by Zoom put me off, however, and various practices like mounting drives and adding stuff to startup made me remove Keybase entirely from my system – unfortunately they just don't seem to have managed to maintain the trust, respect and usage needed for their (originally noble) goals to pan out in the long run. I did like their idea though.

Next I tried a distributed social proof tool called KeyOxide – here are my proofs (should you wish to contact me). KeyOxide is less automated than Keybase, doesn't have file sharing and private messaging, but it also doesn't need you to create an account or upload anything, doesn't need apps, and its proof system works well.

Of course, social proofs and identities matched to your cryptographic identity are not for people flying under the radar – but it is a mistake to think that cryptography is only for those people. I'm happy that I have a way to prove that I am who I say I am online, and therefore to disavow any attempts to impersonate me.

Setting up commit signing

Signing commits is pretty easy, and simply requires that you tell git which key to sign with and to sign all commits in the config. Then you'll need your YubiKey to commit anything, which will also prompt you for a PIN the first time each session (if you don't want that, pull the YubiKey after each commit). Then you just need to push your commits to GitHub (and add your public key to your GitHub account), and hey presto, all your commits show as signed and verified.
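In case it helps, the git side of this is only a couple of config entries – a sketch, with the key ID being a placeholder for your own signing subkey:

# Find the long ID of your signing key
gpg --list-secret-keys --keyid-format=long

# Tell git which key to use, and to sign every commit
git config --global user.signingkey ABCD1234DEADBEEF
git config --global commit.gpgsign true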

SSH

So now you have a YubiKey set up with PGP keys, and therefore you can use it for SSH. You will need to set up gpg-agent and do a little configuration, but then you can view your public key with ssh-add -L; the YubiKey entry will be suffixed with something like cardno:000123456789 – add that to the usual places (~/.ssh/authorized_keys on servers and in your GitHub account), and then you can start using SSH on any machine with gpg-agent simply by carrying your YubiKey around. This saves you from having a private key file stored on your machine, reducing your risk of the key being compromised.
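The configuration I mean is roughly the following (a sketch of the common gpg-agent setup – paths and shell specifics may differ on your system):

# Enable SSH support in gpg-agent
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf

# Point SSH at the agent socket (e.g. in ~/.bashrc), then restart the agent
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpgconf --kill gpg-agent

# With the YubiKey inserted, this prints the public key to add to servers / GitHub
ssh-add -L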

Summing up

This was a lot of work, and transferring all my 2FA to YubiKeys was already quite a lot of effort, but I really wanted to see if it has gotten easier to use PGP. I have found that while the technical instructions are increasingly good, and the hardware keys work very reliably – it is still just too much for the average person. I got my family to move to using Signal, as the combination of e2e encryption, metadata encryption and open source gives me a significant degree of trust for our usage, and the barrier to entry is dramatically lower. Of course, you are then using a centralised service on a smartphone, which means that you are not necessarily safe from law enforcement or nation states who could probably gain access to your phone through other means, such as known exploits, if they had reason to – but you are also safer from bulk data and metadata collection.

I personally will continue to use the subkeys generated from my offline master key to connect via SSH and sign commits – and I'm happy to receive encrypted messages to my public keys (including my ProtonMail ones, which I attach to all my outgoing emails) – but for me this was still more of an experiment to see what it is like to set up, and one that I am uneasy about recommending to anyone who is not very technical or doesn't have a deep personal need. I am glad that e2e encrypted apps have at least gained significant traction, and HTTPS everywhere is a major success. We are at least beginning to see widespread public adoption and usage of encryption technologies – but smartcard stuff is simply not going to be adopted by people who don't feel it is essential for them, and genuinely secure key generation is likely seen as tinfoil-hat stuff by the majority of the public.

As mentioned above all you need to reach me is available here, and I'm happy to receive your attempts at sending secure messages. The Free Software Foundation also has an excellent guide to PGP e-mail, which includes an e-mail bot edward-en@fsf.org that will help you to troubleshoot any problems you might face!

TL;DR

Do you need one? I'd suggest you get two – it's probably better than dealing with identity theft or phishing.

Are they easy to use? Yes, incredibly easy for services that support the simple button press / NFC touch auth (U2F authentication) and allow you to register multiple keys like Facebook and Gmail, for example. There are other auth methods available, but they do have additional steps.

How do they work? They fulfil the 'something you have' criteria of keeping your accounts safe, in a secure enough way that it is likely near-impossible for somebody to impersonate you.

Which one should I get? Two YubiKey 5s with NFC and USB-C (if you have USB-C laptops / computers and a phone that supports NFC, which most Android and iPhone models now do), otherwise possibly the regular USB/NFC model.

Are they worth the money? I think you'd have to ask yourself that after somebody hacked your accounts. They are quite expensive, but you will probably avoid having your whole online identity compromised.

My Motivation

Scared that if I lost access to the Google Authenticator app I risked losing access to several of my accounts, I wanted a better solution. With Google Authenticator, if I brick my phone (which I have done a few times before), the codes it provides are lost for good. I decided it was time to take the plunge and get a YubiKey.

Or rather two YubiKeys...

The first thing to learn about YubiKeys (and other security keys) is that you really need at least 2. You don't want to have only one that provides access to all your accounts in case you break or lose it. There is just too much risk of being locked out of your own accounts (which could be a positive if the risk of somebody with physical access to your stuff gaining access to your key was greater than losing your own access – i.e. probably very few of us). Of course, setting up two keys adds a major constraint because you need to have them both physically present when you register them as a 2FA device, and then you need to be able to get the hot spare when you lose / break the other. Also it is worth considering where to keep the second key, as each option has significant trade-offs.

  • Husband/wife/family member
    • Easy and simple option
    • Might be risky if you both end up losing / breaking device in the same incident
    • Could enable you to allow family to access your accounts should anything happen to you
  • Personal Safe
    • Almost none are actually able to survive house-fires
    • Expensive
    • If you are still protected by password, who are you defending yourself from?
      • Nation States and law enforcement could likely access this anyway.
      • Thieves should still not have the passwords to your accounts, so no loss
      • Cyber criminals (the main group YubiKeys help ward off), hopefully aren't coming into your home
  • Bank safe deposit box
    • Bond movie levels of cool
    • Pricey
    • Increasingly phased out by banks
    • Difficult to access
    • Do your own research on this, but I wouldn't recommend – there are some surprising stories of people still losing all the stuff in them, in spite of the supposed security
  • Car
    • Potentially able to get it away from a situation where your house is burning down
    • Car thief still doesn't have your passwords so probably no great risk

The above examples are just a playful brainstorm, but after the California wildfires many people have seriously questioned their digital backup plans, as some have literally had their house, including safes and even local banks, burn to the ground.

Luckily (for most people), the majority of services will be able to return access to you, resetting your 2FA – upon proving your identity somehow. This account reset/restore option is often just your email address, so it is important that you ensure your main or backup email accounts are well-secured, and often you can have additional second factors like mobile phone numbers. It is probably worth using this, in spite of the lower security of SMS two-factor.

Main Types of Auth

U2F

This is excellent. You get all the benefits of the security key, with a pretty seamless user experience baked into most browsers, phones and computers. You just touch or insert and tap the key. Often you can easily register multiple hardware keys to your account, so setting up your backup key(s) can be done easily.

TOTP

One-time passcodes are the most common form of secure 2FA. Many services support this, and there is a Yubico Authenticator app (and desktop version) that will enable you to store them on your key, and access them by inserting it via USB or touching the key to your phone. The biggest hassle is that for a backup you really need to scan the 2FA setup barcode twice, so you can add the code to your backup key as well as your main key. It is easy to do this with the app, but you have to remember to do it before moving to the next step, as they will never show you that code again. An alternative is to store the setup code in a password manager, but that will (to some extent) weaken your security: if a hacker gets access to your password manager, they will then have both your 2FA and account passwords – so it at least partially reduces the security you get from 'something you have' not being online.
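If you prefer the command line, the same secret can be loaded onto both keys with Yubico's ykman tool – something like the sketch below, where the account name and base32 secret are placeholders (and depending on your ykman version the subcommand is oath add or oath accounts add):

# With the main key plugged in
ykman oath accounts add --touch github BASE32SECRETFROMSETUP
# Swap to the backup key and add the exact same secret again
ykman oath accounts add --touch github BASE32SECRETFROMSETUP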

Local Machine

While more for advanced users / secure workplaces, you can also set up your computer to require your key to unlock, access the admin etc. I must confess I've only done it on Linux, and it was as simple as installing a couple of packages and editing a couple of lines of certain authentication management files (with instructions from a guide Yubico provided online). As with all the above, but with even greater emphasis: make sure to have a backup key added, as losing access to your computer can be an extremely annoying occurrence, that might result in your having to re-install the operating system from scratch.
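For the curious, on Linux the pam_u2f route that Yubico documents looks roughly like this – a sketch for Debian/Ubuntu-style systems; follow Yubico's own guide for your distro, and register the backup key before relying on it:

# Install the PAM module and register both keys
sudo apt install libpam-u2f
mkdir -p ~/.config/Yubico
pamu2fcfg > ~/.config/Yubico/u2f_keys      # main key
pamu2fcfg -n >> ~/.config/Yubico/u2f_keys  # backup key, appended

# Then add a line like this to the relevant file under /etc/pam.d/ (e.g. sudo):
# auth required pam_u2f.so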

If you enable this, you are much less exposed to people in your office / house being able to access your machine while you briefly step away, and even if they have watched you type your password, or recorded it with a keylogger – they need the key and the password, so it at least makes it much more difficult to obtain both parts required to log in to your machine.

A big plus is that you can also mitigate the action movie risk scenario that iris and fingerprint auth can be accessed by cutting out/off those body parts. At least handing over your YubiKey is less painful.

Remote Machine (SSH)

If you don't know what this is, it is probably not useful to you, but it is possible to use a YubiKey as an OpenPGP smartcard and generate PGP keys inside the key (so the master key can never* leak), or you can put your own subkeys on the security key, and then use gpg-agent to enable SSH with it. The benefit of this is that if you put the same PGP keys on both your keys, they can both be used for SSH access (although you need to run a command, shown below, to reset the agent when switching between keys on the same machine), so you at least have a hot SSH spare and can still access your servers.
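The agent reset mentioned above is usually a single command (a sketch – the exact incantation can vary with your GnuPG version):

# Make gpg-agent forget the previously seen card so the other YubiKey is picked up
gpg-connect-agent "scd serialno" "learn --force" /bye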

You need to unlock the key with a PIN, so it is not trivial for somebody to steal the key and gain SSH access – and there are only 3 attempts. You can unlock it with the admin PIN, but if you fail that 3 times too, you have no choice but to fully reset OpenPGP on the card. Going even more technical: if you have an additional SSH hop, it is worth mentioning that SSH agent-forwarding is complex, with various security risks, so I won't cover it here – but it is possible to achieve.

This should now mean that you can set up SSH access to your services on any machine without having to transfer your private key file. You just need to set up gpg-agent.

I'll go more into my struggles with the PGP stuff in an article about encryption, email and YubiKeys soon. It is certainly still a bit of a journey.

Conclusion

I do think that the benefits of YubiKeys outweigh the hassle, and they are a good (albeit expensive) way to reduce your surface area for identity theft and protect your accounts. They are complementary to a good password manager (which, if you don't have one, I'd recommend setting up first) – as what you really want to prevent is one account hack leading to multiple compromised accounts, so re-using the same password anywhere is ill-advised, and the addition of 2FA will then further reduce the risks of mass compromise.

Servers can still be hacked, so security keys don't actually increase the trust you should place in websites and apps that you give your data to, however they are a great defence against increasingly common and sophisticated phishing attacks, and as cyber criminals get more advanced, this is a strong step in limiting the ways that they are able to target you. I like to compare this sort of security with bike locks. You cannot buy a perfect (yet practical) bike lock – but you can have a better lock than other bikes on the street. Most criminals are going for the easiest wins, and simply by making yourself a more difficult target, you greatly reduce the number of people who would/could bother.

Finally, the less technical the person – the greater the risk of phishing and poor passwords – so if you can help someone who you think would likely click on malicious email links set up and use 2FA for the most crucial accounts, you probably should. U2F is genuinely quite easy to use, and simply requires tapping the key against your phone, or shoving it in the USB slot and touching it.

* hopefully

So you've got a big client side React App, and you've decided to use code splitting to reduce its size on initial page load. Should be easy. React has some very accessible documentation regarding their lazy component loading, which works well with WebPack code splitting.

They give a really trivial example:

import React, { Suspense } from 'react';

const OtherComponent = React.lazy(() => import('./OtherComponent'));

function MyComponent() {
  return (
    <div>
      <Suspense fallback={<div>Loading...</div>}>
        <OtherComponent />
      </Suspense>
    </div>
  );
}

Now, this approach works* out of the box, and the React team has been doing some excellent work exploring how the UI and UX around loading can be improved, so I'm very much excited by where this is going.

* But is painfully insufficient for production.

The main gotcha is that it can (and does) fail. The internet can be a fickle beast, especially with mobile connectivity. While the simple React example above does technically provide a mechanism to dynamically load your components, it doesn't fulfill the same contract as actually importing your code directly. Indeed, not only is it possible that a chunk can't load, it's possible it might never load (maybe the user has gone out of service, the server crashed, or you deployed a new version and they are unlucky enough to still be requesting the old one).

These various complexities call for some attempted solutions.

Alerting the user to failure

Luckily the import() function returns a promise, so we could define a catch-all helper function to ensure we handle errors at least a bit.

export const errorLoading = err => {
  // toastr here is whatever toast-notification library your app uses
  toastr.error(
    'An error has occurred',
    'Please refresh the page. Otherwise, it will refresh automatically in 10 seconds.'
  );
  setTimeout(() => window.location.reload(), 10000);
};

So now the initial example would have to add the catch-all:

const OtherComponent = React.lazy(() => import('./OtherComponent').catch(errorLoading));

That's already slightly better than an uncaught exception. Of course there are many approaches to the UX. You could for example use this to trigger an error boundary, or redirect to a server error page (some of the guidance from Google on SEO says that if you redirect to a server error code page, it helps them to understand).

However, it's still a bit too early to give up, surely there is a better way...

Retrying on failure to load

So one of the main things we could do is simply retry. Below is an example where you pass in a function that creates the import promise, with a configurable number of retries and delay before giving up – promiseRetry itself returns a promise that will eventually resolve or reject.

function delayPromise(promiseCreator, delayMs) {
  // Wait delayMs, then run the promise creator
  return new Promise(resolve => {
    setTimeout(resolve, delayMs);
  }).then(promiseCreator);
}

export function promiseRetry(promiseCreator, nTimes = 3, delayMs = 1500) {
  // Attempts the promise up to nTimes in total, waiting delayMs before each retry
  return new Promise((resolve, reject) => {
    let promise = promiseCreator();
    for (let i = 1; i < nTimes; i++) {
      promise = promise.catch(() => delayPromise(promiseCreator, delayMs));
    }
    promise.then(resolve).catch(reject);
  });
}

So with the above, our initial code would look more like this:

const OtherComponent = React.lazy(() => promiseRetry(() => import('./OtherComponent')).catch(errorLoading));

That's a vastly better experience than the initial implementation. I'm using similar code in production, and we've stopped getting errors like Loading chunk 2 failed.

Using with Redux

It's also possible to use this approach with Redux (assuming you have implemented lazy-loadable reducers and sagas, or whatever else you use):

const OtherReduxComponent = lazy(() =>
  promiseRetry(() => Promise.all([import('./OtherReduxComponent/reducer'),  import('./OtherReduxComponent/sagas'), import('./OtherReduxComponent')]))
    .then(([reducer, sagas, component]) => {
      injectReducer('otherReduxComponent', reducer.default);
      injectSagas('otherReduxComponent', sagas.rootSaga);
      return Promise.resolve(component);
    })
    .catch(errorLoading)
);

It would also be possible to retry each of the above promises individually instead of failing if one of them fails, but because successful imports are cached, that won't save much bandwidth – they won't keep making network requests once they have succeeded.

Conclusion

So the above has explored how it is possible to make React lazy loading more resilient, and improve the user experience while delivering a chunked React app. There is still a lot more that can be done – and I feel that React should probably invest some time into documenting in more detail how to do this, and perhaps provide a function that does something like promiseRetry() above. Trying a network request a maximum of one time is always liable to fail eventually, and by tying your app to large numbers of asynchronous calls just to load the UI, you run a high risk of random failure. It's also possible to have two versions of the app in the wild during a deploy, and that case is particularly important to handle.

The best user experience could be either a full reload right away or continued retrying, and it's not clear which approach we should be optimizing for without measuring which situation occurs most frequently. Tuning the number of retries and the delay between them can be the difference between failing often, taking way longer than needed, and finding the sweet spot. Crucially, some situations exist where retrying will always fail – so there has to be a sane limit on retries.

There are also many users who hate code splitting – as it makes the browsing of sites slower (but SEO requiring the fastest possible initial page load makes this problem somewhat intractable with client side React). I have thoughts on this too, but to be honest, I'm unsure that a consensus exists on this subject – and I'll keep those to myself for now.

Ultimately, for people with large web apps who wish to do code splitting – React lazy, Suspense and the fallback UI possibilities have led to a solid mechanism to split code and control the points where the splitting occurs, so that they are in sensible places. If only the provided mechanism were more robust it would be great, but in its current state (as far as I am aware) that robustness is up to you, so thinking you can just swap to lazy() everywhere without problems is naive.

If you have thoughts, comments or feedback, get in touch via info at sammorrowdrums.com – I'm happy to correct anything I may have got wrong, or update the approach if there are better alternatives.

Ruffle Logo and Description

This year AOC went a little differently. I was thinking that after a year of not using Rust, I might like to try again and see if I could improve on last year, but after completing 8 days of problems I felt a bit less “Rusty”...

I had recently seen mention of Ruffle – a new runs-in-browser plugin-free Flash emulator – and the fact that Flash was dying in the new year, and thought I'd like to help the effort. I grew up with places like Newgrounds providing games and animated videos that were free to access, and things like Radiskull & Devil Doll, which frankly I'm still referencing today.

I cloned the repo, began looking through issues on Github and started hacking:

  • First to enable a desktop player file-picker so less technical users would be able to get up and running
  • Then an auto http –> https url conversion option (on by default) to stop CORS issues (and in some cases fix old links where http no longer works).
  • Lastly (for now), I added a warning on web and desktop for unsupported content, and the ability to show warnings for any reason (such as failed link opening)

Hopefully I'll continue to contribute to Ruffle, but I'm so grateful that my foray into Advent of Code led to me becoming proficient enough at Rust to start making open source contributions.

I was also very excited to get my hands dirty with WASM, and discover the possibilities (and challenges) of building a Rust program for the web. I have found the tooling, the compile error messages and the linking with JavaScript easier than expected. Naturally there are some difficulties when it comes to types (especially as you cannot use Rust types in JS, and JS types are not very “rust”) – but certainly it enables kinds of processing that were limited to plugins only in the past (mostly due to speed) – and Flash has many parts like animating vector graphics, live interaction, and some embedded movies and things that make speed very critical.

As 2020 was for most of us a challenging year, I am glad to have finished it with something as optimistic as preserving the legacy online art created with Flash, and with that much of my childhood inspiration.

The Goal

I've been looking for nice ways to publish content that work seamlessly between my local environment and the web. I wanted a way to get off Medium, and ultimately to write with as little distraction as possible. Write.As provides a lot of positives for me, and is a lot easier than maintaining a custom blog.

I think the bottom line for me though, is that blogging in a web browser makes procrastination too easy, and my will is weak. To avoid this I have experimented a little with alternatives, and finally found a setup I actually like.

Trying to go native with Write.As

When I first tried to use Write.As, they did have a native app and CLI tool available on elementary OS, so I attempted to compose markdown using their app and then connect it to a personal blog...

Unfortunately at the time both tools could only support publishing as an anonymous, unknown user. Also, the native app did not (and still doesn't) support Markdown preview – or at least syntax highlighting.

The Solution

Fortunately the CLI now supports publishing to blogs:

writeas publish --font sans -b sammorrowdrums ~/Documents/Blog/blogging-with-quilter.md

This has enabled me to re-visit Quilter – a native markdown editor app (available on elementary OS) that features a simple, distraction-free UI, syntax highlighting of Markdown, and the ability to preview.

Blogging with Quilter

The Future

Finally I am able to blog, saving my files locally first, in a native app, and then publish easily when ready. I know there are plans to enhance the Write.As native app, so I'm not ruling out using it in the future, but at least I have a workflow that can keep me away from the web browser for now.

* I know that I could just copy and paste markdown into the Write.As website – I just don't want to.

* The --font sans is important, because the default is to upload the blog in monospace font, which is a poor choice for normal blogs, although useful for sharing code, and ascii art etc.

I had no plans to try Advent of Code this year, or to learn Rust, but I saw a thread on Hacker News and then I started AoC, and then I saw another comment a couple of days in:

After that there was no turning back, I was committed. I was going to do it all in Rust.

  • First things first, I found out how to install Rust.
  • Set up my text editor
  • Began going through their wonderful book (which I'm now about half way through).
  • Struggled through the first two problems (that I'd already solved in Python).
  • Fought the borrow checker, and began to learn the standard library and type system (efforts ongoing).
  • Started building my code out into modules.
  • Used their integrated testing tooling (which is great!).
  • Learned to output graphics onto the screen – to solve some of the challenges.

It's been almost two crazy weeks of late nights of panic!("{:?}", reason) and struggle, but considering the power of the language I've really grown to enjoy it.

There is excellent documentation, which comes alive as soon as you understand the type system well enough to read it. The compiler errors are great, and in particular things like match statements giving warnings if you don't cover all possibilities are really helpful. Using paradigms like Some(x) / None rather than allowing null values is also fantastic.

I think the only thing that takes a significant effort at first is to understand Lifetimes – how they handle the cleanup of memory without manual memory management (like C++) or garbage collection (like Python/Golang/JS). The payoff is huge in terms of fast execution, and memory safety but there are things that I just could not grok in a day. To really get the most out of the language you have to own it (or maybe borrow it... hmmm).

The Advent of Code itself has been so cool. I have so much love for the creativity, the puzzle creation and interesting references like quines – and for problems that force me to relive my school geometry days, suddenly breaking out the atan2 function to convert vectors to degrees so I can destroy asteroids! There are plenty of hilarious video links and references mixed into the story they weave with each puzzle too.

If you are thinking about using AoC to learn a language – you should! I cannot pretend that it's been easy for me, but it's been great fun. I would certainly consider doing it again and I hope to use a lot more Rust in 2020!

p.s. You should ask me how proud I am of my little IntCode VM. Everyone should have one.

elementary OS Juno

– update -

I have since bought a new Dell XPS Developer Edition laptop and installed elementary OS Juno on it straight away, and they keep making gradual improvements which is great. Still a powerful combo, still happy and would recommend this setup to anyone interested. I use it for everything.

Early Days

I wrote previously about leaving MacOS for elementary OS Loki and again one week in, so I won't repeat myself, but the tl;dr is that I'm a software developer, I am comfortable with the command line, and I originally liked my Mac because it felt like a machine for both work and play – so I wanted to find a Linux machine that felt good for both. elementary OS provided a beautiful environment, good opinionated defaults, HiDPI support, and with a bit of tinkering I had all the latest tools for software development too.

So far I have been able to solve any problems I have faced, and helped others to do the same. I've swapped my kernel for the latest, installed packages from external sources and even with all of that, I've had very few updates that ever required manual intervention. I even get firmware updates from Dell which have been great. There are some software non-options (like Adobe, who could easily release for Linux, but don't), so you do have to be open to trying other tools:

  • For raw photos, Darktable works well as an alternative to Lightroom
  • I paid for Bitwig Studio for music production – it's made by some ex-Ableton guys – and their Linux version is great. It's not cheap, and there are plenty of high-quality open source tools, such as Ardour, that I would also recommend – which is available via the AppCenter.

You don't have to be technical to use elementary OS – it's simple and effective by design – though you might be surprised that you can't minimise apps, and you can't store files on the desktop.

I couldn't agree more with those decisions, but it may take some people time to appreciate them.

For users that are new to Linux: if you pick a computer with good hardware support and stick to apps available within the AppCenter (maybe braving the odd extra .deb file install if you require additional software), I'd be surprised if there were any real issues. Stability and hardware support also keep improving – it's a different world now.

Evolution of elementary OS

I've seen elementary OS go through some big changes in my time using it. They have for example:

  • Launched a successful crowd-funder for their AppCenter to help fund native app development
  • Encouraged an ecosystem of creators to build apps for the elementary OS AppCenter
  • Released the new Juno version of the OS
  • Moved across to GitHub to encourage more community contributions

The rest of the improvements have been more subtle. elementary provides a very consistent experience, and most of their forward progression is an attempt to make that experience smoother, more accessible, more discoverable, or to otherwise build on what they've achieved. I think this is often undervalued (and misunderstood) by developers, and hard to get right too. Things like ensuring icons are aligned evenly on a pixel grid so that they scale smoothly are not everyone's cup of tea – but it's these little details that are there in spades.

The Upgrade

I followed this guide to upgrade to the new Juno release, as I did not wish to re-install everything, and currently there is no official upgrade path between elementary OS releases. I had manually installed a couple of packages that had conflicts with system packages (such as an HP printer tool and some packages related to kernel upgrades), and I needed to fix them before I could complete the installation, but it was relatively painless considering it's not a supported method.

Impressions of Juno

After booting up, they grabbed my attention with the default wallpaper (yet again). My screen really came alive, I appreciated my monitor again, and it reinforced how much I trust the Elementary OS team to get aesthetics right. While trivial, a good start does improve the experience.

Keyboard Shortcut Cheat Sheet

  • The interface and boot time are very snappy
  • With the super key, you can see the list of system shortcuts which I really hope will help new users with feature discovery
  • Apps can't put their icons in the system tray
  • There is a picture-in-picture mode to see part of another screen
  • Still no minimise (switch workspaces instead – faster and simple)
  • Still cannot put icons on the desktop (again – this was always an anti-pattern)

I wrote this in Quilter and published it with Write.As (however, at the time of writing there is no way to publish to a personal account with the Write.As app), so I've had to paste it into the web interface – but I like writing in Markdown, and the distraction-free editor is great.

Using Quilter

Picture-in-picture rocks

There are no apps in the system tray. If you think some apps always do need an icon in the system tray, I'd ask why? I haven't missed a single one. With Slack, for example, the little red dot of distraction is gone. System notifications act as a single source of interruption, and you can control them.

Using a timer with picture-in-picture

If you want to watch an app, there is a keyboard shortcut for picture-in-picture, that allows you to select an area of another screen and see it in a small box on the one you are using. I use this feature to code while watching a timer – and it's really helpful.

AppCenter

The list of categories and curated apps certainly helps to discover some new things, a good example is Vocal which is a simple native Podcast app that just works. Cassidy (one of the eOS team) created a native colour picker app which I use frequently because no matter what software I'm using to view something, sometimes I need to grab a colour for CSS work etc. Little native apps like this help productivity.

AppCenter

I would suggest having a look around the AppCenter yourself, everyone has different interests and needs.

Conclusion

The quality of elementary has improved gradually, and (for me at least) they seem able to improve in a clear direction that enhances the experience – things like improved icon sharpness and interface contrast. They care about details and about providing a consistent experience, and about more visible things like the keyboard shortcuts cheat sheet – they want users to be enabled, while keeping the delicate balance of beauty, speed, accessibility, UX, simplicity and good opinionated decisions.

I do see plenty of Linux user comments about the inflexibility of the UI, as people are used to doing things certain ways, and don't see why they should change, but to them I can only say this:

Some parts of operating system user experience and UI that have evolved and stuck around were mistakes. Breaking habits is hard sometimes, but that is not an excuse to build worse software. elementary OS doesn't let you do some things because they lead people into worse situations in the long run. It's never going to be perfect and please everyone, but it has a style, and that style is predictable and smart. Love it or hate it, it's consistent and simple.

I'm going to stick with it.
