SamMorrowDrums Blog

Drummer, software engineer and online-learning fanatic.

Community is, without a shadow of a doubt, my favourite sitcom in years. Like all the greats it's not without poor episodes (or even seasons), and it probably lasted too many seasons, and yet you're still left wanting more. (And apparently a movie is finally on the way.)

Troy and Abed share a hug

For me the course of the show itself exemplifies the theme it goes to great lengths to explore throughout – the only inevitable thing in life is change. We all have to grow up some time, and it'll suck, but avoiding it sucks even harder. Indeed the anticlimax that is Troy's 21st birthday literally hits you in the face with that. Then the disappointment as beloved characters leave, concluding in a spiral towards the inevitability of death – of the show, of the idea you could stay in Greendale forever, and ultimately of yourself. And yet it's not all depressing. The growth, the trials, the love, friendship, acceptance and community, and the adventures along the way are all part of the picture, and this makes the show tread a line between comedy and deep existential crisis. The thing even more consistent than the laughs and love is the deep underlying existential crisis of your own fleeting existence.

Nothing exemplifies this complex mix of emotions better than Greendale is Where I Belong by composer Ludwig Göransson, the piece of interstitial music that didn't even make it onto the official soundtrack, and yet is enough to bring any true lover of Community to tears in seconds. Somehow it captures everything that the series makes you feel: sentimental, sad, happy, and like a big musical hug all at the same time. I have no idea if it evokes that response in non-fans, but judging by the amazing (and unusually lovely) comments on the extended version on YouTube, it's certainly clear I'm not alone.

I suppose in a way, it's similar to the feeling of putting down a much-loved book – but it's exacerbated by the fact the book has been trying to tell you all along that at some point you're going to have to put it down and move on with life, and that no matter how hard you try, you will never be able to go back, and if you do it'll never be the same.

The show is so good at reminding us to accept everyone's flaws that it teaches us to accept its own too, and it deserves so much credit for getting so many people through the pandemic, and through their lowest ebbs. Somehow it doesn't try to make you feel better, yet it understands exactly how you feel.

My other favourite thing is the constant and enjoyably self-indulgent cinema references, and Abed's self-awareness that he's in a show even though, within the show, he isn't (what fourth wall?)… it's just wall-to-wall meta jokes and I'm always here for that.

Only one thing is true for certain. Dan Harmon is one messed up dude.

Many people have brought out the popcorn to watch the Musk-driven Bird Site meltdown. A fun side-effect is watching those around us who are not usually interested in experimenting with new social media services or messaging apps suddenly show some interest in Mastodon.

What server should I join?

For most, federated social media is such an alien concept, and the idea of a service that is not only federated but largely volunteer-led and without an official app is downright scary. In light of this (and to tickle my own curiosity), I decided to host my own Mastodon instance and suggest my family join that server. I also wanted to create a second one that was more for friends.

I looked around for advice and a starting point, and after deciding that hosted databases and things were beyond my budget, I found the docker definitions inside the Mastodon repo. These let you self-host all the required services on a single machine. This is not the best production setup you can achieve, but it's an easy launching-off point and makes it straightforward to migrate servers/infrastructure at a later stage, should that be necessary.

I also stumbled upon this excellent set of instructions for setting up the docker version, which I highly recommend reviewing.

Mastodon instance screenshot

For those interested, the basic tasks were:

  • buy the domains
  • do some basic setup on a fresh cloud server
  • point the domain DNS records at the server
  • add a data disk, and set up Docker to store files there
  • edit the data volume folders in the docker-compose setup so they live on the data disk
  • add a password to the Postgres DB container
  • set up the configuration file
  • create a second configuration and docker-compose.yml with different exposed ports (as I was hosting a second instance on the same server) – if both instances share the same internal Docker network you will need to rename the services; isolate their networks and you won't have to
  • set up an email server (a hosted service is easier and less likely to go to spam than anything you'd do yourself).
  • get the services running
  • set up nginx to proxy the traffic to the right ports on the machine (rough sketch below)
  • set up TLS with certbot from Let's Encrypt
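
For the last two steps, here is roughly what I mean. This is a minimal sketch only – the domain is hypothetical, the ports are the docker-compose defaults (3000 for web, 4000 for streaming), and the real config in the Mastodon docs adds more headers, asset caching and tuning:

# Rough sketch – domain and ports are placeholders
sudo tee /etc/nginx/sites-available/social.example.com > /dev/null <<'EOF'
server {
    server_name social.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;            # Mastodon web container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /api/v1/streaming {
        proxy_pass http://127.0.0.1:4000;            # streaming container
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/social.example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d social.example.com           # certbot adds the TLS config for you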

At this point, things should be up and running.

The biggest hurdle I encountered was wanting to use my Proton Mail email service, which, for encryption reasons, requires Proton Bridge to send mail via SMTP, and which only listens on localhost in its default configuration. That gave me the problem that the Docker services could not talk to services bound to localhost on the host machine. Ultimately, I had to bind to a different network interface by editing the constant in the Proton Bridge source code and building it from source. This is not ideal, but other tricks (like forwarding traffic with socat) would not have been any better.

I also found some helpful instructions for setting up a service to keep Proton Bridge running.

It ended in success, and I now have two instances that are federating and able to send sign-up confirmation emails and things, and I have new Mastodon moderator/admin responsibilities as well as sysadmin responsibilities to keep everything up-to-date.

Do I recommend this approach? Maybe. If you're able to understand the linked instructions (and the implications of the decisions contained within them) and edit them comfortably to customise the setup, then it's a fun thing to do, but otherwise I'd suggest finding another server or paying for a hosted instance if you can find a willing host.

Keeping your instance safe – both by ensuring you're not federating content that you are uncomfortable with or that is illegal where you live, and by keeping your users and your hosting account secure – is a real burden. There are potential legal ramifications to being a service provider, even as a volunteer hosting a free service for fun.

Either way, it's kind of cool to share a private server with people you know, if nothing else, just because you can!

I have been meaning to write a review of these two fascinating e-writer devices since I received them both at the end of last year, but I kept putting it off because I never felt sure how to conclude such a review.

The tl;dr of it all is that the RM2 is kind of like the Apple of the e-writer world – simple (too much so for many), slick and very much built to a vision. You trade flexibility and power for form factor and a very custom UX, and you do get to join their pretty cool cult of devotees. I think I prefer it.

The Lumi on the other hand, is big, bold and backlit (actually front-lit, but that doesn't alliterate). You get an 8-core device running full Android that is the size of an A4 page; you can read in dark environments (and change the colour temperature to an orange night light), open DRM-protected e-books via apps, use it as an external e-ink monitor and even watch YouTube if you really want...

I am still using both devices. Occasionally wanting to read a book I cannot get DRM-free (yes, I know stripping it is possible), or wanting to read in the dark, means that, as I am lucky enough to own both, I will probably keep using them both.

I also bought my wife an RM2 – and she uses it happily for note taking and reading – but I do obtain e-books for it on her behalf, as the whole DRM-free thing is not something she has much patience for; she previously used a Kindle, so it is just frustrating. I think e-book DRM is not something many end users will appreciate learning about when they can't read a book they want to (even though I hate DRM with a passion – and would rather pay extra to have DRM-free if given the choice). The fact that usage as an e-reader is considered secondary to the writer part does not really improve this – who is going to carry multiple e-ink devices around!?

The writing experience

Both devices are primarily built to replace paper in every workflow – from printing, reading and signing to drawing and journaling – and as such have some texture on their screens and come with a stylus.

The pens / styli / styluses

Firstly, both devices have passive styluses, which is great – you never need to charge them, they don't need to be docked periodically, they don't ever have Bluetooth issues or interference (well, magnets might mess with them, but they'd mess with the devices as well). This is really a must for e-writers.

With the RM2, I paid extra for the Marker Plus (which does seem a big jump in cost just for an on-pen eraser), but I generally like it and have not felt compelled to replace it. The writing experience with the 'marker' feels good to me, and I think they have done well at emulating the experience of using a pencil. A pen-end eraser rather than a push button is a great example of how ReMarkable view the world – they want it to feel like using a real pencil, and even though a push button is more ergonomic (you don't need to turn the stylus upside down to erase), they make you flip it anyway. The default erase mode is also to rub out only the parts your stylus eraser touches (rather than whole lines). The aesthetics of the written content look good, but can be a bit rough around the edges as there isn't any line smoothing – which the Lumi does have. This does mean you get ultra-low latency, and the writing doesn't subtly change shortly after you stop writing, but I'm 50/50 on whether that is a good thing, as I like the smooth look of writing on the Lumi.

The stock pen on the Onyx Max Lumi is so bad that I replaced it immediately with the Lamy Al-Star, which I have found perfect for my use. For such an expensive device, they really should have provided a more premium stylus. The Lumi's default eraser mode (controlled by a button press) is a line-delete mode, which while less natural is also arguably more useful – it saves you having to rub your marker over a potentially large area: just cross a line with the button depressed, and the whole line goes. Both devices support different erasing modes; I just find the defaults give an insight into the designers' mindset. The screen texture is a little smoother than the ReMarkable's but still feels good to me – it has some resistance, unlike an iPad – and I did not apply the stock screen protector, as I value getting the pen as close to the screen as possible.

My writing is not very neat, and I find that for me the ReMarkable calligraphy pen mode is the most forgiving (which surprised me); I feel it also makes my writing look somewhat like a pirate wrote it – which I enjoy. The Lumi also has a calligraphy-type pen mode, but it is not as refined. The neater your hand, the more happily you will be able to write and draw with both devices; both have several pen modes, and you will likely find a good daily driver in one of the settings.

Colour is controversial – because (again) the minimalists/purists at ReMarkable don't think that the device should be capable of drawing virtual colours – so all your writing and drawing is grayscale, including when you mark up documents – there is a highlighter pen that shows as a sort of yellow on exported documents, but for example you cannot use red and green to mark / grade papers on it. The Lumi on the other hand lets you choose from a few base colours with all pens – and I don't find it too much of a stretch of the imagination to use them when I want to. Then again, I usually just use black like 99% of the time.

Reading

The RM2 has just this week begun receiving pinch zoom in the official update (which they roll out slowly and randomly, so my wife has it already, but I don't). So now PDF reading is a bit easier when reviewing larger-format documents. The screen size and form factor are almost always a plus until you are reading scientific or very text-dense PDFs. The previous method was to use the clunky sidebar zoom buttons, which was a significant source of complaint for many users. That wasn't helped by the fact the RM2 sidebar overlays the text, so when margins are too small (or you draw in the whole content area) the sidebar covers your content. The Lumi by contrast resizes content on the screen if you bring up the sidebar (which has occasionally caused issues for me when embedding images in drawings, as they would be slightly off after switching between sidebar and none).

The other feature of the recent update is slightly improved e-book rendering, which has meant better respect for section markings and indents, and, importantly, hyperlinks inside documents are now at least clickable.

The Lumi has always had richer reading support, because even if the native reader lacked some features, there were always the Kindle and Kobo apps etc., and with its faster processing it is much better when you want to change settings. On the ReMarkable my wife wished to change the font of an e-book, the tablet froze, and I ended up deleting and re-adding the book and not risking changing the font again. It is also quite slow at changing font size etc. Luckily, if you read mostly novels you only do that once, so it has not been a significant hurdle for me. Maybe since the update I'll give the RM2 another go at font changing.

The A4 size of the Lumi is a blessing and a curse: you can read books two pages side by side like a real book, it has split screen and the like, and large PDFs are usually at their native print size. But as a result it is a bit heavy (especially in its case) – so while I do frequently read in bed with it, my arms tire after a while.

A feature that I really wish I could have is syncing book position between both devices via Calibre (which I use to manage my e-book library). Perhaps one day that will arrive. The RM2 doesn't have a Calibre plugin, although I was excited to discover that somebody posted a solution yesterday; the Lumi is supported, so you can plug it in via USB and easily sync books to the device that way.

Apps

The RM2 has a desktop app (for Mac and Windows); as I use Linux, that leaves me with the Android app (my wife uses the iPhone app). The apps are simple and mature, but are not optimised for bulk actions, so power users will get frustrated. Luckily, the RM2 being a Linux device itself means I can actually SSH into it, and tools like rsync and third-party software can easily interface with it too.

The Lumi has phone apps and a cloud website that you can use – they are all OK, not quite as smooth as the RM2's, but if you work in other languages they might serve you better: the Lumi is built in China and has first-class support for Chinese, I believe, and you can choose Chinese or US servers for the cloud features.

You can send web articles to the Lumi via their cloud page, or via a third-party Print to Boox Chrome plugin. You can send articles to the RM2 via their Send to ReMarkable Chrome plugin. I have found both devices are quite good at this, and most of the time I have been happy with the results.

As part of the cloud features, both devices will attempt to convert your handwriting to text, but to be honest that has been a gimmick for me. I did play with it, but usually I find that refining and re-drafting are more useful to me than a conversion of my messy handwriting that is always full of errors (your mileage may vary).

Battery

The battery life of the Lumi is hands down the winner, and you can measure it in weeks. The RM2 is probably closer to one to two weeks max, but that is still acceptable to me – and I haven't done things to help, like toggling WiFi, because to be honest, I prefer all features to just work, including ones that require the internet.

Distraction free

The RM2 was sold specifically as a device without distraction, and it excels at this with its often deliberate limitations – it isn't intended as a general purpose device, even though it is ultimately a Linux tablet. No notifications, no apps etc.

The Lumi can do loads: it can play sound and can even read to you via text-to-speech, which is a bonus for accessibility and might be nice for audiobooks – but the first thing I did was put it into do-not-disturb mode. Even if I install Gmail on it, I don't want it to tell me about new mail. I bought it for writing, reading and thinking, and with a few settings changes you can make it pretty close to the RM2 experience. I have not found its open-ended possibilities distracting, and I use basically the same features on both devices the whole time (reading, writing).

Hackability

The RM2 is arguably the winner here, as there are several major projects like Remarkable Hacks; it runs Linux, so you can modify many parts of the system if you are brave enough, and somebody even worked on an alternative open-source OS, Parabola-rM, which frankly I would not recommend most users try, but it could give these devices life beyond the stock system, should the company ever go bust for example.

The Lumi is Android, and while I haven't tried it, I believe you can use adb to sideload and develop Android apps on it, just like any other Android device. This gives a very well-trodden path to producing your own software, and while there are many who would not touch Onyx because of GPL violations, there is at least a clear path for app developers to target the device. I am not sure how possible it is to optimise apps for the e-ink UX, but I am tempted to try at some point.

The ReMarkable requires you to jump through hoops and install custom launchers and things to be able to open apps on it – so it's simultaneously inflexible and yet still hackable.

Conclusion

I really feel that as a daily driver, the take-it-or-leave-it, no-frills but also very aesthetically pleasing RM2 is what I will continue to use. I keep it beside my desk, plan my day with it, and generally use it for writing and reading. It is also the smaller device when travelling. The downside is that without a front-light, things like reading on an aeroplane can be a disappointing struggle.

Given the above, I am somewhat confused as to where the Lumi fits into my life. I still enjoy the large screen for doodling, solving puzzles, reading in the dark and reading PDFs, and I find the screen very pleasing to look at. The fact is I use it daily when I'm reading a book on it (even if I write notes on the RM2 more generally), and then sometimes weeks go by and I won't touch it. I expect to hit the issue of wanting to read a DRM-protected e-book again, so I like the fact the device is there. I might even re-try using it as a daily driver in the future to see if I still feel the same, but then again, I probably won't.

For now, the RM2 is probably my main suggestion if I were to have to choose, and especially if there is someone very technical in the household who would be willing to tweak the system etc. For people who want to replace physical notebooks specifically, this is the best option.

Onyx Boox Max devices are possibly a better bet for less technical users or those who specifically like reading a lot, and the Lumi with its front-light offers a very compelling (if a bit too big) package that I certainly would happily recommend if that's a big plus for you (for example I like reading in bed after my wife has gone to sleep).

As mentioned at the start, I have been struggling to draw this review out of myself, because there are so many factors, and the occasional update (both devices have received them roughly once every 2-3 months so far) can make a massive difference, so I have been waiting to see if anything solidified and clarified my thoughts – but in the end I'm somewhat conflicted. If I had either device on its own, I'd have loved it for what it was, and lived with what it lacked.

If you like tech, taking notes, e-ink, e-books and thinking peacefully while planning, you will love e-writers. If any of the device's issues (which I suggest you research well before purchase) are deal-breakers for you, then that might help you make up your mind, but in the end it's really great to have such diverse e-ink devices available now, with their impressive latency and interesting applications. I am very happy I got on board. I do use at least one of them every day.

Having recently decided that I was fed up with seeing ads and paywalls on countless websites every time I open links (on news sites in particular), while still wanting content producers to get paid, I felt I needed to look into options for micropayments and see if they are any good. I simply won't subscribe to sites for which I am only a casual reader, so I needed another option.

Scroll payment breakdown

There are actually now several micropayment providers, and Mozilla seem very motivated to help push the field forwards [1, 2]. Interestingly, JavaScript pioneer and Mozilla co-founder Brendan Eich is also one of the pioneers of micropayment systems, with the Brave web browser and its Basic Attention Token system. Unfortunately, I ruled out Brave early on for a few reasons.

So that left me mostly with the two services Mozilla is working with, Coil and Scroll. Here is a basic breakdown:

  • Coil
    • $5/month
    • Works with browser plugins
    • Pays out continuously based on time spent on site / watching content
    • On mobile requires Puma browser (due to lack of plugin integration in mobile browsers), which is basically Firefox Mobile + Coil support
    • Uses open interledger protocol:
      • To monetize your own content, just add an HTML tag (or if doing conditional ads, you can do a small bit of JS to check).
      • Paying out goes to any wallet provider that supports the protocol; I use Uphold for my blog, which can pay out in fiat currency
      • Providers other than Coil can emerge and use the same protocol
    • Enables various other kinds of monetization, so it can be used with Twitch, Imgur, Newgrounds and all kinds of other services.
    • Coil doesn't have to pre-approve monetized content providers
  • Scroll
    • $5/month
    • Works with a browser cookie as far as I can tell
    • Pays out to its partners as a split share of your subscription fee
    • Works on basically all browsers (but you might not notice if you get signed out at first, from clearing cache for example)
    • Does deals with specific content providers, such as The Atlantic and Bloomberg.
    • Centralised payment system, officially only available in USA currently

Using Coil

Coil Homepage

Coil was very easy to set up. Just sign up, grab the browser extensions and go. Then you see a little green dollar sign when you visit pages that take payments. Unfortunately we don't (yet) have the ability to add the extension to mobile browsers, so in order to benefit from Coil on mobile I needed to install Puma Browser (effectively Firefox Mobile), which, after a while of delaying, I eventually did. My one gripe was that neither LastPass nor 1Password currently recognize it as a browser, so it couldn't fully replace my mobile browser (yet), as they would offer to add passwords for the entire app rather than just the page you were on – I did flag this with 1Password.

I am very happy with the service Coil provides (including their curated list of monetized sites). I find that the broad range of independent publishers it supports makes it an incredibly viable option. The open protocol in particular should enable future iterations, competitors and support from more financial institutions over time, and its biggest perk seems to be that it is decentralized – which meant I was able to experiment with the other side of the coin with ease.

With my blogging provider Write.As, I was able to instantly try out monetizing my own content. I just needed to add my “interledger payment pointer”, which I obtained via Uphold to automatically pay out in Euros.

Now, before getting excited, my low volume blog does not make any real money. Here's how it's gone so far:

My uphold wallet balance

But I am hopeful that should I write anything of consequence, and should more people start using Coil, I would end up getting paid a much greater amount. I'd be interested to see how the more commercial partners are doing.

Scroll

Scroll Homepage

So after my positive experience with Coil, I decided to give Scroll a go. It is interesting because, while it doesn't have the open-tech appeal of Coil, it does remove ads and has some serious premium content providers signed up, although sadly some of them still impose article limits beyond which you would have to subscribe anyway.

Scroll uses magic SSO links sent via their interface to your email, so it is really simple to get going. Click on the links in your browsers of choice, and you are already set up. They also show a breakdown of which partners are getting a cut of your subscription (and how much of it they get).

Scroll payment breakdown

You can tell that it is working on participating sites because you get a small S symbol on-screen, which you can expand into the Scroll toolbar; this lets you do things such as share ad-free links, and also gives visual confirmation that you are indeed signed in.

My only problem (other than questions about the centralized model) is that Scroll gets logged out when you clear your browser cache etc., and I don't know what their plan is (or if it will continue to work the same way once browsers crack down further on cookies). So I have found myself logged out a few times, and then needing to enter my email and get a link to log back in again. It's only a mild annoyance, but it is true that it has been more awkward to use in the long run compared to Coil (which was more a set-up-once, work-forever kind of thing).

Conclusion

I think the world of micropayment-funded content is finally beginning to mature, and “experiments” in the field are getting more and more thought out, and I expect these services to continue to expand significantly over the next few years. I cannot predict whether the day of micropayments is here, and I know that globally $5/month to two main providers is cheap in some places and unimaginable in others, but the nice part with Coil is that people in less wealthy places can easily get set up to monetize content, and hopefully that will democratize media a little bit more – while also providing alternatives to established ad-based media.

There are also further questions to be asked, like the implications of society being desensitized to automatically paying out small sums of money all the time, and potential for accidentally paying money to people you wouldn't want to (even if there were ads doing it before anyway). It is also interesting what impact this has on privacy / tracking – and micropayment providers knowing what content you are looking at. I certainly hope that none of these businesses evolve to monetize your user data, as that is exactly the opposite of what I'm paying them to do. I also wonder about children and people unable to pay, and if we are just locking them out of ad-free experiences – which I would consider to be a dark outcome. Will it also make ad-funded stuff less viable – as more places stop relying on ads and use this instead? I certainly hope so.

I am very happy with my micropayment experience so far, and I intend to continue to use both Scroll and Coil for at least the next year. I definitely feel a bit better not seeing ads this way, rather than simply applying an ad-blocker with no alternative funding for content producers. If you are willing, I'd recommend giving it a try and experiencing the warm feeling of giving something back when reading something you have enjoyed, without the cursed tracking cookies and ads slowing loading and interrupting your experience.

So, I decided to bolster my security this year, and invest in a YubiKey. One of its less obvious features is OpenPGP support; this means as well as U2F and TOTP authentication, you can use it for all kinds of cryptographic possibilities. Those familiar with SmartCards will probably already be familiar with what they can do, and that is the important point: the YubiKey is also an OpenPGP SmartCard.

TL;DR

If you're time-poor, you might not have time to set up PGP properly, so as a first step consider using Signal and ProtonMail (they are both secure and free). At least then you can be confident you are set up to communicate via encrypted channels.

Your motivation to set up PGP keys will most likely depend on need – such as being a journalist, wanting to sign your code, or perhaps just being a curious geek. It seems few people today use PGP-encrypted e-mail, and of those that do, most are probably using services like ProtonMail, which is a good step, but doesn't let you use your own master key without uploading it*, which complicates things if you require the highest level of security.

* It will be stored encrypted with your password though. So they never have access to it. Keeping your certification key off the internet entirely is possibly overkill for most users.

That being said – I think if you are willing to try, the more private communications there are in the wild, the less they stick out – so you help to mitigate mass surveillance operations, and you have secure channels of communication ready, should the need ever arise (which often becomes much more difficult to do safely / legally, if that day comes).

The Basics

PGP allows you to sign, encrypt and authenticate (and certify) basically anything. This covers numerous activities such as:

  • encrypting / decrypting messages such as emails
  • signing git commits & software releases
  • authenticating via SSH
  • proving that all of the above were done by you

To get the full benefit of PGP, you need to keep your master key safe – as if that is compromised, you lose nearly all the benefits of using it in the first place. Obviously the greater your personal need for operations security, the higher the risks of somebody targeting your key. The PGP system was designed to easily protect your information, even from threats like nation states, so much of the advice you will see on PGP is uncompromising – and is focused on how to maximise security.

In practice, few of us are in need of Edward Snowden levels of security – although if your opsec is strong (especially for journalists), you might be more likely to have a high security source trust you with their leaks or tips.

Generating a Master Key

Already we are faced here with options, and I think this is what puts off most people right from the start. You really need to use the command line, decide what key length (and algorithm) you are going to use, and decide where you are going to generate your key.

I chose to generate a 4096-bit RSA key.

The main options are:

  • Just using your current machine
    • This is the easiest, but is also the least secure, as a compromised machine could leak your key to the world
  • Directly on SmartCard / YubiKey
    • This is simple and secure (as long as nobody can hack your key)
    • Even you cannot get the master key off the YubiKey, so you risk losing access entirely if the key is ever lost or broken
  • Using an offline system
    • This involves booting a live operating system from something like a USB drive; you might update packages over the internet or install tools to manage your YubiKey, but you pull the internet cable before generating your key, and you shut down the machine without reconnecting when complete.
  • Offline system in an air-gapped room
    • This is the Snowden-level option. If you think you are being targeted by the most advanced capabilities, you would literally need to scrub a room (one that functions as a Faraday cage) of all electronics, probably use a write-once medium with no electronic components, like a DVD-R, for the live operating system, and leave your phone outside the room. You should probably destroy and scrub everything after generating the key, and even then you would need to encrypt your messages in a similarly safe environment if possible – because frankly, at this point, people are potentially trying to hack you, add keyloggers, and find easier ways of breaking your encryption than actually having to decrypt your messages. You will never be able to guarantee 100% that they can't, as the three-letter-agency types of the world would not tell you if they could – although the strongest available encryption is the best you've got, and it probably will work.
    • I mostly add this option as a joke, because obviously organisations like the NSA and GCHQ have this sort of practice baked into their offices, along with all the physical security etc. – so even if you were to try this at home, without the resources to manage this level of security you would probably fall down somewhere. But we can all dream, right!?

I opted for the offline system, and ran into a few problems, such as my laptop's WiFi card not being supported by the stock Debian live USB, and my other machine (which has cabled LAN) not having USB-C ports (which my YubiKey uses). In the end I bought a USB-C (f) –> USB-A adapter and used the LAN. This worked well, and I was able to get the packages I needed, pull the LAN cable, and then generate my key following these instructions.
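
For orientation, the offline generation step boils down to a handful of GnuPG commands. The sketch below uses a placeholder identity; the linked guide covers the details around expiry dates, passphrases and moving the subkeys onto the card:

# On the offline machine, with the network cable pulled (identity is hypothetical)
gpg --expert --full-generate-key          # choose RSA, 4096 bits for the master key
gpg --expert --edit-key you@example.com   # then use `addkey` to create signing,
                                          # encryption and authentication subkeys
gpg --output revocation.asc --gen-revoke you@example.com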

The important thing is that the instructions are really precise and often if you get a step wrong, you will need to start from scratch.

I then loaded up my laptop and (having made a USB key with an encrypted partition containing the private keys and revocation certificate, and an unencrypted partition with the public key, as per the guide) tried to load my public key onto my machine, so I could start using my YubiKey – which by now had the signing, encryption and authentication keys loaded onto it. Sadly my public key partition didn't mount, and I had already shut down the offline system, so I had to load the live disk back up, install some packages again, pull the LAN cable, and then open my encrypted partition (which did mount) and restore my keys – this is the only copy of the master key, emergency revocation certificate and public key, so losing that would have meant starting from scratch. Once I had fixed my issues with the backup USB key, I was able to restore my public key to my laptop and continue.

Backing it up

Now, I had so far neglected to make any backup of my keys other than the encrypted USB stick, and bit-rot is a genuine risk, so I also wanted to make physical backups – so I used a tool called paperkey to print them from my live USB (restoring the keys from the encrypted USB first), and obviously this time I also needed to install printer drivers and the like. Technically printers are a risk too, so the ultra-paranoid might want to either forgo a backup and risk getting locked out of their own keys, or find some more reliable medium. For convenience I would have backed up my master key with a QR code tool, however the 4096-bit RSA key I generated was too big to fit in a QR code – so to revive the key I would have to OCR-scan the printout instead.
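
For reference, paperkey usage is roughly the following (file names are hypothetical) – it strips the secret key down to the minimal part you actually need on paper, and reconstructs it later by combining that with the public key:

gpg --export-secret-key you@example.com | paperkey --output to-print.txt
# Later, to recover: combine the typed-in / OCR'd text with the public key
paperkey --pubring public-key.gpg --secrets to-print.txt --output restored-secret-key.gpg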

Using the key

The first thing I did was submit my public key to a few keyservers, which enables other users to trivially discover it – which is all they need to send me encrypted messages – and importantly means that if I somehow lose it, I can download it again myself. I then followed some examples of encrypting messages to myself and decrypting them on the command line – just to try it. That all worked well.
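
Both of those steps are one-liners (a sketch only – the keyserver choice, key ID and address below are placeholders):

gpg --keyserver hkps://keys.openpgp.org --send-keys 0xABCD1234EF567890
# Round-trip test: encrypt to yourself, then decrypt with the YubiKey plugged in
echo "hello, me" | gpg --encrypt --armor --recipient you@example.com | gpg --decrypt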

I was initially excited to try to use my keys with ProtonMail, which has full PGP support and is my current e-mail provider, only to discover that, the way it currently works, it would require me to upload my private key – so I am left in a situation where my ProtonMail email accounts use different public keys. I don't really want to leave the service, but I was frustrated that I couldn't use my own key with it.

I then also got excited about Social Proofs (which are a mixture of actually verifying ownership of various accounts and stamp collecting). I tried keybase.io – it worked, and while it did ask me to optionally upload my private key, it was able to do all the “proofs” with me signing the actions on my local machine. The purchase of Keybase by Zoom put me off, however, and various practices like mounting drives and adding stuff to startup made me remove Keybase entirely from my system – unfortunately they just don't seem to have managed to maintain the trust, respect and usage needed for their (originally noble) goals to pan out in the long run. I did like their idea though.

Next I tried a distributed social proof tool called KeyOxide – here are my proofs (should you wish to contact me). KeyOxide is less automated than Keybase, doesn't have file sharing and private messaging, but it also doesn't need you to create an account or upload anything, doesn't need apps, and its proof system works well.

Of course, social proofs and identities matched to your cryptographic identity are not for people flying under the radar – but it is a mistake to think that cryptography is only for those people. I'm happy that I have a way to prove that I am who I say I am online, and can therefore disavow any attempts to impersonate me.

Setting up commit signing

Signing commits is pretty easy, and simply requires that you tell git which key to sign with and, in the config, to sign all commits. Then you'll need your YubiKey to commit anything, which will also prompt you for a PIN unlock the first time each session (if you don't want that, pull the YubiKey after each commit). Then you just need to upload your public key to GitHub, and hey presto, all your commits show as signed and verified.
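
Concretely, that configuration is a couple of commands (the key ID below is a placeholder for your signing key's ID):

git config --global user.signingkey ABCD1234EF567890   # which key git should sign with
git config --global commit.gpgsign true                # sign every commit by default
gpg --armor --export ABCD1234EF567890                  # public key to paste into GitHub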

SSH

So now you have a YubiKey set up with PGP keys, and therefore you can use it for SSH. You will need to set up gpg-agent and do a little configuration, but then you can view your public key with ssh-add -L – the YubiKey entry will be suffixed with something like cardno:000123456789. Add that to the usual places (~/.ssh/authorized_keys on servers and in your GitHub account), and then you can start using SSH on any machine with gpg-agent simply by carrying around your YubiKey. This saves you from having a private key file stored on your machine, reducing the risk of the key being compromised.
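
The gpg-agent side is small – roughly the following, although file locations and shell startup details vary between distros, so treat this as a sketch:

echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
# In your shell profile, point SSH at gpg-agent's socket instead of ssh-agent:
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpgconf --kill gpg-agent   # restart the agent so it picks up the change
ssh-add -L                 # should list the key, suffixed with cardno:000123456789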

Summing up

This was a lot of work, and transferring all my 2FA to YubiKeys was already quite a lot of effort, but I really wanted to see if it has gotten easier to use PGP. I have found that while the technical instructions are increasingly good, and the hardware keys work very reliably, it is still just too much for the average person. I got my family to move to using Signal, as the combination of e2e encryption, metadata encryption and open source gives me a significant degree of trust for our usage, and the barrier to entry is dramatically lower. Of course, you are then using a centralised service on a smartphone, which means that you are not necessarily safe from law enforcement or nation states, who could probably gain access to your phone through other means such as known exploits if they had reason to – but you are also safer from bulk data and metadata collection.

I personally will continue to use the subkeys generated from my offline master key to connect via SSH and sign commits – and I'm happy to receive encrypted messages to my public keys (including my ProtonMail ones, which I attach to all my outgoing emails). But for me this was still more of an experiment to see what it is like to set up, and one that I am uneasy about recommending to anyone who is not very technical or doesn't have a deep personal need. I am glad that e2e-encrypted apps have at least gained significant traction, and HTTPS everywhere is a major success. We are at least beginning to see widespread public adoption and usage of encryption technologies – but SmartCard stuff is simply not going to be adopted by people who don't feel it is essential for them, and genuinely secure key generation is likely seen as tinfoil-hat stuff by the majority of the public.

As mentioned above all you need to reach me is available here, and I'm happy to receive your attempts at sending secure messages. The Free Software Foundation also has an excellent guide to PGP e-mail, which includes an e-mail bot edward-en@fsf.org that will help you to troubleshoot any problems you might face!

TL;DR

Do you need one? I'd suggest you get two – probably better than dealing with identity theft or phishing.

Are they easy to use? Yes, incredibly easy for services that support the simple button press / NFC touch auth (U2F authentication) and allow you to register multiple keys like Facebook and Gmail, for example. There are other auth methods available, but they do have additional steps.

How do they work? They fulfil the 'something you have' criterion of keeping your accounts safe, in a secure enough way that it is likely near-impossible for somebody to impersonate you.

Which one should I get? Two YubiKey 5s with NFC and USB-C (if you have USB-C laptops / computers and a phone that supports NFC, which most Android and iPhone models now do); otherwise possibly the regular USB-A/NFC model.

Are they worth the money? I think you'd have to ask yourself that after somebody hacked your accounts; they are quite expensive, but you will probably avoid having your whole online identity hacked.

My Motivation

Scared that if I lost access to the Google Authenticator app I would risk losing access to several of my accounts, I wanted a better solution. With Google Authenticator, if I brick my phone (which I have done a few times before), the codes it provides are lost for good. I decided it was time to take the plunge and get a YubiKey.

Or rather two YubiKeys...

The first thing to learn about YubiKeys (and other security keys) is that you really need at least 2. You don't want to have only one that provides access to all your accounts in case you break or lose it. There is just too much risk of being locked out of your own accounts (which could be a positive if the risk of somebody with physical access to your stuff gaining access to your key was greater than losing your own access – i.e. probably very few of us). Of course, setting up two keys adds a major constraint because you need to have them both physically present when you register them as a 2FA device, and then you need to be able to get the hot spare when you lose / break the other. Also it is worth considering where to keep the second key, as each option has significant trade-offs.

  • Husband/wife/family member
    • Easy and simple option
    • Might be risky if you both end up losing / breaking devices in the same incident
    • Could enable you to allow family to access your accounts should anything happen to you
  • Personal Safe
    • Almost none are actually able to survive house-fires
    • Expensive
    • If you are still protected by password, who are you defending yourself from?
      • Nation States and law enforcement could likely access this anyway.
      • Thieves should still not have the passwords to your accounts, so no loss
      • Cyber criminals (the main group YubiKeys help ward off), hopefully aren't coming into your home
  • Bank safe deposit box
    • Bond movie levels of cool
    • Pricey
    • Increasingly phased out by banks
    • Difficult to access
    • Do your own research on this, but I wouldn't recommend – there are some surprising stories of people still losing all the stuff in them, in spite of the supposed security
  • Car
    • Potentially able to get it away from a situation where your house is burning down
    • Car thief still doesn't have your passwords so probably no great risk

The above examples are just a playful brainstorm, but after the California wildfires, many people have seriously questioned their digital backup plans, as some have literally had their house – including safes and even local bank branches – burn to the ground.

Luckily (for most people), the majority of services will be able to return access to you, resetting your 2FA – upon proving your identity somehow. This account reset/restore option is often just your email address, so it is important that you ensure your main or backup email accounts are well-secured, and often you can have additional second factors like mobile phone numbers. It is probably worth using this, in spite of the lower security of SMS two-factor.

Main Types of Auth

U2F

This is excellent. You get all the benefits of the security key, with a pretty seamless user experience baked into most browsers, phones and computers. You just touch or insert and tap the key. Often you can easily register multiple hardware keys to your account, so setting up your backup key(s) can be done easily.

TOTP

Time-based one-time passcodes are the most common form of secure 2FA. Many services support this, and there is a Yubico Authenticator app (and desktop version) that will let you store them on your key and access them by inserting it via USB or touching the key to your phone. The biggest hassle is that for a backup you really need to scan the 2FA setup barcode twice, so you can add the code to your backup key as well as your main key. It is easy to do this with the app, but you have to remember to do it before moving to the next step, as the service will never show you that code again. An alternative is to store that setup code in a password manager, but that will (to some extent) weaken your security: if a hacker gets access to your password manager, they will then have both your 2FA and account passwords – so it at least partially reduces the security you get from 'something you have' not being online.

Local Machine

While more for advanced users / secure workplaces, you can also set up your computer to require your key to unlock, access admin functions, etc. I must confess I've only done it on Linux, and it was as simple as installing a couple of packages and editing a couple of lines in certain authentication management files (with instructions from a guide Yubico provide online). As with all the above, but with even greater emphasis: make sure to have a backup key added, as losing access to your computer can be an extremely annoying occurrence that might result in your having to re-install the operating system from scratch.
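
For the curious, on a Debian/Ubuntu-style system the steps looked roughly like this (a sketch only – package names and the exact PAM file you edit vary, and Yubico's guide is the authority):

sudo apt install libpam-u2f
mkdir -p ~/.config/Yubico
pamu2fcfg > ~/.config/Yubico/u2f_keys        # register the first key
pamu2fcfg -n >> ~/.config/Yubico/u2f_keys    # append the backup key (with it inserted)
# Then add a line like the following to e.g. /etc/pam.d/sudo (or your login manager's file):
#   auth required pam_u2f.so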

If you enable this, you are much less exposed to people in your office / house being able to access your machine while you briefly step away, and even if they have watched you type your password, or recorded it with a keylogger – they need the key and the password, so it at least makes it much more difficult to obtain both parts required to log in to your machine.

A big plus is that you can also mitigate the action movie risk scenario that iris and fingerprint auth can be accessed by cutting out/off those body parts. At least handing over your YubiKey is less painful.

Remote Machine (SSH)

If you don't know what this is, it is probably not useful to you, but it is possible to use a YubiKey as an OpenPGP SmartCard and generate PGP keys inside the key (so the master key can never* leak), or you can put your own subkeys on the security key, and then use gpg-agent to enable SSH with it. The benefit is that if you put the same PGP keys on both your keys, they can both be used for SSH access (although you need to run a command to reset the agent to switch between keys on the same machine), so you at least have a hot SSH spare and can still access your servers.

You need to unlock the key with a PIN, so it is not trivial for somebody to steal the key and gain SSH access, and there are only 3 attempts. You can unlock it with an admin PIN – but if you fail that 3 times, you have no choice but to fully reset OpenPGP on the card. Going even more technical, if you have an additional SSH hop, it is worth mentioning that SSH agent-forwarding is complex, with various security risks, so I won't cover it here – but it is possible to achieve.

This should now mean that you can set up SSH access to your services on any machine without having to transfer your private key file. You just need to set up gpg-agent.

I'll go more into my struggles with the PGP stuff in an article about encryption, email and YubiKeys soon.

Conclusion

I do think that the benefits of YubiKeys outweigh the hassle, and they are a good (albeit expensive) way to reduce your surface area for identity theft and protect your accounts. They are complementary to a good password manager (which, if you don't have one, I'd recommend setting up first) – as what you really want to prevent is one account hack leading to multiple compromised accounts, so reusing the same password anywhere is ill-advised, and adding 2FA will then further reduce the risks of mass compromise.

Servers can still be hacked, so security keys don't actually increase the trust you should place in websites and apps that you give your data to, however they are a great defence against increasingly common and sophisticated phishing attacks, and as cyber criminals get more advanced, this is a strong step in limiting the ways that they are able to target you. I like to compare this sort of security with bike locks. You cannot buy a perfect (yet practical) bike lock – but you can have a better lock than other bikes on the street. Most criminals are going for the easiest wins, and simply by making yourself a more difficult target, you greatly reduce the number of people who would/could bother.

Finally, the less technical the person, the greater the risk of phishing and poor passwords – so if you can help someone who you think would likely click on malicious email links to set up and use 2FA for their most crucial accounts, you probably should. U2F is genuinely quite easy to use, and simply requires tapping the key against your phone, or shoving it into the USB slot and touching it.

* hopefully

So you've got a big client-side React app, and you've decided to use code splitting to reduce its size on initial page load. Should be easy. React has some very accessible documentation on lazy component loading, which works well with webpack code splitting.

They give a really trivial example:

import React, { Suspense } from 'react';

const OtherComponent = React.lazy(() => import('./OtherComponent'));

function MyComponent() {
  return (
    <div>
      <Suspense fallback={<div>Loading...</div>}>
        <OtherComponent />
      </Suspense>
    </div>
  );
}

Now, this approach works* out of the box, and the React team has been doing some excellent work exploring how the UI and UX around loading can be improved, so I'm very much excited by where this is going.

* But is painfully insufficient for production.

The main gotcha is that it can (and does) fail. The internet can be a fickle beast, especially with mobile connectivity. While the simple React example above does technically provide a mechanism to dynamically load your components, it doesn't fulfill the same contract as actually importing your code directly. Indeed, not only is it possible that a chunk can't load right now, it's possible it might never load (maybe the user has gone out of service range, the server crashed, or you deployed a new version and they are unlucky enough to still be requesting the old chunks).

These various complexities call for some attempted solutions.

Alerting the user to failure

Luckily the import() function returns a promise, so we could define a catch-all helper function to ensure we handle errors at least a bit.

import toastr from 'toastr'; // assumes the toastr notification library

export const errorLoading = err => {
  console.error(err); // surface the underlying error for debugging
  toastr.error(
    'An error has occurred',
    'Please refresh the page. Otherwise, it will refresh automatically in 10 seconds.'
  );
  setTimeout(() => window.location.reload(), 10000);
};

So now the initial example would have to add the catch-all:

const OtherComponent = React.lazy(() => import('./OtherComponent').catch(errorLoading));

That's already slightly better than an uncaught exception. Of course there are many approaches to the UX. You could for example use this to trigger an error boundary, or redirect to a server error page (some of the guidance from Google on SEO says that if you redirect to a server error code page, it helps them to understand).

However, it's still a bit too early to give up, surely there is a better way...

Retrying on failure to load

So one of the main things we could do is simply retry. Below is an example where you pass in a promise-creating function, with a configurable number of retries and delay before giving up – and get back a single promise that will eventually resolve (or reject).

function delayPromise(promiseCreator, delayMs) {
  return new Promise(resolve => {
    setTimeout(resolve, delayMs);
  }).then(promiseCreator);
}

export function promiseRetry(promiseCreator, nTimes = 3, delayMs = 1500) {
  // Retries up to nTimes, waiting delayMs before each subsequent attempt
  return new Promise((resolve, reject) => {
    let promise = promiseCreator();
    for (let i = 1; i < nTimes; i++) {
      promise = promise.catch(() => delayPromise(promiseCreator, delayMs));
    }
    promise.catch(reject).then(resolve);
  });
}

So with the above our initial code would look more like this:

const OtherComponent = React.lazy(() => promiseRetry(() => import('./OtherComponent')).catch(errorLoading));

That's a vastly better experience than the initial implementation. I'm using similar code in production, and we've stopped getting errors like Loading chunk 2 failed.

Using with Redux

It's also possible to use this approach with Redux (assuming you have implemented lazily loadable reducers and sagas, or whatever else you use):

const OtherReduxComponent = lazy(() =>
  promiseRetry(() =>
    Promise.all([
      import('./OtherReduxComponent/reducer'),
      import('./OtherReduxComponent/sagas'),
      import('./OtherReduxComponent'),
    ])
  )
    .then(([reducer, sagas, component]) => {
      injectReducer('otherReduxComponent', reducer.default);
      injectSagas('otherReduxComponent', sagas.rootSaga);
      return Promise.resolve(component);
    })
    .catch(errorLoading)
);

It would also be possible to retry each of the above promises individually, instead of failing if any one of them fails, but since successful imports are cached, the difference wouldn't save much bandwidth – chunks that have already loaded won't keep making network requests.

Conclusion

So the above has explored how it is possible to make React lazy loading more resilient, and improve the user experience while delivering a chunked React app. There is still a lot more that can be done – and I feel that React should probably invest some time in documenting this in more detail, and perhaps provide a function like promiseRetry() above. Trying a network request a maximum of one time is always liable to fail, and by tying your app to large numbers of asynchronous calls just to load the UI, you run a high risk of random failure. It's also possible to have two versions of the app in the wild during a deploy, and that case is particularly important to handle.

The best user experience could be either an immediate full reload or continuing to retry, and it's not clear which approach to optimise for without measuring which failure situation occurs most frequently. Tuning the number of retries and the delay between them can be the difference between failing often, taking far longer than needed, or hitting the sweet spot. Crucially, some situations exist where retrying will always fail – so there has to be a sane limit on retries.

There are also many users who hate code splitting – as it makes browsing sites slower (but SEO requiring the fastest possible initial page load makes this problem somewhat intractable with client-side React). I have thoughts on this too, but to be honest, I'm unsure that a consensus exists on this subject – and I'll keep those to myself for now.

Ultimately, for people with large web apps who wish to do code splitting, React lazy, Suspense and the fallback UI possibilities have led to a solid mechanism to split code and control the points where the splitting occurs, so that they are in sensible places. If only the provided mechanism were more robust it would be great, but in its current state (as far as I am aware) that robustness is up to you, so thinking you can just swap to lazy() everywhere without problems is naive.

If you have thoughts, comments or feedback, get in touch via info at sammorrowdrums.com – I'm happy to correct anything I may have got wrong, or update the approach if there are better alternatives.

Ruffle Logo and Description

This year AoC went a little differently. I was thinking that after a year of not using Rust, I might like to try again and see if I could improve on last year, but after completing 8 days of problems I felt a bit less “Rusty”...

I had recently seen mention of Ruffle – a new plugin-free Flash emulator that runs in the browser – and the fact that Flash was dying in the new year, and thought I'd like to help the effort. I grew up with places like Newgrounds providing games and animated videos that were free to access, and things like Radiskull & Devil Doll, which frankly I'm still referencing today.

I cloned the repo, began looking through issues on GitHub and started hacking:

  • First to enable a desktop player file-picker so less technical users would be able to get up and running.
  • Then an auto http -> https URL conversion option (on by default) to stop CORS issues (and in some cases fix old links where http no longer works).
  • Lastly (for now), I added a warning on web and desktop for unsupported content, and the ability to show warnings for any reason (such as failed link opening).

Hopefully I'll continue to contribute to Ruffle, but I'm so grateful that my foray into Advent of Code led to me becoming proficient enough at Rust to start making open source contributions.

I was also very excited to get my hands dirty with WASM, and discover the possibilities (and challenges) of building a Rust program for the web. I have found the tooling, the compile error messages and the linking with JavaScript easier than expected. Naturally there are some difficulties when it comes to types (especially as you cannot use Rust types in JS, and JS types are not very “rust”) – but it certainly enables kinds of processing that were previously limited to plugins (mostly due to speed) – and Flash features like animating vector graphics, live interaction and embedded video make speed very critical.

As 2020 was for most of us a challenging year, I am glad to have finished it with something as optimistic as preserving the legacy online art created with Flash, and with that much of my childhood inspiration.

The Goal

I've been looking for nice ways to publish content that work seamlessly between my local environment and the web. I wanted a way to get off Medium, and ultimately to write with as little distraction as possible. Write.As provides a lot of positives for me, and is a lot easier than maintaining a custom blog.

I think the bottom line for me though, is that blogging in a web browser makes procrastination too easy, and my will is weak. To avoid this I have experimented a little with alternatives, and finally found a setup I actually like.

Trying to go native with Write.As

When I first tried to use Write.As, they did have a native app and CLI tool available on Elementary OS, and so I attempted to compose markdown using their app, and then connect it to a personal blog...

Unfortunately at the time both tools could only support publishing as an anonymous, unknown user. Also, the native app did not (and still doesn't) support Markdown preview – or at least syntax highlighting.

The Solution

Fortunately the CLI now supports publishing to blogs:

writeas publish --font sans -b sammorrowdrums ~/Documents/Blog/blogging-with-quilter.md

This has enabled me to revisit Quilter – a native markdown editor app (available on EOS) that features a simple, distraction-free UI, syntax highlighting of MD, and the ability to preview.

Blogging with Quilter

The Future

Finally I am able to blog, saving my files locally first, in a native app, and then publish easily when ready. I know there are plans to enhance the Write.As native app, so I'm not ruling out using it in the future, but at least I have a workflow that can keep me away from the web browser for now.

* I know that I could just copy and paste markdown into the Write.As website – I just don't want to.

* The --font sans is important, because the default is to upload the blog in monospace font, which is a poor choice for normal blogs, although useful for sharing code, and ascii art etc.

I had no plans to try Advent of Code this year, or to learn Rust, but I saw a thread on Hacker News and started AoC, and then a couple of days in I saw another comment:

After that there was no turning back, I was committed. I was going to do it all in Rust.

  • First things first, I found out how to install Rust.
  • Set up my text editor
  • Began going through their wonderful book (which I'm now about halfway through).
  • Struggled through the first two problems (that I'd already solved in Python).
  • Fought the borrow checker, and began to learn the standard library and type system (efforts ongoing).
  • Started building out to modules.
  • Used their integrated testing tooling (which is great!).
  • Learned to output graphics onto the screen – to solve some of the challenges.

It's been almost two crazy weeks of late nights of panic!("{:?}", reason) and struggle, but considering the power of the language I've really grown to enjoy it.

There is excellent documentation, which comes alive as soon as you understand the type system well enough to read it. The compiler errors are great, and in particular things like match statements giving warnings if you don't cover all possibilities are really helpful. Using patterns like Some(x) and None rather than allowing null values is also fantastic.

I think the only thing that takes a significant effort at first is to understand Lifetimes – how they handle the cleanup of memory without manual memory management (like C++) or garbage collection (like Python/Golang/JS). The payoff is huge in terms of fast execution, and memory safety but there are things that I just could not grok in a day. To really get the most out of the language you have to own it (or maybe borrow it... hmmm).

The Advent of Code itself has been so cool. I have so much love for the creativity, the puzzle design and interesting references like Quines – and for problems that force me to relive my school geometry days, suddenly breaking out the atan2 function to convert vectors to degrees so I can destroy asteroids! There are plenty of hilarious video links and references mixed into the story they weave with each puzzle too.
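
For anyone who hasn't needed it since school: atan2 turns a direction vector into an angle, and the idea is the same in any language – here's a quick JavaScript illustration (Rust has an equivalent atan2 method on f64):

// Convert a direction vector (dx, dy) into an angle in degrees from the positive x-axis.
function vectorToDegrees(dx, dy) {
  return Math.atan2(dy, dx) * (180 / Math.PI);
}

vectorToDegrees(0, 1);  // 90
vectorToDegrees(-1, 0); // 180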

If you are thinking about using AoC to learn a language – you should! I cannot pretend that it's been easy for me, but it's been great fun. I would certainly consider doing it again and I hope to use a lot more Rust in 2020!

p.s. You should ask me how proud I am of my little IntCode VM. Everyone should have one.
