Last updated: May 12, 2021 04:22 AM (All times are UTC.)

May 11, 2021

May 07, 2021

Reading List 276 by Bruce Lawson (@brucel)


May 06, 2021

I’m signed up to the e-mail newsletter of a local walkway.

Exciting, I know. This monthly newsletter boasts a circulation of roughly 250 keen readers and it comes full of pictures of local flowers, book reviews, and that sort of thing.

This newsletter comes to me as a PDF from the personal email account of the group secretary. My email address is tucked away in the BCC field. As it happens, Gmail lets you send an email to up to 500 people at once.

The point of this is that there’s no obsession over things like…

  • what happens when I have ten million subscribers?
  • why hasn’t jsrn opened one of my emails in a while?
  • am I using the right newsletter provider? Is it one of the cool ones?

I can only imagine the list of subscribers is kept in a spreadsheet somewhere on the secretary’s computer.

I hope he has a backup.

May 05, 2021

On industry malaise by Graham Lee

Robert Atkins linked to his post on industry malaise:

All over the place I see people who got their start programming with “view source” in the 2000s looking around at the state of web application development and thinking, “Hey wait a minute, this is a mess” […] On the native platform side, there’s no joy either.

This is a post from 2019, but shared in a “this is still valid” sense. To be honest, I think it is. I recognise those doldrums myself; Robert shared the post in reply to my own toot:

Honestly jealous of people who are still excited by new developments in software and have gone through wondering why, then through how to get that excitement back, now wondering if it’s possible that I ever will.

I’ve spent long enough thinking that it’s the industry that’s at fault to thinking it’s me that’s at fault, and now I know others feel the same way I can expand that from “me” to “us”.

I recognise the pattern. The idea that “we” used to do good work with computers until “we” somehow lost “our” way with “our” focus on trivialities like functional reactive programming or declarative UI technology, or actively hostile activities like adtech, blockchain, and cloud computing.

Yes, those things are all hostile, but are they unique to the current times? Were the bygone days with their shrink-wrapped “breaking this seal means agreeing to the license printed on the paper inside the sealed box” EULAs and their “can’t contact flexlm, quitting now” really so much better than the today times? Did we not get bogged down in trivialities like object-relational mapping and rewriting the world in PerlPHPPython?

It is true that “the kids today” haven’t learned all the classic lessons of software engineering. We didn’t either, and there will soon be a century’s worth of skipped classes to catch up on. That stuff doesn’t need ramming into every software engineer’s brain, like they’re Alex from A Clockwork Orange. It needs contextualising.

A clear generational difference in today’s software engineering is what we think of Agile. Those of us who lived through—or near—the transition remember the autonomy we gained, and the liberation from heavyweight, management-centric processes that were all about producing collateral for executive sign-off and not at all about producing working software that our customers valued. People today think it’s about having heavyweight processes with daily status meetings that suck the life out of the team. Fine, things change, it’s time to move forward. But contextualise the Agile movement, so that people understand at least what moving backward would look like.

So some of this malaise will be purely generational. Some of us have aged/grown/tired out of being excited about every new technology, and see people being excited about every new technology as irrelevant or immature. Maybe it is irrelevant, but if so it probably was when we were doing it too: nothing about the tools we grew up with were any more timeless than today’s.

Some of it will also be generational, but for very different reasons. Some fraction of us who were junior engineers a decade or two ago will be leads, principals, heads of division or whatever now, and responsible for the big picture, and not willing to get caught up in the minutiae of whether this buggy VC-backed database that some junior heard about at code club will get sunset before that one. We’d rather use postgres, because we knew it back then and know it now. Well, if you’re in that boat, congratulations on the career progression, but it’s now your job to make those big picture decisions, make them compelling, and convince your whole org to side with you. It’s hard, but you’re paid way more than you used to get and that’s how this whole charade works.

Some of it is also frustration. I certainly sense this one. I can pretend I understood my 2006-vintage iBook. I didn’t understand the half of it, but I understood enough to claim some kind of system-level comfort. I had (and read: that was a long flight) the Internals book so I understood the kernel. The Unix stuff is a Unix system, I know this! And if you ignore classic, carbon, and a bunch of programming languages that came out of the box, I knew the frameworks and developer tools too. I understood how to do security on that computer well enough that Apple told you to consider reading my excellent book. But it turns out that they just wouldn’t fucking sit still for a decade, and I no longer understand all of that technology. I don’t understand my M1 Mac Mini. That’s frustrating, and makes me feel stupid.

So yes, there is widespread malaise, and yes, people are doing dumb, irrelevant, or evil things in the name of computering. But mostly it’s just us.

As the kids these days say, please like and subscribe.

May 04, 2021

Naming things by Graham Lee

My current host name scheme at home is characters from the film Tron. So I have:

Laptop: flynn (programmer, formerly at Encom, and arcade owner)

Desktop: yori (programmer at Encom)

TV box: dumont (runs the I/O terminal)

Watch: bit (a bit)

Windows computer: dillinger (the evil corporate suit)

May 01, 2021

You don’t change the world by sitting around being a good person. You change the world by shipping products and making money.

As I wrote in my seminal management book Listen to me because I’m rich, white and clever, IBM wouldn’t have made a shitload of money in wartime Europe if they’d engaged in endless navel-gazing about politics. Their leadership told the staff to Stop Running in Circles and Ship Work that Matters, and get on with compiling a list of people with funny names like “Cohen” or “Levi”.

So here at Brucecamp, we’ve decided that it’s best if our productbots (formerly: employees) do not discuss the sausage machine while we push them into the sausage machine. As I wrote in our other book It doesn’t have to be full of whimpering Woke retards at work, “if you don’t like it, well, there’s the door. Enjoy poverty!”. And that’s all we have to say on the matter. Until the next blogpost. Or book.

In other news, Apple are wankers and I bought a sauna.

April 28, 2021

On UML by Graham Lee

A little context: I got introduced to UML in around 2008, at an employer who had a site licence for Enterprise Architect. I was sent on a training course run by a company that no longer exists called Sun Microsystems: every day for a week I would get on a coach to Marble Arch, then take the central line over to Shoreditch (they were very much ahead of their time, Sun Microsystems) and learn how to decompose systems into objects and represent the static and dynamic properties of these objects on class diagrams, activity diagrams, state diagrams, you name it.

I got a bye on some of the uses of Enterprise Architect at work. Our Unix team was keeping its UML diagrams in configuration management, round-tripping between the diagrams and C++ code to make sure everything was in sync. Because Enterprise Architect didn’t have round-trip support for Objective-C (it still doesn’t), and I was the tech lead for the Mac team, I wasn’t expected to do this.

This freed me from some of the more restrictive constraints imposed on other UML-using teams. My map could be a map, not the territory. Whereas the Unix folks had to show how every single IThing had a ThingImpl so that their diagrams correctly generated the PImpl pattern, my diagrams were free to show only the information relevant to their use as diagrams. Because the diagrams didn’t need to be in configuration management alongside the source, I was free to draw them on whiteboards if I was by a whiteboard and not by the desktop computer that had my installation of Enterprise Architect.

Even though I’d been working with Objective-C for somewhere around six years at this point, this training course along with the experience with EA was the thing that finally made the idea of objects click. I had been fine before with what would now be called the Massive View Controller pattern but then was the Massive App Delegate pattern (MAD software design, if you’ll excuse the ableism).

Apple’s sample code all puts the outlets and actions on the app delegate, so why can’t I? The Big Nerd Ranch book does it, so why can’t I? Oh, the three of us all editing that file has got unwieldy? OK, well let’s make App Delegate categories for the different features in the app and move the methods there.

Engaging with object-oriented design, as distinct from object-oriented programming, let me move past that, and helped me to understand why I might want to define my own classes, and objects that collaborate with each other. It helped me to understand what those classes and objects could be used for (and re-used for).

Of course, these days UML has fallen out of fashion (I still use it, though I’m more likely to have PlantUML installed than EA). In these threads and the linked posts two extreme opinions—along with quite a few in between—are found.

The first is that UML (well not UML specifically, but OO Analysis and Design) represents some pre-lapsarian school of thought from back when programmers used to think, and weren’t just shitting javascript into containers at ever-higher velocities. In this school, UML is something “we” lost along the way when we stopped doing software engineering properly, in the name of agile.

The second is that UML is necessarily part of the heavyweight waterfall go-for-two-years-then-fail-to-ship project management paradigm that The Blessed Cunningham did away with in the Four Commandments of the agile manifesto. Thou shalt not make unto yourselves craven comprehensive documentation!

Neither is true. “We” had a necessary (due to the dot-com recession) refocus on the idea that this software is probably supposed to be for something, that the person we should ask about what it’s for is probably the person who’s paying for it, and we should probably show them something worth their money sooner rather than later. Many people who weren’t using UML before the fall/revelation still aren’t. Many who were doing at management behest are no longer. Many who were because they liked it still are.

But when I last taught object-oriented analysis and design (as distinct, remember, from object-oriented programming), which was in March 2020, the tool to reach for was the UML (we used Visual Paradigm, not plantuml or EA). It is perhaps not embarrassing that the newest tool for the job is from the 1990s (after all, people still teach Functional Programming which is even older). It is perhaps unfortunate that no design (sorry, “emergent” design) is the best practice that has replaced it.

On the other hand, by many quantitative metrics, software is still doing fine, and the whole UML exercise was only a minority pursuit at its peak.

April 27, 2021

15 by Luke Lanchester (@Dachande663)

Something happened in February that I didn’t give enough attention to at the time. This site, HybridLogic, turned 15. A lot has changed in that time. Both on-line, and off.

Services have come and gone. FictionPress gave way to Twitter. Twitter to Reddit. Reddit to Medium and then back. WordPress has stayed ever-present. This blog has run some incarnation of that venerable PHP app since its earliest days.

Hardware has seen a slow march through Apple’s line-up. I’ve dipped a toe back into Windows, always returning due to the fractious nature that OS seems to have with its “users”. On servers in The Cloud and in the bedroom, I’ve mostly stuck with Debian, often Ubuntu. Simple, reliable. My fingers know exactly where to cd to, to get to where I want.

And this blog has remained through it all. Maybe it’ll change. A lick of paint. A new back-end. More content, links, opinions, reviews, guides. Me. For 15 years, this has been my little corner of the web.

Hopefully it’ll make it to another 15.

April 24, 2021

Or rather, I do use version control when I’m writing, and it isn’t helpful.

I’m currently studying for a PhD, and I have around 113k words of notes in a git repository. I also have countless words of notes in a Zotero database and a Remarkable tablet. I don’t particularly miss git when I’m not storing notes in my repository.

A lot of the commit messages in that repository aren’t particularly informative. “Update literature search”, “meeting notes from today”, “meeting notes”, “rewrite introduction”. So unlike in software, where I have changes like “create the ubiquitous documents folder if it doesn’t already exist” and “fix type mismatch in document delegate conformance”, I don’t really have atomic changes in long-form writing.

Indeed, that’s not how I write. I usually set out either to produce an argument, or to improve an existing one. Not to add a specific point that I hadn’t thought of before, not to improve layout or structure in any specific way, not to fix particular problems. So I’m not “adding features” or “fixing bugs” in the same atomic way that I would in software, and don’t end up with a revision history comprising multiple atomic commits.

Some of my articles—this one included—have no checkpoints in their history at all. Others, including posts on De Programmatica Ipsum and journal articles, have a dozen or more checkpoints, but only because I “saved a draft” when I stepped away from the computer, not because there were meaningful atomic increments. I would never revert a change in an article when I’m writing, I’d always fix forward. I’d never introduce a new idea on a branch, there’s always a linear flow.

April 23, 2021

Reading List 275 by Bruce Lawson (@brucel)

April 21, 2021

April 20, 2021

We are uncovering better ways of developing
software by doing it and helping others do it.

It’s been 20 years since those words were published in the manifesto for agile software development, and capital-A Agile methods haven’t really been supplanted. Despite another two decades of doing it and helping others do it.

That seems problematic.

April 19, 2021

April 16, 2021

I’ve spent about a year working on an app for a group in the University where I work, that needed to be available on both Android and iOS. I’ve got a bit of experience working with the Apple-supplied SDKs on iOS, and a teensy amount of experience working with the Google-supplied SDKs on Android. Writing two apps is obviously an option, but not one I took very seriously. The other thing I’ve reached for before in this situation is React Native, where I’ve got a little experience but quite a bit of understanding having worked with React some.

Anyway, this project was a mobile companion for a desktop app written in C# and Windows Forms, and the client was going to have to pick up development at the end of my engagement. So I decided that the best approach for the client was to learn how to do it in Xamarin.Forms, and give them a C# project they could understand at the end. I also hoped there’d be an opportunity to share some code from the desktop software in the mobile project, though this didn’t pan out in the end.

It took a while to understand the centrality of the Model-View-ViewModel idea and how to get it to work with the code I was writing, rather than bludgeoning it in to what I was trying to do. Ultimately lots of X.F works with data bindings, where you say “this thing and that thing are connected” and so your view needs a that thing so it can display this thing. If the that thing isn’t in the right shape, is derived somehow, or shouldn’t be committed to the model until some other things are done, the ViewModel sits in the middle and separates the two.

I’m used to this model in a couple of contexts, and I’ll give Objective-C examples of each because that’s how old I am. In web applications, you can use data bindings to fill in parts of an HTML document based on values from a server-side object. WebObjects works this way (Rails doesn’t, it uses code to fill in parts of its templates). The difference between web app data bindings and mobile app data bindings is one of lifecycle. Your value needs to be read once when the page (or XHR) is rendered, and stored once when the user posts the changes. This is so simple that you can use straightforward accessor methods for it. It also happens at a time when loading new content is happening, so any timing constraints are likely to come from elsewhere.

You can also do it in what, because I’m that old, I’ll call rich client applications, like Xamarin.Forms mobile apps or Cocoa Bindings desktop apps. Here, anything could be happening at any time: a worker thread could be updating the model, and the user could interact with the UI, all at the same time, potentially multiple times while a UI element is live. So you can’t just wait until the Submit button is pressed to update everything, you need to track and reflect updates when they happen.

Given a dynamic language like Objective-C, you can say “bind this thing to that thing with these options” and the binding library can rewrite your accessors for this thing and that thing to update the other when changes happen, and avoid circular updates. You can’t do that in C# because apparently more typing is easier to reason about, so you end up replicating the below pattern rather a lot.

public class MyThingViewModel : INotifyPropertyChanged
{
  public event PropertyChangedEventHandler PropertyChanged;
  // ...
  private string _value;
  public string Value
  {
    get => _value;
    set
    {
      _value = value;
      PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Value)));
    }
  }
}

And when I say “rather a lot”, I mean that in this one app the boilerplate appears at least 126 times. Even that undercounts, because despite being public, the PropertyChanged event can only be invoked by instances of the declaring class, so if a subclass adds any properties or any change points, you’re going to write protected helper methods to be able to invoke the event from the subclass.
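That helper ends up looking something like this. A minimal sketch: the class and member names here are my own invention for illustration, not from the app in question.

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Subclasses can't invoke PropertyChanged directly, so expose a
    // protected raiser. [CallerMemberName] supplies the property name
    // automatically when this is called from a setter.
    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));

    // Convenience: assign the backing field and notify only on change.
    protected bool SetProperty<T>(ref T field, T value,
        [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return false;
        field = value;
        OnPropertyChanged(propertyName);
        return true;
    }
}
```

With a base class like that, each property shrinks to a one-line setter, which takes some of the sting out of the 126 repetitions.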

Let’s pivot to investigating another question: why is Cocoa Bindings a desktop-only thing from Apple? I’ve encountered two problems in using it on Xamarin: thread confinement and performance. Thread confinement is a problem anywhere but the performance things are more sensitive on mobile, particularly on 2007-era mobile when this decision was made, and I can imagine a choice was made between “give developers the tools to identify and fix this issue” and “don’t give developers the chance to encounter this issue” back when UIKit was designed. Neither X.F nor UIKit is wrong in their particular choice, they’ve just chosen differently.

UI updates have to happen on the UI thread, probably because UIKit is Cocoa, Cocoa is AppKit, and AppKit ran on an OS that didn’t give you an easy way to do multiple threads per task. But this has to happen on Android too. And also performance. Anyway, theoretically any of those 126 invocations of PropertyChanged that could be bound to a view (so all of them, because separation of concerns) should be MainThread.BeginInvokeOnMainThread(() => {PropertyChanged?.Invoke(...)}); because what if the value is updated in an async method or a Task. Otherwise, something between a crash and unexpected behaviour will happen.
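As a sketch of what that looks like in practice (the view model and the Progress property are invented for illustration; MainThread comes from Xamarin.Essentials):

```csharp
using System.ComponentModel;
using Xamarin.Essentials;

public class DownloadViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private double _progress;
    public double Progress
    {
        get => _progress;
        set
        {
            _progress = value;
            // The setter may run on a worker thread (e.g. inside a Task),
            // but bound views are thread-confined, so marshal the
            // notification onto the UI thread before raising it.
            MainThread.BeginInvokeOnMainThread(() =>
                PropertyChanged?.Invoke(this,
                    new PropertyChangedEventArgs(nameof(Progress))));
        }
    }
}
```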

The performance problem is this: any change to a property can easily cause an unknown amount of code to run, quite often on the UI thread. For example, my app has a data grid (i.e. spreadsheet-like table view) with a “selection” column containing switches. There’s a “select all” button, and a report of the number of selected objects, outside the grid. Pressing “select all” selects all of the objects. Each one notifies observers that its IsSelected property has changed, which is watched by the list view model to update the selection count, and by the data grid to update the switches. So if there’s one row in the grid, selecting all causes two main-thread UI updates. If there are 500 rows, then 1000 updates need to run on the main thread in response to that one button action.

That can get slow :). Before I understood how to fix this, some UI actions would block the UI for tens of seconds as they computed the update. I asked about this in some forums and was told the answer is “your users shouldn’t have that much data in a mobile app, design an app with less data” which is not that helpful. But luckily the folks over at SyncFusion were much more empathetic, and told me the real solution is to design your views and view models such that you can turn off updates while you’re doing some big change, then turn them back on and recalculate the state at the end.
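The shape of that fix, in an illustrative sketch (the names are mine, not SyncFusion’s API): a flag suppresses the per-row recalculation while the bulk change runs, then one notification fires at the end.

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;

public class RowViewModel { public bool IsSelected { get; set; } }

// A list view model that pauses change notifications during a bulk
// operation, then raises a single update instead of one per row.
public class SelectionListViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public List<RowViewModel> Rows { get; } = new List<RowViewModel>();
    public int SelectedCount { get; private set; }

    private bool _suppress;

    public void SelectAll()
    {
        _suppress = true;            // rows can flip without UI churn
        foreach (var row in Rows)
            row.IsSelected = true;
        _suppress = false;
        Recalculate();               // one notification, not N
    }

    public void Recalculate()
    {
        if (_suppress) return;
        SelectedCount = Rows.Count(r => r.IsSelected);
        PropertyChanged?.Invoke(this,
            new PropertyChangedEventArgs(nameof(SelectedCount)));
    }
}
```

With 500 rows, “select all” then costs one main-thread update rather than a thousand.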

Like I say, it’s likely that someone at Apple already knew this from the Cocoa Bindings times and decided “here’s a great technology, and here’s how to turn it off because it will get in your way” wasn’t a cool story.

April 15, 2021

April 13, 2021

April 12, 2021

A statement from our CEO and Founder, Bruc:

At Facebruce, we strongly disapprove of the recent data leak of 50 million account details. There’s nothing more important to us than your data. Really, nothing. Have you any idea of how much we could have charged people for the information about you that is now out there, available for free, on Torrent sites and on Russian servers?

We had a deal almost signed to show messages to all people who fast during Ramadan, saying “Want some free money? Just send us your home address!”, paid for by “Patriots for the Second Amendment and Jesus”. Of course, it isn’t the money that drives us, it’s that Facebruce is facilitating community by introducing two groups. At Facebruce, we love spreading love and connection, so we need to raise a little money to run the service.

So, please, trust us with your data, and click ‘Like’ to keep our engagement figures riding as high as our share price!

Next on feed: LGBT+ folks! Send us your address to get a free Rainbow Pride t-shirt! (sponsored by Westboro Baptists)

April 07, 2021

When not confined to my house during a pandemic I am usually out and about. This can be anything from just wandering about with friends, to galleries/museums (V&A and Hayward are my faves). I am lucky in my job that I get to see a lot of theatre and my favourite thing to do on […]

April 06, 2021

April 01, 2021

I’d like to start by recapping the three distinct categories of interest in software freedom. This is definitely my categorisation, though only the third is novel and the first two have long histories of common recognition so this is hardly Humpty-Dumptyism on my part.

Free Software
The extension of freedoms of expression and engagement into the digital space. Free Software, sometimes “Libre Software” because of the confusion over the multiple meanings of the word “Free”, is based on the ideas that a computer is property like any other artefact and that working with, playing with, and socialising via computers are personal pursuits like any other pursuits, and that the freedom from external interference with those enjoyments should be the same as in non-computer interests.
Open Source
The rephrasing of the ideas of Free Software to improve acceptance in (particularly American) business circles. Open Source as described is almost identical to the Debian project’s ideas of Free Software, but with the words “Open Source” instead of “Free Software” and the words “Debian software component” removed. The first reason for the rename is that Freedom implies either zero cost, which mid-1990s American business didn’t like, or social good, same. The second reason is that mid-1990s American businesses had come around to ideas of interoperability under the banner Open Systems, and Open Source sounds sort of like that a bit.
Open Sores
The co-opting of the technical aspects of Open Source (or, nearly equivalently, Free Software) without any of the freedom benefits, typically with the goal of providing zero-cost software development and associated professional services to for-profit companies. When a company CTO says “we love open source”, they typically mean that they love open sores: that they love how skilled developers from across the world will gladly sign CLAs transferring rights to exploit their creations to the company in return for a lighter green square on the proprietary software-as-a-service platform Github.

This is all pre-amble to a discussion of #uninstallman, the internet pressure mob removing the leadership of the Free Software Foundation over objectionable statements made by its founder, former president, and recently-surprise-reinstated board member, Richard M. Stallman (rms).

Let’s start with the obvious: the Free Software Foundation has not demonstrated good leadership over this matter. Clearly rms’s statements have distracted the conversation away from software freedom, and the FSF have not taken enough steps with enough publicity to resolve this issue and to get people talking about software freedom again[*]. The FSF has not even given clear enough separation between their policy and rms’s personal views for it to be obvious that anyone else on their board has any views, or control over policy.

It is right that the FSF take a critical look at their management, and ask whether the people who are leading the Foundation are the best people to promote the idea of software freedom.

Unfortunately, we are now at the point where whatever the outcome, software freedom has lost and Open Sores will fill the ideological vacuum. Because if software freedom is about the extension of existing freedoms into the online space, and the baying mob are calling for the blood of someone who said a thing questioning the definitions of words related to the actions of someone who was associated with someone who did known bad things, and for the blood of any other people who are associated with that person, it is easy to argue that the whole software freedom movement is hypocritical. You claim to support freedom of expression, and yet you actually deny the right for anyone to express views that disagree with your own? Where’s the unity of purpose?

Bradley Kuhn, policy fellow at the Software Freedom Conservancy, has talked about the damaging impact of rms’s personal views on the software freedom movement back in 2019, when this controversy was fresh; and in 2018 which is arguably where it started (to become public). He has also talked about the need to maintain a big tent; that being principled on your core issue gives you the legitimacy to take principled stands on other issues.

Taking an authoritarian, only-say-what-I-permit line on expression doesn’t leave any legitimacy to support freedom of expression in the software field. Unfortunately, if the FSF and more generally the software freedom community is unable to maintain principle on this argument, it will lose the right to be taken seriously on matters of software freedom. And then, the organisations who take the Open Sores line on software licensing will step up to fill the leadership vacuum. The business interest “Foundations” who think that software freedom means the freedom for big businesses to control the revenue stream while everybody gets to build their products for free. And then it may be decades before there is another software freedom movement with any legitimacy, and they may have to start from scratch.

[*] Arguably the software freedom movement was already in a difficult state, because the freedoms proposed were only really adopted by a small community with a technical interest in the details relating to those freedoms: a few tens of thousands of technologists, some intellectual property lawyers, and a small number of others. But that’s more about the difficulty of developing a mass movement and of translating the theory into activism, and doesn’t necessarily reflect badly on the characters or actions of any of the leaders in the movement.

Reading List 274 by Bruce Lawson (@brucel)

March 26, 2021

March 25, 2021

One person per task by Graham Lee

One of the least teamy things I see with software teams is limiting the maximum and minimum number of items of work in process – tasks, stories, whatever you call them – both to the number of developers on the team. For some reason it’s always the number of devs, never the number of product owners, customers, QAs, or deployment people. Got four devs? Then there should be four tasks in process!

This approach is surprisingly backward given that we’re all supposed to have come so far as the leaders of the Agile Fourth Industrial Revolution 2.0 that we’ve internalised and transcended Goldratt’s theory of constraints. It’s the last holdout of the old Taylorian school of management. Everybody is working full-tilt, so if anyone runs into any trouble then everybody else is too busy to help them. If what they’re doing is upstream of anybody else’s work, then they are going to be blocked too, but rather than fix the blockage they’ll pull another task because one-person-per-task at all times!

So much is this at the core of software team thinking that when I’ve suggested in informal discussions that maybe we should do something else, people are confused. Are you saying that we should have a developer who isn’t assigned to a task, just in case? What does that person do the rest of the time, play Minesweeper? As if the only alternative to “one person per task” is “one person per task but perhaps there is another person”.

One person per task has the “nobody can help” disadvantage already mentioned. In fact, people are disincentivised from helping, because their task has their name on it and your task has your name on it. Did issue #1348 miss the release train? Bob is such a drag on the team, at least Karen managed to ace her ticket. Maybe we should reevaluate who leads on the next project.

You’ll see other effects of one person per task. Code reviews fall into one of two categories: “LGTM” and “axe to grind”. Only the people who are really invested in making sure that nobody ever misses off a const keyword, or uses function() where => would suffice, will take the time to commit to code reviews. Everybody else will skim-read, look at the CI output to see if the tests pass, and get back to their own task with their own name on it as quickly as possible. This loses both the review benefit of code review, and the shared-understanding-of-the-code benefit too. Everyone only really understands the features they worked on individually, there just happens to be a big ball of those features in one repo.

Code quality suffers. Each individual is too busy chopping down trees to sharpen the saw, because there’s always a next task for each developer to do.

Everybody else has been under-resourced. We need one developer per task, because what I can see is features in a UI and the only people shovelling features are the devs. QA is a cost centre, so if we can get one QA (or at most, one per team) then let’s do that. Same with ops. Infosec, coaching, UX, and other nice-to-haves can be consultants as needed. Weird how our devs are ticking off tasks like billy-O, and nothing’s getting through to release!

The alternative to “one person per task” is not “one person per task and some change”. It’s “one objective per team”. Set the goal, and let people work out what to do about it and how everyone contributes. As they used to say, “give them the environment and support they need, and trust them to get the job done”.

March 24, 2021

March 21, 2021

A statement from our CEO and Founder, Bruc.

“Look, I’m fed up at people complaining about Facebruce allegedly “facilitating” genocide. Since we began, we’ve always been about connecting people–initially some nerds to chicks we rated as hot, but now it’s about connecting everybody. We’d like to teach the world to sing, in perfect harmony.

Unfortunately, not everyone wants to sing in perfect harmony. Some people, we are shocked to learn, aren’t actually very nice people. How were we at Facebruce to know what would happen when our algorithms repeatedly recommended members of The Hutu Machete Enthusiasts Club also join the Death To Tutsi Cockroaches group?

We’re not in the content policing business. There’s simply too much of it. And anyway, we’re just a platform. We already have thousands of servers running 24/7 to weed out pictures of nipples (women’s nipples, to be precise) so your Auntie Martha doesn’t clutch her pearls, because offending people in high ARPU markets leads to a drop in engagement.

So there was literally no way for us to know that the Death To Tutsi Cockroaches group was not simply a pest control company. I even went so far as to attempt to verify this, by walking around the HQ trying to find an African person to ask whether cockroaches are a problem, but there was no-one matching that description in the boardroom.

Facebruce is about building communities. We are very active in the GraphQHell community and the Reactionary community. In fact, only last week, we offered free afterhours use of a meeting room in our fifty storey gold-plated HQ to host a meeting of GraphQHell Engineers Against Killing Rohingyas, and even sponsored $100 of pizza for attendees. This shows that we’re taking real action and putting real resources into counteracting Hate Speech on the Facebruce platform.

So that’s cleared up then. Be sure to press “Like!” to demonstrate engagement.”

Next on Timeline: Why Covid is a hoax – evidence from the Protocols of the Elders of Zion!

March 19, 2021

March 18, 2021

March 16, 2021

March 12, 2021

March 08, 2021

March 07, 2021

The Vizzini Effect by Graham Lee

A bunch of the topics I wanted to discuss all turned out to have a common basis, so I’m going to write the post about the commonality using a couple of examples from the specific topics for illumination. Maybe I’ll come back to those topics in more depth later, each one is itself interesting and valuable.

The common thing is the Vizzini Effect, named after the Sicilian in The Princess Bride. In the movie/book, Vizzini often describes events as “inconceivable,” to which Inigo Montoya replies “you keep using that word. I do not think it means what you think it means”. The Vizzini Effect in software engineering (and undoubtedly in other fields of endeavour too, I doubt we’re special) is when the same thing happens: a word or phrase drifts in meaning, so that two different people, or two different groups of people, can use it to mean different things without either seeming malicious or disingenuous. In the examples I’m going to explore here, those groups are separated by time rather than space. But unlike with Vizzini, it’s not that one person is using a word in a weird way, but that collectively software engineers seem to have decided it takes a different meaning.

Examples of Vizzini Phrases

Object-Oriented Programming

OOP is perhaps the ur-example here, and definitely the one with the most obvious dog-whistle. “I invented the term Object-Oriented Programming,” says Alan Kay, “and I can tell you I did not have C++ in mind”. To Alan and that early group of Smalltalk programmers at Xerox, ParcPlace, Tektronix etc., object-oriented programming was extreme late binding and decoupling through message sending. These days, it is often programming in any language that has a “class” keyword, or a straw man meaning any form of mutable state.


Agile

In the discussion do you think Agile/Scrum is beneficial for software delivery?, the first answer (at the time of writing) says “The whole thing was designed to give non-technical people more power over the ones who spent a lifetime honing their craftmanship.” The question asks about the surprising rituals and the extra layers of bureaucracy. That’s the opposite of the impression I have, in which pre-existing software engineering methods tried to minimise or even automate away the programmer contributions. The lightweight methods, promoted by (among others) the agile alliance, sought to build projects around motivated individuals, giving them the support they needed but leaving them alone to get the job done. The alliance members thought that the best architectures and designs were created by self-organising teams: a far cry from imposing methodologies to remove power from technical contributors.

Design Patterns

Design Patterns in software used to refer to the Christopher Alexander idea of identifying repeating problems in architecture and building a shared language that succinctly communicates understanding of the problem, solutions selected, and trade-offs in those solutions. These days it seems to mean any of the examples of design patterns in the Gang of Four book on early-OOP implementation patterns, and no others.

Free Software

The Free Software Foundation and the GNU project were created with the goal of extending desirable human rights and liberties to the world of computing. These days it seems to mean “open source, but said by a person who uses the word actually a lot”.

Open Source

The Open Source Initiative was created to generalise the Debian Free Software Guidelines out from the Debian project to general business rules for software, based on the prior successful Open Systems movement and the liberties identified in the Free Software Movement. These days it pretty much means making the components needed to build SaaS subscription products available at zero cost.

Software Engineering

In 1967, software engineering was a provocative term, meant to imply that the art of creating software would be somewhat improved if it had a socio-scientific basis. These days software engineering is two words anyone who gets or wants to get paid for programming uses on their CV/résumé.

What happened?

My impression is that three things changed, and that two of them are almost the same. The origins of all of these phrases are in particular times in history, made by particular people, talking in specific contexts. Time has passed, which has changed the context (or at least the relevance of the original context); the people who said the things have changed; and so many new people have entered the field that a majority of practitioners no longer know who the original people were, nor have experienced the context in which they spoke.

It’s entirely possible, for example, that the agile methodologies which were lightweight reactions to software engineering around 2000 are oppressively bureaucratic in 2021. We expect to be able to release software multiple times per day now, using analytics and telemetry to understand in real time how it’s being used. The agile folks wanted us to release up to every two weeks and to talk to someone before doing it, ugh!

Some people talk about an Agile-Indu$trial Complex, suggesting that there’s some shady cabal of consultants and certification bodies conspiring to make us all agile so they can profit from it. Again, maybe true. Others talk of companies who “talk the talk without walking the walk”: they got the consultant in, decided which parts of this whole Agile thing sounded nice or relevant, and adopted those things, then trumpeted their “fully agile workflow” on their websites.

And, of course, there’s the telephone game. Even those of us who heard about it from the horse’s mouth—maybe worked on an XP team, or read “Free Software, Free Society”—will have learned a slightly different thing than what the originators were trying to teach us, or thought they were teaching us. When telling others, we’ll have misremembered, and adapted, and extemporised. And so will the people who learned from us, and so on.

The telephone game is subject not only to slow evolution, but to a Byzantine Generals attack. If someone wants, for example, OOP to die so that their preferred paradigm gets used instead, they can inject a false message into the call graph. This is where Vizzini meets Lewis Carroll’s Humpty Dumpty: I keep using this word, and I do not think it means what you think it means.

Take into account the fact that most people who work in software now didn’t work in software five years ago, and that this was true five years ago and so on, and you realise that the vast majority of people will have learned about any “classic” idea in software from a telephone conversation.

What to do?

Well, the first question to answer is, does anything particularly need to be done? Maybe these are ideas that have had their time, and can just fizzle out. But evidently for all of the examples above enough people want the ideas to continue that they (well, we obviously) keep trying to dredge the original discussions out of the history books and put them back into contemporary discourse. To do this, they need recontextualising. For example, nobody cares that Richard Stallman couldn’t get a printer driver in the 1970s, but maybe they do care that there are things that they aren’t allowed to do with the smartphones they think they paid for. That’s how ideas of software freedom could be reintroduced.

Maybe the original phrase has become toxic and needs to be retired, without the original meaning being lost too. That is, whether you like it or not, the reason that “Open Source” was created as a term: to remove deliberate and accidental confusion over the word “freedom” in a business context, and to provide familiarity to people who had already adopted Open Systems ideas. It’s why Devops exists: to tell the stories of Agile again, but to those who didn’t listen the first time, or who listened and heard the wrong thing.

The telephone game can’t be avoided. You have to keep telling the stories if you want new people to hear them, and that means accepting alterations in their re-telling. And you need there to be more than one raconteur, even if they’re telling slightly different versions of the story. Don’t count messiahs, count prophets. Only don’t count prophets, count gospels. Only don’t count gospels, count churches. Only don’t count churches, count preachers.

You’re never going to get every programmer or software professional in the world to agree with your interpretation of some phrase. But you can use contextually-relevant stories to tell people things that might help them make better software, and you can follow up “I do not think that means what you think it means” with a conversation in which you both learn something. Maybe it’s your understanding that’s wrong?



I do a lot of writing, podcasting, presenting, and streaming about how to make software. Most of it has been free, still is free, and I don’t intend to change that. It’d be great, if you’re able and willing, for you to support that free work by becoming a patron. No obligation!

March 04, 2021

February 28, 2021

February 2021 by James Nutt (@zerosumjames)

Why isn’t my site serving assets via HTTP/2?

Status: unresolved

The front-end chapter of The Ruby on Rails Performance Apocrypha urges us to enable HTTP/2 in our Cloudfront distributions. It turns out HTTP/2 has been enabled by default on Cloudfront distributions since around September 2016. Which is neat; we should be taking advantage of it already. So I check that it’s ticked in our distribution (it is) and then load some assets via Firefox. And it says in the header that they’re being served with HTTP/1.1. What the heck. Chrome seems to be happily fetching assets from our Cloudfront distribution over HTTP/2. Why isn’t Firefox?

So I check the CanIUse page on HTTP/2, and it suggests that many browsers (Safari being a prominent exception) only support HTTP/2 when the server is capable of negotiating it via ALPN.

It’s not that, since Chrome is working. And it turns out, so is Firefox, if I use an incognito window.
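(An aside from me, not the original post: if you want to poke at ALPN negotiation yourself, Ruby’s openssl standard library can show you which protocol a server picks. The hostname below is just an illustrative example, and the check needs network access.)

```ruby
require "socket"
require "openssl"

# Offer both h2 and http/1.1 over ALPN and see which one the server picks.
# (Hostname is an arbitrary example, not the distribution from the post.)
host = "www.example.com"

ctx = OpenSSL::SSL::SSLContext.new
ctx.alpn_protocols = ["h2", "http/1.1"]

sock = TCPSocket.new(host, 443)
ssl = OpenSSL::SSL::SSLSocket.new(sock, ctx)
ssl.hostname = host # SNI, which CDNs such as Cloudfront rely on
ssl.connect

puts ssl.alpn_protocol # "h2" if the server negotiated HTTP/2
ssl.close
sock.close
```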

Not storing secrets in your ENV


Things I Read

February 22, 2021

From Idea to Fruition by Daniel Hollands (@limeblast)

With my review of the Ooznest WorkBee CNC being published in Hackspace Magazine last week, I figured this would be a good time to run through the general process I follow when designing for, and running jobs on, the CNC.

I have to admit, I’m not entirely sure where the idea for the eventual design came from; I just know I wanted something personal, which could act as a portfolio piece for Monumental Me.

It didn’t start life as a fully formed concept, rather a vague idea of two sets of footprints to represent Lucy and I.

I’m no artist, but I’m slowly getting better at pulling third party assets into (hopefully) cohesive designs. More often than not I find myself browsing VectorStock, which I’ve come to rely on as an indispensable resource, and a further source of inspiration.

This is where the idea of the cat prints going off on their own came from, as the asset already existed, and perfectly represented the head-strong nature of Apricat – so all I needed to do was import the asset and incorporate it into the design.

Affinity Designer is my tool of choice for producing the bulk of my designs. I’m barely scratching the surface of what it can do, but each time I boot it up, I learn something new.

For this particular design, I had an asset which featured something like 10 different pairs of feet, from which I chose the ones I liked best, being sure to choose different shaped feet for each of us, and then set about positioning each footprint individually.

The design gets exported from Affinity Designer as an SVG, which I import into Vectric VCarve. This powerful yet simple-to-use tool makes it easy to calculate the gcode toolpaths required by the CNC, while providing an accurate 3D representation of what the final design will look like.

It’s not a cheap piece of software, but it does a hell of a lot of heavy lifting, meaning that even a complete novice such as myself can produce excellent results.

Back in the real world, after applying a layer of vinyl, I’ll secure the stock I’m using to the spoilerboard on the CNC using some screws (I need a better clamping method, but this will do for now), then will run the job via my workshop laptop.

Ear and eye protection are of paramount importance, as is my attention to the job as it runs, to ensure nothing goes wrong. On more than one occasion I’ve had a small miscalculation cause the job to go askew, forcing me to perform an emergency stop. Thankfully, these are becoming more and more rare, as I learn from my mistakes.

This is where the layer of vinyl applied in the previous step helps by providing a mask over any parts of the display we don’t want paint on.

A healthy coat of sanding sealer is applied to the surface to help prevent the paint bleeding into the grain, then I let loose with the spray paint, being sure to apply it from all angles to ensure the whole of the carved section is covered.

Once the paint is dry and the vinyl removed, I do a light sanding. I don’t want to be too aggressive at this point or I’ll sand away some of the more intricate details of the carving.

This is followed by a coat of Danish Oil, which helps the natural beauty of the wood to show through, while providing a small amount of protection. You can apply up to three or four coats, but I typically leave it at one for anything which is designed for display purposes, rather than active use or handling.

No display is complete without the ability to hang it. I’ve tried using Command strips for mounting displays on the wall in the past, but the Danish oil stopped them from sticking very well, so I’ve started to use these sawtooth picture frame hangers.

Bang a nail in the wall, hang the display on it, step back, and enjoy your work.

I used to think that I was too anxious and angry. This led me to believe that I should work out how to get rid of it.

February 16, 2021

On Perfectionism by Ben Paddock (@_pads)

Perfectionism is the unrelenting pursuit of reaching the highest possible standards, and then still being left wanting more.

February 14, 2021

Cap in Hand by Graham Lee

You’re probably aware that between this blog, De Programmatica Ipsum, and various books, I write a lot about software engineering and software engineers.

You may know that I also present a podcast on software engineering topics, and co-host two live streams on Amiga programming and Objective-C programming.

I do all of this because I want to. I want to engage in conversations about software engineering; I want to help my colleagues and peers; I want to pass on my experience to others. Of course, this all takes rather a lot of time, and a not-insignificant amount of money. Mostly in hosting fees, but also a surprising chunk on library memberships, purchase of out-of-print materials on software engineering, and event attendance. More than my academic (i.e. not-for-profit) salary was designed to withstand. None of these projects is ad-supported, and that’s not about to change.

I’ve launched a Patreon page, where if you enjoy anything I write, say, or show, you can drop me a little bit of cash to say thanks. There’s no obligation: nothing I currently make freely available is going behind a paywall, and I’m not planning any “subscriber-only content” in the future. All I’m saying is if you’ve enjoyed what I’ve been producing, and having my voice in the software engineering fray, here’s another way in which you can say thank you.

February 05, 2021

Another day, another developer explaining that they don’t follow some popular practice. And their reason? Nothing more than because other people do the thing. “Best practices don’t exist,” they airily intone. “They’re really mediocre practices”.

In one sense, they’re correct. Best practices need to be evidence-based, and there’s precious little evidence in software engineering. In a regulated profession, you could avoid using accepted best practice, but if something went wrong and you ended up on the receiving end of a malpractice suit, you would lose.

So best practice as an argument in software engineering has two weaknesses: the first is that there’s no basis in evaluation of practice; and the second is that being a monetised hobby rather than a profession there’s no incentive to discover and adopt best practice anyway.

But those arguments mean that best practices are indistinguishable from alternative practices, not inherently worse. If a programmer discards a practice because they claim it’s considered best practice, they’re really just stamping their foot and shouting “I don’t wanna!”

They’re rejecting the remaining evidence in favour of the practice—that it’s survived scrutiny by a large cohort of their peers—in favour of making their monetised hobby look more like their headcanonical version of the hobby. “We are uncovering better ways of making software by doing it and by helping others to do it” be damned: I want to use this thing I read a substack post about yesterday!

Dig deeper, and you’ll find only platitudinous justification based on thought-terminating cliche: I’ve already covered “Reasoning about code”, and maybe some time I’ll cover “Right tool for the job”. This time, let’s look at “things won’t advance unless some of us try new ways of doing it”.

People tried new ways of making new steam engines all the time, during the industrial revolution. People tried new ways of making chimneys all the time, during the 15th and 16th centuries. A lot of factories and trains exploded, and a lot of buildings burnt down. If you live in a house with a chimney now, or you have ever taken a train, it’s significantly less likely to have self-immolated than at earlier times in history.

It’s not, for the most part, due to misunderstood lone geniuses rejecting what everybody else was doing, but a small amount of incremental development and a large amount of theoretical advance. It’s no coincidence that the field of thermodynamics advanced leaps and bounds during the steam age. Brad Cox makes this point about software too, in almost everything he wrote on the topic: you don’t get as much advance from random walks in the cottage industry as you do from standardisation, mass production, the division of labour, and interchangeable parts that can be evaluated on merit with reference to a strong theoretical underpinning.

Of course, the “reason about code” crowd try to stop this from happening, because if that advance happened then the code-reasoning would quickly disappear to be replaced with the problem-domain-reasoning that’s significantly harder and less of a hobby. Hence the sabotage of best practice: let’s put a stop to this before anybody realises it’s more than sufficient to the task at hand.

Alan Kay once referred to a LISP evaluator written in LISP as “the Maxwell’s Equations of software”. But what software needs before a James Clerk Maxwell are the Gibbs, Boltzmanns, Joules and Lavoisiers, the people who can stop us from blowing things up in production.

February 03, 2021

[objc retain] stream by Graham Lee

Starting next week: [objc retain]; in which Steven Baker and I live-code Objective-C on a modern free software platform. Wednesday, February 10th, 1900 UTC. More info at

January 31, 2021

January 2021 by James Nutt (@zerosumjames)

What are default and bundled gems in Ruby?

With Ruby 3.0, much of the standard library is becoming default gems.

A Ruby installation has three parts. The standard library, default gems, and bundled gems.

The standard library is the core language and utilities. Default gems are gems that cannot be removed, but need not be required. Bundled gems are gems that come along with a Ruby installation, but must be explicitly required as a dependency and can be removed.

There is an ongoing effort to extract parts of the Ruby standard library to default gems. By keeping the standard library itself lean, you free its component parts from being unnecessarily tied to a larger development and release cycle, as well as making bits easier to remove or deprecate as time goes on.
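As a quick illustration (mine, not part of the original post): RubyGems can tell you whether an installed library is a default gem. The "json" library, for instance, has shipped as a default gem for several releases.

```ruby
require "rubygems"

# Look up the installed spec for a library and ask whether it ships
# with Ruby as a default gem (present by default, can't be uninstalled).
spec = Gem::Specification.find_by_name("json")
puts spec.default_gem? # true on a stock Ruby installation
```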

Things I Read

In Progress

  • Continuing to read A Promised Land by Barack Obama. Maybe 70% of the way through this.

January 30, 2021

forty-five by Stuart Langridge (@sil)

It’s my birthday!

This year, in the midst of a coronavirus lockdown, it’s been something of a quiet one. I got lots of nice best wishes from a bunch of people, which is terribly pleasing, and I had a nice conversation with the family over zoom. Plus, a really good Chinese takeaway delivered as a surprise from my mum and dad, and I suspect that if there were a video of them signing up for a Deliveroo account to do so it would probably be in the running for the Best Comedy BAFTA award.

Also I spent some of the afternoon doing the present from my daughter, which is the Enigmagram, an envelope of puzzles which unlock a secret message (which is how I discovered it was from my daughter). I like this sort of thing a lot; I’ve bought a couple of the Mysterious Package Company‘s experiences as presents and they’re great too. Must be a fun job to make these things; it’s like an ARG or something, which I’d also love to run at some point if I had loads of time.

I’ve just looked back at last year’s birthday post, and I should note that Gaby has excelled herself again with birthday card envelope drawing this year, but nothing will ever, ever exceed the amazing genius that is the bookshelf portal that she and Andy got me for Christmas. It is amazing. Go and watch the video immediately.

Time for bed. I have an electric blanket now, which I was mocked for, but hey; I’m allowed. It’s really cosy and warm. Shut up.

January 28, 2021

Another day, another post telling me to do something, or not do something, or adopt some technology, or not adopt some technology, or whatever it is that they want me to do, because it makes it easier to “reason about the code”.

It’s a scam.

More precisely, it’s a thought-terminating cliche. Ironic, as the phrase “reason about” is used as a highfalutin synonym for “think about”. The idea is that there’s nowhere to go from here. I want to do things one way, some random on medium dot com wants me to do things another way, their way makes it easier to reason about the code, therefore that’s the better approach.

It’s a scam.

Let’s start with the fact that people don’t think—sorry, reason—about things the same way. If we did, then there’d be little point to things like film review sites, code style guides, or democracy. We don’t know precisely what influences different people to think in different ways about different things, but we have some ideas. Some of the ideas just raise other questions: like if you say “it’s a cultural difference” then we have to ask “well, why is it normal for all of the people in that culture to think this way, and all of the people in this culture to think that way?”

This difference between modes of thought arises in computing. We know, for example, that you can basically use any programming language for basically any purpose, because back in the days when there were intellectual giants in computering they demonstrated that all of these languages are interchangeable. They did so before we’d designed the languages. So choice of programming language is arbitrary, unless motivated by external circumstances like which vendor your CTO plays squash with or whether you are by nature populist or contrarian.

Such differences arise elsewhere than in choice of language. Comprehension of paradigms, for example: the Smalltalk folks noticed it was easier to teach object-oriented programming to children than to professional programmers, because the professional programmers already had mental toolkits for comprehending programming that didn’t integrate with the object model. It’s easier for them to “reason about” imperative code than objects.

OK, so when someone says that something makes it easier to “reason about” the code, what they mean is that they, personally, find it easier to think about code in the presence of this property. I mean, assuming they do, and are not disingenuously proposing a suggestion that you do something when they’ve run out of reasons you should do it but still think it’d be a good idea. But wait.

It’s a scam.

Code is a particular representation of, at best, yesterday’s understanding of the problem you’re trying to solve. “Reasoning about code” is by necessity accidental complexity: it’s reflecting on and trying to understand a historical solution of the problem as you once thought it was. That’s effort that could better be focussed on checking whether your understanding of the problem is indeed correct, or needs updating. Or on capturing a solution to an up-to-the-minute model of the problem in executable form.

This points to a need for code to be deletable way faster than it needs to be thought about.

Reasoning about code is a scam.

Today I was vaccinated for Covid.

It occurred to me that people might have a question or two about the process, what it’s like, and what happens, and I think that’s reasonable.

A roadsign in Birmingham reading 'Stay Home, Essential Travel Only, Covid'

Hang on, why did you get vaccinated? You’re not 70, you’re, what, thirty-six or something?
[bad Southern damsel accent] “Why, Mr Vorce in my Haird, I do declare, I’ve come over all a-flutter!” [bats eyelashes].
Nah, it’s ‘cos I’m considered clinically extremely vulnerable.
What’s vulnerable?
According to Google if you’re strong and vulnerable then it’s recommended to open with a no-Trump bid. That sounds like a good thing to me.
What’s it like?
It’s like the flu jab.
Like what?
Fair enough. If you’re reading this to learn about vaccination around the time that I’m posting it, January 2021, then maybe you’re someone like me who has had a bunch of injections and vaccinations, such as the flu jab every year. But if you’re reading it later, you may not be, and the idea of being vaccinated against something for the first time since school might be vaguely worrying to you because it’s an unknown sort of experience. This is fair. So, here’s the short form of what happens: you walk into a room and roll up your sleeve, they give you an injection into your arm that you hardly even feel, you sit in the foyer for 15 minutes, and then you go home. It’s like buying a kebab, except with fewer sheep eyebrows.
That’s a pretty short form. Some more detail than that would be nice.
True. OK, well, the first tip I can give you is: don’t put your big heavy coat and a scarf on in the mistaken belief that it must be cold because it’s January and then walk for 45 minutes to get to the hospital, because you’ll be boiling hot when you get there and aggravated.
Just the coat was the problem?
Well, it also turns out that if you stay indoors “shielding” for a year and then go on a long walk, you may discover that your formerly-Olympic-level of fitness has decayed somewhat. Might have to do a sit-up or two.
Or two hundred and two maybe, fatso.
Shut up, Imaginary Voice in my Head.
What next?
I was told to go to the hospital for the vaccination. Other people may be told to go to their GP’s office instead; it seems to vary quite a lot depending on where you live and what’s most accessible to you, and it’s possible that I am very lucky that I was sent to City Hospital, somewhere within walking distance. I’ve heard of others being sent to hospitals twenty miles away, which would have been a disaster for me because I don’t have a car. So, either Sandwell and West Birmingham NHS Trust are particularly good, or others are particularly bad, or I happened to roll double-sixes this time, not sure which.
How were you told?
I got a phone call, yesterday, from my GP. They asked when I was available, and suggested 12.55pm today, less than twenty-four hours later; I said I was available; that was it.
And at the hospital?
Finding the specific building was annoying. SWBH: put up some signs or something, will you? I mean, for goodness sake, I’ve been to the hospital about twenty times and I still had no idea where it was.
No more griping!
I haven’t got anything else to gripe about. It was a marvellously smooth process, once I found the building. This is what happened:
I walked up to the door at 12.45 for my appointment at 12.55. A masked woman asked for my name and appointment time; I gave them; she checked on a list and directed me inside, where I was met by a masked man. He gave me some hand sanitiser (I like hospital hand sanitiser. Seems better than the stuff I have) and directed me to a sort of booth thing, behind which was sat another man.
The booth seemed like a quite temporary thing; a rectangular box, like a ticket booth but probably made of thick cardboard, and with a transparent plastic screen dividing me from him; one of two or three of them in a line, I think. He asked me to confirm my details — name, appointment time, address — and then asked four or five very basic questions such as “do you have the symptoms of coronavirus?” Then he handed me two leaflets and directed me to two women, one of whom led me directly to an examination room in which were a man and a woman.
The man confirmed my details again, and asked if I’d been injected with anything else in the previous month; the woman asked me to roll up my sleeve (I ended up half taking my shirt off because it’s long-sleeved and rolling it all the way up to the shoulder is hard), and then gave me the injection in the upper part of my arm. Took about two seconds, and I hardly felt anything.
Wait, that’s it?
Yup. That whole process, from walking up to the door to being done, took maybe ten minutes maximum.
And then you left?
Not quite: they ask you to sit in the waiting room for fifteen minutes before leaving, just in case you have some sort of a reaction to the injection. People who have had the flu jab will recognise that they do the same thing there, too. This gave me a chance to read the two leaflets, both of which were fairly boring but important descriptions of what the vaccine is, what vaccines are in general, and any risks.
They also stuck a big label on my shirt showing when my fifteen minutes was up, which I think is a really good idea — they don’t do this for the flu jab, in my experience, but it’s a good idea for a vaccination where you have to put basically everybody through the process. I also got a little card which I’m meant to show at the second vaccination, which is now safely in my wallet and will probably still be there in twenty years unless they take it off me, along with the receipts for everything I’ve bought since about 2007.
So then you left?
Yes. Another of the staff confirmed I’d been there for long enough, asked if I was feeling OK (which I was), and asked if I had any questions. I didn’t, but I did ask if I had to go back to work now or if I could just hang out there for a while longer — I’m sure she’d heard variations on that eighty times already that day, but she laughed anyway, and I like trying to chat to hospital staff as human beings. God alone knows what a dreadful year they’ve had; they deserve courtesy and smiles from me at least.
Were they smiling?
Indeed they were. Even though you can’t tell because of the masks. Everyone I spoke to and everyone there was cheery, upbeat, courteous, and competent, without being dismissive or slick. I can’t speak to other NHS trusts, or other hospitals, or even other days in my hospital, but every time I’ve been there the staff have been helpful and nice and ready to share a joke or a chat or to answer a question, and this time was no exception.
What, you didn’t ask any questions at all? What about whether you’re being microchipped by Bill Gates? And the risks of 5G with the vaccine? And…
No, of course I didn’t. Vaccination is safe, and it’s one of the greatest inventions humanity has ever come up with. The NHS guidance on vaccinations is a nice summary here. If I’d walked in the door and someone in scrubs had said to me, “the best way to make web pages is to require 500KB of JavaScript to load before you put any text on screen”, or “Ubuntu has secret motives to undermine free software”, I would have said to them “no, you’re wrong about that” and argued. But me and the medical profession have an agreement: they don’t tell me how to build software, and I don’t tell them how to save millions of lives. This agreement is working out fine for both of us so far.
What’s that about risks and side-effects, though?
Apparently I may feel tired, or an ache in my arm, for the next couple of days. I’ll keep an eye out. They said that if it aches a bit, taking paracetamol is OK, but I am not a medical professional and you should ask the question yourself when you get vaccinated.
Which vaccine did you have?
Remember the two leaflets? One of them is a generic NHS leaflet about Covid vaccines; the other is specific to the one I had and is from BioNTech, which means it’s the Pfizer one. The little wallet card also says it’s Pfizer. I didn’t ask the staff because I could work it out myself and I’m not going to change anything based on their answer anyway; it’d be pure curiosity on my part. Also, see previous point about how I don’t tell them how to do their jobs. I assume that which vaccine I got was decided at some sort of higher-up area level rather than tailored for me specifically, but hey, maybe I got a specific one chosen for me. Dunno; that’s for medical people to know about and for me to follow.
What now?
The vaccine doesn’t properly kick in for a few weeks, plus you need the second injection to be properly vaccinated. That should be 9-12 weeks from now, I’m told, so I’ll be staying inside just as I was before. Might empty all the litter out of my wallet, too.
But I have more questions!
Well, I’m on twitter, if they’re sensible ones. Conspiracy stuff will just get you blocked and possibly reported with no interaction, so don’t do that. But this has been a delightfully simple process, made very easy by a bunch of people in the NHS who deserve more than having people clap a bit for them and then ignore the problems. So if I can help by answering a question or two to alleviate the load, I’m happy to do that. And thank you to them.
Are you really thirty-six?
Ha! As if. It is my birthday on Saturday, though.
You know that thing you said about bridge bids is nonsense, right?
Ah, a correction: also do not ask me questions about bridge. Please.

January 25, 2021

Ubiquitous computing by Graham Lee

I, along with many others, have written about the influence of Xerox PARC on Apple. The NeXT workstation was a great example of getting an approximation to the Smalltalk concept out using off-the-shelf parts, and Jobs often presaged iCloud with his discussion of NetInfo, NFS, and even the magneto-optical drive. He’d clearly been paying attention to PARC’s Ubiquitous Computing model. And of course the iPad with Siri is what you get if you marry the concept of the DynaBook with a desire to control the entire widget, not ceding that control to some sap whose only claim to fame is that they bought the thing.

Sorry, they licensed the thing.

There are some good signs that Apple are still following the ubicomp playbook, and that’s encouraging because it will make a lot of their products better integrated, and more useful. Particularly, the Apple Watch is clearly the most “me” of any of my things (it’s strapped to my arm, while everything else is potentially on a desk in a different room, stuck to my wall, or in my pocket or bag), so it makes sense that it’s the thing that identifies me to everything else. Unlocking a Mac with my watch is great, and using my watch to tell my TV that I’m the one plugging away at a fitness workout is similarly helpful.

To continue along this route, the bigger screen devices (the “boards”, “pads”, and “tabs” of ubicomp; the TVs, Macs, iPads, and iPhones of Apple’s parlance) need to give up their identities as “mine”. This is tricky for the iPhone, because it’s got an attachment to a phone number and billing account that is certainly someone’s, but in general the idea should be that my watch tells a nearby screen that it’s me using it, and that it should have access to my documents and storage. And, by extension, not to somebody else’s.

A scene. A company is giving a presentation, with a small number of people in the room and more dialled in over FaceTime (work with me, here). It’s time for the CTO to present the architecture, so she uses the Keynote app on her watch to request control of the Apple TV on the wall. It offers a list of her presentations in iCloud, she picks the relevant one by scrolling the digital crown, and now has a slide remote on her wrist, and her slides on the screen.

This works well if the Apple TV isn’t “logged in” to an iCloud account or Apple ID, but instead “borrows” access from the watch. The watch is on my wrist, so it’s the thing that is most definably “mine”, unlike the Apple TV and the FaceTime call, which are “my employer’s”.
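To make the “borrowing” idea concrete, here’s a minimal sketch in Python of how a shared display could get scoped, time-limited access vouched for by a personal device, rather than holding an account of its own. Every name here is hypothetical; nothing in this sketch is a real Apple API, it just models the design idea.

```python
import secrets
import time


class Watch:
    """The personal device: the only thing that holds an identity."""

    def __init__(self, account_id):
        self.account_id = account_id
        self._sessions = {}

    def grant_session(self, display, scope, ttl=600):
        # Issue a short-lived, scoped token. The display never learns
        # the account credentials -- only this revocable token.
        token = secrets.token_hex(16)
        self._sessions[token] = {
            "scope": scope,
            "expires": time.monotonic() + ttl,
        }
        display.begin_session(self, token)
        return token

    def authorize(self, token, scope):
        # The display must ask the watch on every access; the watch
        # stays the source of truth for who is using the screen.
        session = self._sessions.get(token)
        return (session is not None
                and session["scope"] == scope
                and time.monotonic() < session["expires"])

    def revoke(self, token):
        self._sessions.pop(token, None)


class Display:
    """The shared device (TV, board, pad): stateless about identity."""

    def __init__(self):
        self.watch = None
        self.token = None

    def begin_session(self, watch, token):
        self.watch = watch
        self.token = token

    def fetch_presentations(self):
        # Access lasts only as long as the watch vouches for us.
        if self.watch and self.watch.authorize(self.token, "keynote.read"):
            return ["Architecture overview.key"]  # stand-in for iCloud docs
        return []

    def end_session(self):
        # Walking away (or ending the meeting) revokes everything;
        # the display keeps nothing of "mine".
        if self.watch:
            self.watch.revoke(self.token)
        self.watch = self.token = None
```

In the presentation scene above, the CTO’s watch would call `grant_session` on the wall’s Apple TV; when she’s done, `end_session` leaves the screen knowing nothing about her account, which is the whole point.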

January 20, 2021

LIVEstep is a GNUstep desktop on a FreeBSD live CD, and it comes with the GNUstep developer tools including ProjectCenter. This video is a “Hello, World” walkthrough using ProjectCenter on LIVEstep. PC is much more influenced by the NeXT Project Builder than by Xcode, so it might look a little weird to younger eyes.

January 18, 2021

If I could string a thread through my childhood, the pins that hold the thread in place would be all the times I hit my head.

Me and my best friend (at the time) used to play a game called Dizzy Egg. It was a simple game. The object was to spin around as many times as we could and then try not to fall over. I usually fell over, and this usually meant hitting my head on the unforgiving concrete.

In the same playground, I ran—for no particular reason—head first into the white painted wall of one of the school buildings. Luckily, it stayed white.

I was part of a weekend football club. Football Fun. A better name for it might have been “Football Keeps Hitting Me In The Face.” I’m not sure what it was about that football or my face, but the two were inseparable. You couldn’t keep them apart.

I remember one final and dramatic incident. On running through a metal gate, the gate swung closed and tried to run through me. One minute we were running and chasing and laughing. The next I was on the floor, bleeding a lot and saying some words that weren’t suitable for the playground.

That one needed a trip to the hospital and I still have the scar.

January 14, 2021

Here’s what I’ve been working on (with others, of course) since February.

Back to Top