Last updated: October 16, 2021 04:22 AM (All times are UTC.)

October 08, 2021

Reading List 282 by Bruce Lawson (@brucel)

I only have 17 years of experience, but every point on this list accords with my experience. I’ve made my own attempt to catalogue things software developers should know (that are not writing code), but this is a succinct and great summary.

October 07, 2021

Grandma by Stuart Langridge (@sil)

A couple of weeks ago, my grandma died.

This was not wholly unexpected, and at the same time it was completely unexpected. I should probably explain that.

She was ninety, which is a fair age for anyone, but her mother (my great-grandmother) lived to be even older. What’s different is that nobody knew she was ninety, other than us. Most of her friends were in their mid-seventies. Now, you might be thinking, LOL, old, but this is like you in your mid-forties hanging out with someone who’s 30, or you in your late twenties hanging out with someone who’s 13, or you at eighteen hanging out with someone who’s still learning what letters are. Gaps get narrower as we get older, but the thing that most surprised me was that all her friends were themselves surprised at the age she was. She can’t have been that age, they said when we told them, and buried in there is a subtle compliment: she was like us, and we’re so much younger, and when we’re that much older we won’t be like her.

No. No, you won’t be like my grandma.

I don’t want to talk much about the last few weeks. We, my mum and me, we flew to Ireland in the middle of the night, we sorted out her house and her garden and her affairs and her funeral and her friends and her family, and we came home. All I want to say about it is that, and all I want to say about her is probably best said in the eulogy I wrote and spoke for her death, and I don’t want to say it again.

But (and this is where people in my family should tune out) I thought I’d talk about the website I made for her. Because of course I made a website. You know how some people throw themselves into work to dull the pain when something terrible happens to the people they love? I’m assuming that if you were a metalworker in 1950 and you wanted to handle your grief that a bunch of people got a bunch of metal stuff that you wouldn’t ordinarily have made. Well, I am no metalworker; I build the open web, and I perform, on conference stages or for public perception. So I made a website for my grandma; something that will maybe live on beyond her and maybe say what we thought about her.

Firstly I should say: it’s at kryogenix.org/nell because her name was Nell and I made it. But neither of those things are really true. Her name was Ellen, and what I did was write down what we all said and what we all did to say goodbye. I wanted to capture the words we said while they were still fresh in my memory, but more while how I felt was fresh in my memory. Because in time the cuts will become barely noticeable scars and I’ll be able to think of her not being here without stopping myself crying, and I don’t want to forget. I don’t want to lose it amongst memories of laughter and houses and lights. So I wrote down what we all said right now while I can still feel the hurt of it like fingernails, so maybe I won’t let it fade away.

I want to write some things about the web, but that’s not for this post. This post is to say: goodbye, Grandma.

Goodbye, Grandma. I tried to make a thing that would make people think of you when they looked at it. I wanted people to think of memories of you when they read it. So I made a thing of memories of you, and I spoke about memories of you, and maybe people who knew you will remember you and people who didn’t know you will learn about you from what we all said.

Goodbye, Grandma.

kryogenix.org/nell

The “return a command” trick

This is a nice trick, but we need a phrase for that thing where you implement extreme late binding of functions by invoking an active function that selects the function you want based on its name. I can imagine the pattern catching on.
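As a minimal sketch of how I read that pattern (the names here are mine, not from the linked post): one ordinary function whose whole job is to look up and return the command you ask for by name, so the binding of name to behaviour happens at the last possible moment.

```typescript
// Hypothetical sketch of the "return a command" trick: callers never import
// concrete functions; they ask a dispatcher for a command by name, so the
// name-to-function binding happens at call time (extreme late binding).
type Command = (...args: string[]) => string;

const registry = new Map<string, Command>([
  ["greet", (name) => `Hello, ${name}!`],
  ["shout", (text) => text.toUpperCase()],
]);

// The function you invoke to select and return the command you want.
function command(name: string): Command {
  const found = registry.get(name);
  if (!found) {
    throw new Error(`No command registered under "${name}"`);
  }
  return found;
}

// Usage: which function actually runs is decided only here, by its name.
console.log(command("greet")("Bruce"));        // "Hello, Bruce!"
console.log(command("shout")("late binding")); // "LATE BINDING"
```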

September 30, 2021

Set Safari free! by Bruce Lawson (@brucel)

As you may know, every browser on iOS is actually just a branded re-skin of WebKit, the engine that Safari uses, because Apple won’t allow other engines on iOS.

Fred from Scooby Doo with a masked figure and text 'Firefox on iPhone'. Fred removes the mask to reveal the villain headed 'Apple's Safari'. Then the same with Edge on iPhone and Chrome on iPhone.

Supporters of the Apple Browser Ban tend to give one of three reasons (listed here from most ridiculous to most credible):

The web shouldn’t be “app-like”, it’s for documents only

Whatever. (See A Brief History of Web Apps for more on why this is nonsense.)

Privacy and security are protected by not allowing non-Apple code on devices

This doesn’t really make sense when non-Apple apps are allowed on iOS, apps which can leak data so valuable that Amazon and eBay will pay you to use their apps rather than the web. Apple’s most recent zero-day vulnerability was exploited along with a flaw in WebKit, and so left all users exposed because users of other “browsers” are forced to use WebKit. Stuart Langridge has a great post going deeper into Browser choice on Apple’s iOS: privacy and security aspects.

Allowing other rendering engines leads to Chromium taking over the world

This one kind of makes sense. After all, Opera abandoned its Presto engine and Microsoft abandoned Trident, and both went to Chromium. Firefox risks sliding into irrelevance due to inept lack of leadership. If Apple were forced to allow Chrome onto iOS, then domination would be complete!

The interesting premise of this argument is that Apple intend to keep Safari as the sad, buggy app that they’ve allowed it to wither into, because it has no competition. I emphatically do not want Chromium to win. Quite the opposite: I want Apple to allow the WebKit team to raise its game so there is an *excellent* competitor to Chromium.

WebKit is available on Windows, Linux and more. Safari was once available on Windows, but Apple silently withdrew it. SVP of software Eddy Cue, who reports directly to Tim Cook, wrote in 2013

The reason we lost Safari on Windows is the same reason we are losing Safari on Mac. We didn’t innovate or enhance Safari….We had an amazing start and then stopped innovating… Look at Chrome. They put out releases at least every month while we basically do it once a year.

There is browser choice on macOS, and 63% of macOS users remain with Safari (24% use Chrome, 5.6% use Firefox). As everyone who works on browsers knows, a capable browser made by the operating system’s manufacturer and pre-installed greatly deters users from seeking and installing another. There is no reason to believe it would be different on iOS. (Internet Explorer on Windows isn’t a counter-example: users abandoned it because much better alternatives existed, long before Edge came along.)

But let’s set our aspirations higher. Imagine a fantastic Safari on iOS, Mac, Windows and Linux, giving Chrome a run for its money. If anyone can take on Google, Apple can. It has talented WebKit engineers, excellent Standards experts, a colossal marketing budget, and great brand recognition.

If Apple allowed Safari to actually compete, it would be better for web developers, businesses, consumers, and for the health of the web. Come on, Apple, set Safari free!

(You could also read my Briefing to the UK Competition and Markets Authority on Apple’s iOS browser monopoly and Progressive Web Apps.)

The biggest missing feature in the manifesto for agile software development and the principles behind it is anyone other than the makers and their customer. We get autonomous, self-organising delivery teams but without the sense of responsibility to a broader society one would expect from autonomous professional agents.

Therefore it’s no surprise to find developers working to turn their colleagues into a below-minimum-wage precariat; to rig elections at scale; or to implement family separation policies or travel bans on religious minorities. A principled agile software developer only needs to ask “how can I deliver this, early and continuously, to the customer?” and “how can we adjust our behaviour to become more effective at this?”: they do not need to ask “is this a good idea?” or “should this be delivered at all?”

Principle Zero ought to read something like this.

We build software that transforms the workplace, leisure, interpersonal interaction, and society at large: we do so in consultation with as broad a representation of those interests as possible and ensure that our software is beneficial to those whose lives are being transformed. Within that context, we follow these principles:

September 24, 2021

I know that there are bigger problems to discuss about Apple’s approach to business and partnerships at the mo, but their handling of security researchers seems particularly cynical and hypocritical. See, for example, this post about four reported iPhone 0days that went ignored and the nine other cases linked in that article.

Apple advertise themselves as the privacy company. By this, they really mean that their products are designed to share as much of your data with Apple as they are comfortable with, and that beyond that you should probably assume that nobody else is involved. But their security development lifecycle tells another story.

“Wait, did you just pivot from talking about privacy to security?” No! You can have security without privacy: this blog has that, at first glance. All of the posts and pages are public, anybody can read them, but I want to make sure that what you read is actually what I wrote (the integrity of the posts) and that nothing stops you from reading it when you want (the availability). On closer examination, I also care that there are things you don’t have access to: any of the account passwords, configuration settings, draft posts, etc. So in fact the blog has privacy requirements too, and those are handled in security by considering and protecting the confidentiality of those private assets. You can have security without privacy, but not privacy without security.

Something, I’m not sure what from the outside, is wrong with the security development lifecycle at Apple. As a privacy-focused company they should also be a security-focused company, but they evidently never had the same “trustworthy computing” moment that Microsoft did. I’m not going to do any kind of deep dive into CVE counts here, just provide the following high-level support for the case that Apple is, at best, doing no better than anybody else in the industry at this.

Meanwhile, they fail to acknowledge external contributors to their product security, do not pay out agreed bounties, and sue security researchers or ban them from their store. Apple say that the bounty program has doubled in 2019-2020 and continues to grow. You could say that maybe they aren’t doing any better, but they certainly aren’t doing any worse. Every new product announcement, senior managers at Apple up to their CEO tell everyone how great they are at privacy. Their intent is that people believe they are doing the best at this, when they are around the middle of the pack. This is disingenuous.

A bug bounty program is a security process of last resort. You didn’t design, implement, or fix the flaws out of your product before it got to customers and attackers: that happens, that’s fine, but these escapee threats that are realised as vulnerabilities should be a small fraction of the total possible problems, and the lower-severity ones at that. You also didn’t detect them yourself once the customers and attackers had access to the product: that also happens and is fine, but again, the vulnerabilities that escape your detection should be the lowest-severity ones. Once someone else is discovering your vulnerabilities, the correct thing to do is to say: thank you for bringing this to our attention before exploiting it yourself; here is compensation for the time and work you put into making our product better.

Apple is not doing this. As seen from the various linked stories above, they are leaving security researchers with a bitter taste and a questioning feeling over whether they would want to work with Apple again, but not doing the heavy lifting to ensure their SDLC catches the highest-severity problems on campus, before or after release. I don’t know what is at fault here, but I expect it’s systemic rather than individual leader/department/activity. The product security folks at Apple are good at their jobs, the software engineers are good at their jobs…and yet here we are.

I suspect a certain amount of large-company effect is at play. “As Tim told you, our products are best in class for privacy,” says anonymous and fictional somewhat high up marketing person, “and if you had any specific complaint I couldn’t hear it over all the high-volume stock cash register sound effects we play in the board room to represent our success in the marketplace.”

September 19, 2021

I was having a think about this short history of Objective-C, and it occurred to me that perhaps I had been thinking about ObjC wrong. Now, I realise that by thinking about ObjC at all I mark myself out as a bit of an oddball, but I do it a lot. I co-host the [objc retain]; stream with Steven Baker, discussing cross-platform free software Objective-C every week. Hell of a time to realise I’ve been doing it wrong.

My current thinking is that the idea of ObjC is not to write “apps” in ObjC, or even in successor languages (sorry, fans of successor languages). Summed up in that history are Brad Cox’s views which you can read in more length in his books. I’ve at least tangentially covered each book here: Object-Oriented Programming: an Evolutionary Approach and Superdistribution: Objects as Property on the Electronic Frontier. In these he talks about Object-Oriented Programming as the “software industrial revolution”, in which the each-one-is-bespoke way of writing software from artisanally-selected ones and lightly-sparkling zeroes is replaced with a catalogue of re-usable parts, called Software ICs (integrated circuits). As an integrator, I might take the “window” IC, the “button” IC, the “text field” IC, and a “data store” IC and make a board for entering expenses.

So far, so npm. The key bit is the next bit. As a computer owner, you might take that board and integrate it into your computer so that you can do your home finances, or so that you can submit your business expense claims, or so that your characters in The Sims can claim for their CPU time, or all three of those things. The key is that this isn’t some app developer, this is the person whose computer it is.

From that perspective, Objective-C is an intermediary tool, and not a particularly important or long-lasting one. Its job is to turn legacy code into objects so that it can be accessed by people using their computers by sticking software together using objects (hello NSFileManager). To the extent it has an ongoing job, that is to turn algorithms into objects, for the same reason (but the algorithms have been made out of not-objects, because All Hail the Perform Ant).

You can make your software on your computer by glueing objects together, whether they’re made of ObjC (a common and important case), Eiffel (an uncommon and important case), Smalltalk (ditto) or whatever. Objective-C is the shiny surface we’re missing over the tar pit. It is the gear system on the bicycle for the mind; the tool that frees computer users from the tyranny of the app vendor and the app store.

I apologise for taking this long to work that out.

September 16, 2021

I watched this week’s Apple Event for a while, but there was nothing that interested me; I have a mobile phone which works fine for me, I don’t need a watch and I can’t afford a new computer.

But here’s one Apple event speech I genuinely found really energising. In 2007, Steve Jobs made a bold announcement at Apple’s developer conference that I still find inspiring today:

Transcript

Now, what about developers? We have been trying to come up with a solution to expand the capabilities of iPhone by allowing developers to write great apps for it, and yet keep the iPhone reliable and secure. And we’ve come up with a very sweet solution. Let me tell you about it.

So, we’ve got an innovative new way to create applications for mobile devices, really innovative, and it’s all based on the fact that iPhone has the full Safari inside. The full Safari engine is inside of iPhone and it gives us tremendous capability, more than there’s ever been in a mobile device to this date, and so you can write amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone!

And these apps can integrate perfectly with iPhone services: they can make a call, they can send an email, they can look up a location on Google Maps. After you write them, you have instant distribution. You don’t have to worry about distribution: just put them on your internet server. And they’re really easy to update: just change the code on your own server, rather than having to go through this really complex update process. They’re secured with the same kind of security you’d use for transactions with Amazon, or a bank, and they run securely on the iPhone so they don’t compromise its reliability or security.

And guess what: there’s no SDK! You’ve got everything you need, if you know how to write apps using the most modern web standards, to write amazing apps for the iPhone today. You can go live on June 29.

On open Operating Systems, Progressive Web Apps live up to this promise; truly cross-platform code that can be responsive to any form factor, using a mature technology with great accessibility (assuming a competent developer), that is secure and sandboxed, that requires no gatekeepers, developer licenses or expensive IDEs. They’ll work on Android, Windows, ChromeOS and Mac.

But 14 years after Jobs had this bold vision for the open web, iOS hasn’t caught up. Apple has imposed a browser ban on iOS. Yes, there are “browsers” called Chrome, Edge, Firefox that can be downloaded from the App Store for iOS–but they only share branding and UI features with their fully-fledged counterparts on open Operating Systems. On iOS, they are all just differently-badged skins for the buggy, hamstrung version of WebKit that Apple ships and occasionally patches for security (often waiting long after WebKit has been fixed before pushing it to consumers).

Apple knows Safari is terrible. SVP of software Eddy Cue, who reports directly to Tim Cook, wrote in 2013

The reason we lost Safari on Windows is the same reason we are losing Safari on Mac. We didn’t innovate or enhance Safari….We had an amazing start and then stopped innovating… Look at Chrome. They put out releases at least every month while we basically do it once a year.

Forcing other iOS “browsers” to skin Safari’s engine rather than use their own more capable engines is a deliberate policy decision. Apple’s App Store guidelines state

Apps that browse the web must use the appropriate WebKit framework and WebKit Javascript.

Jobs’ 2007 speech felt like a turning point: a successful, future-facing company really betting on the open web. These days, Apple sells you hardware that they claim will “express your individuality” by choosing one of two brand new colours. But, for the web, choose any colour you want, as long as it’s webkit-black.

Some of us are trying to change this. Earlier this month I was part of a small group invited to brief the UK regulator, the Competition and Markets Authority, as part of its investigation into

Apple’s conduct in relation to the distribution of apps on iOS and iPadOS devices in the UK, in particular, the terms and conditions governing app developers’ access to Apple’s App Store.

You can watch the video of my presentation, and see Stuart Langridge’s slides.

September 09, 2021

Shop cop by Andy Wootton (@WooTube)

Words interest me. As I’ve been getting interested in green woodworking and the appropriate tools, I’ve become more aware of the US term “shop”. A few days ago, in a discussion of why America is lagging behind on lower level tech education, someone suggested schools needed to reintroduce ‘shop’, to get kids interested in designing and making things. I think young Brits would know this as ‘Design Technology: Resistant Materials’. I did ‘Woodwork’ and ‘Metalwork’.

America takes its cars ‘to the shop’ because their ‘garages’ are being used as ‘car parks’. Obviously, it means ‘workshop’.  A shop where you buy work, a service instead of a product, as you walk down main street. Not at all like a job-shop in Britain where we try to sell spare labour to employers, or shop for jobs, if you prefer.

I came across a meaning from the world of share trading. A stock may be ‘shopped’ – actively sold. I think we’re finally getting to the meaning. A shop is a place (real or virtual) where goods, services and labour are traded, in either direction. A market stall. I almost wrote “store” but stores are where you keep things before selling, or when they haven’t sold.

I was slightly side-tracked by coppers, copping criminals and taking them to the cop shop, trading their liberty for their crimes. My government assures me that if I haven’t committed a crime I won’t cop it. I wish I believed them. That’s the cost of lying in a market that depends on trust and information. Their share price is down.

September 06, 2021

On 4 March this year, the UK Competition and Markets Authority announced that it is

investigating Apple’s conduct in relation to the distribution of apps on iOS and iPadOS devices in the UK, in particular, the terms and conditions governing app developers’ access to Apple’s App Store.

Submissions from the public were invited, so I replied and was invited, with “Clark Kent” (a UK iOS Web-Apps developer who wishes to be anonymous) and Stuart Langridge to brief the CMA in more detail. We were asked to attend for 2 hours, so spoke for one hour, and then took questions.

The CMA are happy for our presentations to be published (they’re eager to be seen to be engaging with developers) but asked that the questions be kept private. Suffice it to say that the questioning went over the allocated time, and showed a level of technical understanding of the issues that surpasses many engineers. (The lawyers and economists knew, for example, that iOS browsers are all compelled to use WebKit under the hood, so Chrome, Firefox and Edge on iOS share only a name and a UI with their fully-fledged real versions on all other Operating Systems.)

Clark’s presentation

Clark talked of how Steve Jobs’ 2007 announcement of apps for iPhone inspired him to use his webdev skills to make iOS web-apps.

you can write amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone. And these apps can integrate perfectly with iPhone services … And guess what? There’s no SDK that you need! You’ve got everything you need if you know how to write apps using the most modern web standards to write amazing apps for the iPhone today.

Unfortunately, Safari’s bugs made development much more expensive by breaking IndexedDB, breaking scrolling, etc. Also, Apple refuses to support Web Bluetooth, so his customers had to purchase expensive iOS-compatible printers for printing receipts, at 11 times the cost of Bluetooth printers. iOS apps can integrate with Bluetooth, however.

He can’t ask his customers to jettison their iPads and buy Android tablets. Neither can he ask them to use an alternative browser, because there isn’t any meaningful choice on iOS. Clark simply wants to be able to make “amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone”, as Jobs promised. He summed up:

All these business costs inevitably get passed onto consumers. Or worse applications simply never get built, due to the increased cost.

From the perspective of software developers there is no browser competition on iOS.

Apple has banned all the other browser engines, and Safari’s bugs and missing features mean that web-apps struggle to compete with the App Store. If third party browsers were allowed, with full access to all the APIs that Apple gives Safari, this would provide Apple the incentive to keep pace with the other browsers and give consumers a means of voting with their feet by moving to a competing browser if they fail to do so.

This would then lead to a universal open app development and distribution platform which would substantially reduce development and maintenance costs for a wide range of business and consumer applications. The open web offers an alternative future where the control of corporate intermediaries, along with their fees, is replaced with consumer choice and the freedom to easily shift from one platform to another. Instead of having to write multiple separate applications for every device, it allows developers to build their application once and have it work on all consumer devices, be it desktop, laptop, tablet or phone.

Write once, deploy everywhere, no gatekeepers, no fees: just as Steve Jobs originally promised.

Stuart’s presentation

Stuart examined Apple’s assertion that a lack of real browser diversity is actually better for consumers’ privacy and security.

My presentation

Here’s my captioned video. Below is the marked-up transcript with links. Note: this submission is my personal opinion and does not represent the opinion of any of my past or present clients or employers.

Transcript and Links

Hi, I’m Bruce Lawson, and I’m here to talk about Progressive Web Apps and iOS. I’ve been mucking around on the web since about 2002, mostly as a consultant working on expanding people’s websites to be truly worldwide, so they’re performant and work with constrained devices and networks in emerging markets. I’m also involved with making websites that are accessible for people with disabilities. And I was on the committee with the British Standards Institution writing BS8878, the British standard for commissioning accessible websites, which last year got rolled into an ISO standard. And before that, I was deputy for the chief technical officer at Opera Software, which makes a popular web browser used by about 250 million people worldwide, predominantly in emerging markets.

Most of my customers are in the UK, USA, or Israel, and they’re largely focused on UK and US markets. In the month of June this year, StatCounter showed that 30% of all US web traffic was on iOS, 28% Windows, 22% Android. And to drill that down to just mobile browsers, 57% were iOS and 42% were Android.

In the UK, 30% of people use Windows to browse the web, 27% iOS, 25% Android. And that jells pretty well with stats from the GDS, the Government Digital Service. Their head of front-end, Matt Hobbs, tweets some stats every month. He said that in June, iOS accounted for 44% of traffic, Android 28%. (Although he does note that because you have to opt in to be tracked, it tends to over-represent the mobile browsers.) But we can see that in the UK, 50% of traffic is on iOS and 49% is Android.

So, we obviously have to be concerned with iOS as well as Android, Windows, et cetera, and all the different browsers that are available on every non-iOS operating system. And so, I always recommend to my customers that they use the web. After all, that’s what it’s for.

The principle of universality allows the web to work no matter what hardware, software, network connection or language you use, and to handle information of all types and qualities. This principle guides Web technology design

And that was said by Sir Tim Berners-Lee, and his parents were both from Birmingham, as am I. So he’s practically a relative. And with my work in web standards, those principles still hold true for the web standards we make today. So, I wanted to use the web and I wanted to use what’s called a Progressive Web App, which is basically a website plus. It’s a website, made of web pages, each of which has a URL. It’s built with web technology, but in modern, conforming browsers it can look and feel more app-like.

In fact, the UK government itself recommends these. Our government writes,

advances in browser technology have made it possible to do things on the mobile web that you could previously only do with native apps. Services that take advantage of these modern browser enhancements are often called mobile web apps or Progressive Web Apps. The user experience of PWAs on a handset is almost identical to a native app, but unlike native apps, PWAs have a single code base that works on the web, like any normal website. A benefit of PWAs over native apps for the government is that there is no need to maintain lots of different operating system versions of the same app. This makes them relatively inexpensive to develop compared to native apps.

And just like government, business likes to reduce costs, and therefore can reduce costs to the end user. Our government goes on to say,

users can choose to install PWAs on any device that supports the technology. If a user chooses to install a PWA, they typically take up less space than the equivalent native app.

And this is very important in emerging markets where lower priced phones have lower specifications and less storage space. The government continues,

PWAs cost less than native apps since they need to be only developed once for different platforms. They are also forward compatible with future upgrades to operating systems

And then it lists browsers that do support PWAs, and it lists Apple Safari on iOS, but this is only partially true. Apple themselves make this claim in their February 2021 submission to the Australian investigation into monopolies. Apple said of PWAs,

Web browsers are used not only as a distribution portal, but also as platforms themselves, hosting progressive web applications that eliminate the need to download a developer’s app through the app store or other means at all

Therefore, Apple is suggesting that PWAs are a viable alternative to the app store. Apple continues,

PWAs are increasingly available for and through mobile based browsers and devices, including on iOS [my emphasis]. PWAs are apps that are built using common web technology like HTML 5, but have the look, feel and functionality of a native app

But, as we will see, this isn’t true. PWAs do not have the same functionality as a native app.

Let’s recap about what a PWA actually is. It’s a bit of an umbrella term, but fundamentally it’s a website, but it can install to the home screen of your mobile device, just like a native app. And it can have its own icon defined by the web developer, launch full screen, portrait or landscape, depending upon what the developer chooses, but it lives on the web.

And this is really important. It’s important for several reasons. Firstly, because it lives on the web, you’re not downloading everything in one go. A native app is like downloading a whole website all at once. With a PWA, you download just the shell, and as you navigate around it, it will get any content it doesn’t already have stored from the web server. Another advantage is that you get instant publication. The moment you press the button and it goes on your web server, the next person who opens your app on their device will see the new version. Great for security updates and also great for time-sensitive material.

Another great thing, of course, is that because there’s no app store, there’s no gatekeeper. Smaller shops don’t have to pay the 15 to 30% tax to Google and Apple. And also, for industries that are not compatible with the app stores, such as adult entertainment, gambling, et cetera, it’s a mechanism for offering an app, because they can’t use the app store. And PWAs can work offline.
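To make that machinery a little more concrete, here’s a minimal sketch of the two pieces a browser looks for before treating a site as an installable PWA: a page that registers a service worker, and the service worker itself caching an “app shell” so the site keeps working offline. The file names and cache name are made up for illustration; this isn’t taken from any of the sites discussed here, and a real PWA also needs a web app manifest.

```typescript
// Hypothetical page script: register the service worker if the browser supports it.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .catch((err) => console.error("Service worker registration failed:", err));
}

// --- sw.js (illustrative app-shell cache; the file list is invented) ---
const SHELL_CACHE = "shell-v1";
const SHELL_FILES = ["/", "/styles.css", "/app.js", "/icon-192.png"];

self.addEventListener("install", (event: any) => {
  // Download the shell once; content is still fetched from the web server as you navigate.
  event.waitUntil(caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_FILES)));
});

self.addEventListener("fetch", (event: any) => {
  // Serve from the cache when we can; fall back to the network otherwise.
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```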

Let’s briefly look at one. I’m here speaking in a personal capacity. I don’t represent any of my clients or employers, so I can’t show you one of their apps, but I took 15 minutes to convert my WordPress blog into a PWA.

Okay, here, I’m using Firefox on Android. Android, unlike iOS, allows not only differently-badged browsers, but the full browser itself, so Firefox on Android is using Firefox’s own Gecko engine. Here, Firefox and Android are collaborating and they can tell when a user goes to my website that it is a PWA, and it offers the opportunity to the visitor to add it to the home screen. You can see it says “you can easily add this website to your device’s home screen to have instant access and browse faster with an app-like experience”. As a developer, I didn’t make this and the user doesn’t have to know anything because the operating system and the browser just show this. If the user chooses, they can add to home screen, they can click on the icon, and it will add automatically or else they can drag it themselves to where they want it.

And here, as you can see, it’s now on my Android home screen. There’s a little Firefox icon at the bottom of my logo to tell the user it’s a PWA. Chrome doesn’t do this, but it does also detect that it’s a PWA and offer to add it to the home screen. And as you can see from the screenshot on the right, when they tickle that icon into life, up pops my website full screen, without an address bar, looking like a native app. If, however, a user goes to my website in a browser that doesn’t support PWAs, such as UC Web here, they don’t get offered the add to homescreen thing, they just see my website and navigate it as normal. So people with capable browsers get an app-like experience, everybody else just gets the website and nobody loses out.

On iOS it’s subtly, but importantly, different.

So on iOS, if you go to my website, you’re not told anything about this being installable. You have to know to click that icon in the bottom bar that I’ve circled in red. And once you’ve done that, then you get the screen on the right. And you just have to know as a user to scroll down, whereupon you’ll see add to home screen there with no instructions or explanation of what’s going to happen. Anyway, once you click that, then you confirm, and finally it’s on the homescreen. And when you press it, it launches full screen without an address bar. But even though it’s much harder to discover this, it still can’t do everything that a native app could do.

This is Echo Pharmacy. It’s a service that I’ve been using during the pandemic. It’s nothing to do with my employers or clients. I’m just a customer. I use it to get my repeat prescriptions, and I used it and it worked really well. And on the confirmation screen, after I’d ordered some more medication, it said, “Hey, why not use our app?” And I thought, well, why would I need an app, because this website has worked perfectly well for me. So I asked on Twitter, I said, “Why does it need me to download the app?” The website allows me to place and track my meds, why clog up my device, because it’s a download, right? Is it for push notifications, because on iOS websites and PWAs cannot send push notifications. And for a pharmacy like this, it’s important for them to say, “It’s 8:00 PM, have you taken this tablet?” Or, “We notice in four days time, you’re going to run out of this medication. Please come back and order.”

And I got a reply from the CTO of Echo Pharmacy saying,

Hey Bruce, you’re right about push notifications, which for some people are preferable to email or SMS reminders. Beyond that our app and website have feature parity, so it’s really just about what you prefer.

In other words, these people at Echo Pharmacy, not only have they got a really great website, but they also have to build an app for iOS just because they want to send push notifications. And, perhaps ironically, given Apple’s insistence that they do all of this for security and privacy, if I did choose to install this app, I would also be giving it permission to access my health and fitness data, my contact info, my identifiers, sensitive info, financial info, user content, usage data and diagnostics. Whereas, if I had push notifications and I were using a PWA, I’d be leaking none of this data.
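To illustrate the gap being described, here’s a hypothetical snippet (not Echo Pharmacy’s actual code) of the feature detection a web app has to do before offering push reminders. On iOS every “browser” fails the check, because they are all WebKit and WebKit there doesn’t expose the Push API, so the only fallback is email or SMS.

```typescript
// Hypothetical sketch: enable medication reminders via web push where supported.
async function enableMedicationReminders(): Promise<void> {
  if (!("serviceWorker" in navigator) || !("PushManager" in window)) {
    // No web push here (e.g. any browser on iOS): fall back to email or SMS reminders.
    console.warn("Web push not supported; offering email/SMS reminders instead.");
    return;
  }

  const permission = await Notification.requestPermission();
  if (permission !== "granted") {
    return; // The user said no; respect that.
  }

  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: "<VAPID public key goes here>", // placeholder, not a real key
  });

  // Send the subscription to our own server so it can push "take your tablet" nudges.
  await fetch("/api/push-subscriptions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(subscription),
  });
}
```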

So, we can see that despite Apple’s claims, I cannot recommend a PWA as being an equal experience on iOS, simply because of push notifications. But it’s not just hurting current business, it’s also holding back future business. A lot of my clients are looking to expand into new markets. The UN has said by the end of this century, 50% of the population of the world will live in these 10 countries. And as you can see, only one of them is a traditional market for a UK business, the US. And it’s obvious why people want to do this. Whereas the population of the “west” is not increasing, it’s increasing a lot in new markets.

“The Southeast Asian internet economy is expected to reach $200 billion by 2025”, for example, but all countries are not equal in the technological stakes. Here, for example, is the percentage of monthly income that it takes to buy one gigabyte of mobile data. In the UK, for a gigabyte (not actually that much data), it costs me 0.16% of my monthly income. In India, it’s 0.05%. In the States, 0.1%. In Rwanda, where one of my customers is doing great business, it’s 0.5% of monthly income. In Yemen, where I was born, it’s very nearly 16% of monthly income. So, big downloads cost a lot of money, particularly for something like a medication tracking system, which I might use only once every quarter. It’s a heck of a lot to download at these prices.

VisualCapitalist.com say worldwide, the highest average costs for one gigabyte of data is 30,000% more than the cheapest average price. And it’s not just about data costs either. It’s about data speed. Here, for example, in the UK, the average or the median UK speed is 28.51 megabits per second, in the States, it’s 55. In Hong Kong, it’s 112. In Rwanda, it’s 0.81. In Cambodia, it’s 1.29. In India, it’s 4. In Indonesia, it’s 1.88.

So, a big download not only costs potential customers more, but it takes a lot longer. Facebook recognized this when they released their Facebook Lite app. Facebook said

downloading a typical app with 20 megabyte APK can take more than 30 minutes on a 2G network, and the download is likely to fail before completion due to the flaky nature of the network

Twitter released something called Twitter Lite, which is now just everybody’s Twitter. Twitter Lite is a PWA, and Twitter said

Twitter Lite is network resilient. To reach every person on the planet, we need to reach people on slow and unreliable networks. Twitter Lite is interactive in under five seconds over 3G on most devices. Most of the world is using 2G or 3G networks. A fast initial experience is essential

So, if we could offer PWAs, we would have a competitive advantage over other people, with much faster and much cheaper installation. Penny McLachlan, who works as a product manager at Google, said,

A typical Android app size [a native app, in other words] is about 11 megabytes. Web APKs, TWAs [this is technical speak for a Progressive Web App], are usually in the range of 700 to 900 kilobytes

So, only about 10% of the size of an Android app. So, we can see that PWAs are a competitive advantage for people trying to do business in the wider world; post-Brexit, global Britain, et cetera.

It’s no coincidence that the majority of the early PWAs that we saw coming out in 2016, 2017 came out of Asia and Africa, and that’s because when people who make websites and web apps use the same devices and the same networks as their consumers, they absolutely understand why PWAs are great.

But, we can’t make a PWA because we have to care about Safari, which is more than 50% of our traffic in our home market. So, what can I advise my clients to do? Well, a lot of them have settled on something called React Native. They can’t afford to have one developer team for Android, one developer team for iOS, and one developer team for web.

If you’re a massive organization such as Google or Amazon, of course you can afford that, but developers ain’t cheap. And three teams is prohibitively expensive for a smaller organization. React Native, which is open source (but stewarded by Facebook), promises that you write once and it works across platforms, where you’re writing in a language called React, and it compiles down to native applications. So you write in React, you press a button, and it squirts out a native Android app and a native iOS app. And it’s pretty good, I guess.

There’s pros and cons, of course. The pros: we write just once. Great. The cons: well, it still emits a native app, so we still have to submit to app stores, and the timing of that is at the whim of whoever’s moderating the app store, et cetera. It’s still a big download because it’s a native app.

It’s a more complex build process. There’s not a great deal of build process for the web. React Native requires lots of moving parts, internally. It’s harder for us to test the functionality because although we write it once, we have to test it on iOS and we have to test it on Android, and Android testing isn’t as simple as iOS because there’s a massive difference between a top-of-the-range Google Pixel device, or the expensive Samsung devices, and the kind of no-name cheap Android devices you can get in street markets in Bangkok and Jakarta and Mumbai.

And a problem for me, and for one of my clients, is accessibility. Accessibility is making sure that your app or your site can be used by people with disabilities. Apple and Google have done great jobs making iOS and Android accessible, but React Native has some quirks under the hood, which make it much trickier.

One of the reasons for this is that React Native is quite a new technology and the web is a mature technology, and accessibility on the web is pretty much a solved problem. It isn’t solved if developers don’t do the right thing, but Chrome is 98.5% compliant with the W3C guidelines for accessibility. Microsoft Edge is a 100%. Firefox is 94%. Safari is 97%.

But because we’re writing in React and it’s doing magic tricks under the hood, there are holes in its accessibility. Facebook themselves say,

we found that React Native APIs provide strong support for accessibility. However, we also found many core components do not yet fully utilize platform accessibility APIs. Support is missing for some platform specific features.

They say it’s true that support is missing. For example, suppose you have an external keyboard plugged into an Android device, perhaps because you have Parkinson’s, or (like me) you have multiple sclerosis and you prefer a real keyboard to the tiny little software keyboards. You cannot use that keyboard to focus in and enter any information in any input field that’s made with React Native [on Android]. This, of course, is a huge problem.

Facebook has done some effort into writing up all of their accessibility problems. Here, for example, we can see this is a GitHub board with all the issues and I scroll, scroll, scroll, scroll, scroll through them. There is apparently a fix for the inability to focus into any input fields with Android and a keyboard, but we are at the mercy of Facebook’s release schedules here.

So, what could I advise my clients to do? I tend to advise them, go PWA and let’s hope that the CMA make Apple do the right thing. What I would really like to do is say to them, “Okay, just put this notice on your web app. Say: ‘we’d like to send you push notifications, if that’s okay by you. Please download Microsoft Edge, Mozilla Firefox, or Google Chrome on iOS’.”

However, if they do download Edge for iOS, Firefox for iOS, or Chrome for iOS, that’s no help at all because Apple prohibits those browsers using their own more capable engines and they have to use the Safari engine under the hood. So, consumers get no choice at all.

September 05, 2021

Last week I was part of a meeting with the UK’s Competition and Markets Authority, the regulator, to talk about Apple devices and the browser choice (or lack of it) on them. They’re doing a big study into Apple’s conduct in relation to the distribution of apps on iOS and iPadOS devices in the UK, in particular, the terms and conditions governing app developers’ access to Apple’s App Store, and part of that involves looking at browsers on iOS, and part of that involves talking to people who work on the web. So myself and Bruce Lawson and another UK developer of iOS and web apps put together some thoughts and had a useful long meeting with the CMA on the topic.

They asked that we keep confidential the exact details of what was discussed and asked, which I think is reasonable, but I did put together a slide deck to summarise my thoughts which I presented to them, and you can certainly see that. It’s at kryogenix.org/code/cma-apple and shows everything that I presented to the CMA along with my detailed notes on what it all means.

A slide from the presentation, showing a graph of how far behind Safari is and indicating that all other browsers on iOS are equally far behind, because they're all also Safari

Bruce had a similar slide deck, and you can read his slides on iOS’s browser monopoly and progressive web apps. Bruce has also summarised our other colleague’s presentation, which is what we led off with. The discussion that we then went into was really interesting; they asked some very sensible questions, and showed every sign of properly understanding the problem already and wanting to understand it better. This was good: honestly, I was a bit worried that we might be trying to explain the difference between a browser and a rendering engine to a bunch of retired colonel types who find technology to be baffling and perhaps a little unmanly, and this was emphatically not the case; I found the committee engaging and knowledgeable, and this is encouraging.

In the last few weeks we’ve seen quite a few different governments and regulatory authorities begin to take a stand against tech companies generally and Apple’s control over your devices more specifically. These are baby steps — video and music apps are now permitted to add a link to their own website, saints preserve us, after the Japan Fair Trade Commission’s investigation; developers are now allowed to send emails to their own users which mention payments, which is being hailed as “flexibility” although it doesn’t allow app devs to tell their users about other payment options in the app itself, and there are still court cases and regulatory investigations going on all around the world. Still, the tide may be changing here.

What I would like is that I can give users the best experience on the web, on the best mobile hardware. That best mobile hardware is Apple’s, but at the moment if I want to choose Apple hardware I have to choose a sub-par web experience. Nobody can fix this other than Apple, and there are a bunch of approaches that they could take — they could make Safari be a best-in-class experience for the web, or they could allow other people to collaborate on making the browser best-in-class, or they could stop blocking other browsers from their hardware. People have lots of opinions about which of these, or what else, could and should be done about this; I think pretty much everyone thinks that something should be done about it, though. Even if your goal is to slow the web down because you think that it shouldn’t compete with native apps, there’s no real reason why flexbox and grid and transforms should be worse in Safari, right? Anyway, go and read the talk for more detail on all that. And I’m interested in what you think. Do please hit me up on Twitter about this, or anything else; what do you think should be done, and how?

September 03, 2021

Reading List 281 by Bruce Lawson (@brucel)

September 02, 2021

There’s been a bit of a thing about software user experience going off the rails lately. Some people don’t like cross-platform software, and think that it isn’t as consistent, as well-integrated, or as empathetic as native software. Some people don’t like native software, thinking that changes in the design of the browser (Apple), the start menu (Microsoft), or everything (GNOME) herald the end of days.

So what’s going on? Why did those people make that thing that you didn’t like? Here are some possibilities.

My cheese was moved

Plenty of people have spent plenty of time using plenty of computers. Some short-sighted individual promised “a computer on every desktop”, made it happen, and this made a lot of people rather angry.

All of these people have learned a way of using these computers that works for them. Not necessarily the one that you or anybody else expects, but one that’s basically good enough. This is called satisficing: finding a good enough way to achieve your goal.

Now removing this satisficing path, or moving it a few pixels over to the left, might make something that’s supposedly better than what was there before, but is actually worse because the learned behaviour of the people trying to use the thing no longer achieves what they want.
It may even be that the original thing is really bad. But because we know how to use it, we don’t want it to change.

Consider the File menu. In About Face 3: The Essentials of Interaction Design, written in 2007, Alan Cooper described all of the problems with the File menu and its operations: New, Open, Save, Save As…. Those operations are implementation focused. They tell you what the computer will do, which is something the computer should take care of.

He described a different model, based on what people think about how their documents work. Anything you type in gets saved (that’s true of the computer I’m typing this into, which works a lot like a Canon Cat). You can rename it if you want to give it a different name, and you can duplicate it if you want to give it a different name while keeping the version at the original name.
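To make the shape of that model concrete, here’s a small hypothetical sketch (the names are mine, not Cooper’s): the only operations exposed are the ones people actually think in, and saving isn’t one of them because every edit persists immediately.

```typescript
// Hypothetical document store in Cooper's style: no Open/Save/Save As.
interface DocumentStore {
  write(name: string, content: string): void;       // every edit lands here, persisted immediately
  rename(oldName: string, newName: string): void;   // "give it a different name"
  duplicate(name: string, copyName: string): void;  // keep the original under its old name too
}

// A toy in-memory implementation, just to show how the operations fit together.
class InMemoryDocuments implements DocumentStore {
  private docs = new Map<string, string>();

  write(name: string, content: string): void {
    this.docs.set(name, content); // no separate "save" step
  }
  rename(oldName: string, newName: string): void {
    const content = this.docs.get(oldName) ?? "";
    this.docs.delete(oldName);
    this.docs.set(newName, content);
  }
  duplicate(name: string, copyName: string): void {
    this.docs.set(copyName, this.docs.get(name) ?? "");
  }
}
```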

This should be better, because it makes the computer expose operations that people want to do, not operations that the computer needs to do. It’s like having a helicopter with an “up” control instead of cyclic and collective controls.

Only, replacing the Open/Save/Save As… stuff with the “better” stuff is like removing the cyclic and collective controls and giving a trained helicopter pilot with years of experience the “up” button. It doesn’t work the way they expect, they have to think about it, which they didn’t have to do with the cyclic/collective controls (any more), therefore it’s worse (for them).

Users are more experienced and adaptable now

But let’s look at this a different way. More people have used more computers now than at any earlier point in history, because that’s literally how history works. And while they might not like having their cheese moved, they’re probably OK with learning how a different piece of cheese works because they’ve been doing that over and over each time they visit a new website, play a new game, or use a new app.

Maybe “platform consistency” and “conform with the human interface/platform style guidelines” was a thing that made sense in 1984, when nobody who bought a computer with a GUI had ever used one before and would have to learn how literally everything worked. But now people are more sophisticated in their use of computers, regularly flit between desktop applications, mobile apps, and websites, across different platforms, and so are more flexible and adaptable in using different software with different interactions than they were in the 1980s when you first read the Amiga User Interface Style Guide.

We asked users; they don’t care

At first glance, this explanation seems related to the previous one. We’re doing the agile thing, and talking to our customers, and they’ve never mentioned that the UI framework or the slightly inconsistent controls are an issue.

But it’s actually quite different. The reason users don’t mention that there’s extra cognitive load is that these kinds of mental operations are tacit knowledge. If you’re asked about “how can we improve your experience filing taxes”, you’ll start thinking of tax-related questions before you think “I couldn’t press Ctrl-A to get to the beginning of that text field”. I mean, unless you’re a developer who goes out of their way to look for that sort of inconsistency in software.

The trick here is to stop asking, and start watching. Users may well care, even if they don’t vocalise that caring. They may well suffer, even if they don’t realise it hard enough to care.

We didn’t ask users

Yes, that happens. I’ve probably gone into enough depth on why it happens in various places, but here’s the summary: the company has a customer proxy who doesn’t proxy customers.

August 24, 2021

Respectable persons of this parish of Internet have been, shall we say, critical of Scrum and its ability to help makers (particularly software developers) to make things (particularly software). Ron Jeffries and GeePaw Hill have both deployed the bullshit word.

My own view, which I have described before, is that Scrum is a baseline product delivery process and a process improvement framework. Unfortunately, not many of us are trained or experienced in process improvement, not many of us know what we should be measuring to get a better process, so the process never improves.

At best, then, many Scrum organisations stick with the baseline process. That is still scadloads better than what many software organisations were trying before they had heard of Agile and Scrum, or after they had heard of it and before their CIO’s in-flight magazine made it acceptable. As a quick recap for those who were recruited into the post-Agile software industry, we used to spend two months working out all the tasks that would go into our six month project. Then we’d spend 15 months trying to implement them, then we’d all point at everybody else to explain why it hadn’t worked. Then, if there was any money left, we’d finally remember to ship some software to the customer.

Scrum has improved on that, by distributing both the shipping and the blaming over the whole life span of the project. This was sort of a necessary condition for software development to survive the dot-com crash, when “I have an idea” no longer instantly led to office space in SF, a foosball table, and thirty Herman Miller chairs. You have to deliver something of value early because you haven’t got enough money any more to put it off.

So when these respectable Internet parishioners say that Scrum is bad, this is how bad it has to be to reach that mark. Both of the authors I cite have been drinking from this trough way longer than I have, so have much more experience of the before times.

In this article I wanted to look at some of the things I, and software engineering peers around me, have done to make Scrum this bad. Because, as Ron says, Scrum is systemically bad, and we are part of the system.

Focus on Velocity

One thing it’s possible to measure is how much stuff gets shovelled in per iteration. This can be used to guide a couple of process-improvement ideas. First up is whether our ability to foresee problems arising in implementation is improving: this is “do estimates match actuals” but acknowledging that they never do, and the reason they never do is that we’re too optimistic all the time.

Second is whether the amount of stuff you’re shovelling is sustainable. This has to come second, because you have to have a stable idea of how much stuff there is in “some stuff” before you can determine whether it’s the same stuff you shovelled last time. If you occasionally shovel tons of stuff, then everybody goes off sick and you shovel no stuff, that’s not sustainable pace. If you occasionally shovel tons of stuff, then have to fix a load of production bugs and outages, and do a load of refactoring before you can shovel any more stuff, that’s not a sustainable pace, even if the amount of work in the shovelling phase and the fixing phase is equivalent.

This all makes working out what you can do, and what you can tell other people about what you can do, much easier. If you’re good at working out where the problems lie, and you’re good at knowing how many problems you can encounter and resolve per time, then it’s easier to make plans. Yes, we value responding to change over following a plan, but we also value planning our response to change, and we value communicating the impact of that response.

Even all of that relies on believing that a lot of things that could change, won’t change. Prediction and forecasting aren’t so chaotic that nobody should ever attempt them, but they certainly are sensitive enough to “all things being equal” that it’s important to bear in mind what all the things are and whether they have indeed remained equal.

And so it’s a terrible mistake to assume that this sprint’s velocity should be the same as, or (worse) greater than, the last one. You’re taking a descriptive statistic that should be handled with lots of caveats and using it as a goal. You end up doing the wrong thing.

Let’s say you decide you want 100 bushels of software out of the next two-week sprint, because you shovelled 95 bushels last time. The easiest way to achieve that is to take things you claim add up to 100 bushels of software, quietly do less work on them so that they only contain, say, 78 bushels, and get those 78 bushels done in the time.

Which bits do you cut off? The buttons that the customer presses? No, they’ll notice that. The actions that happen when the customer presses the buttons? They’ll probably notice that, too. How about all the refinement and improvement that will make it possible to add another 100 bushels next time round? Nobody needs that to shovel this 100 bushels in!

But now, next time, shovelling software is harder, and you need to get 110 bushels in to “build on the momentum”. So more corners get cut, and more evenings get worked. And now it’s even harder to get the 120 bushels in that are needed the next time. Actual rate of software is going down, claimed rate of software is going up: soon everything will explode.

Separating “technical” and “non-technical”

Sometimes also “engineering” and “the business”. Particularly in a software company, this is weird, because “the business” is software engineering, but it’s a problem in other lines of business enabled by software too.

Often, domain experts in companies have quite a lot of technical knowledge and experience. In one fintech where I used to work, the “non-technical” people had a wealth (pun intended) of technical know-how when it came to financial planning and advice. That knowledge is as important to the purpose of making financial technology software as is software knowledge. Well, at least as important.

So why do so many teams delineate “engineering” and “the business”, or “techies” and “non-technical” people? In other words, why do software teams decide that only the software typists get a say in what software gets made using what practices? Why do people divide the world into pigs and chickens (though, in fairness to the Scrum folks, they abandoned that metaphor)?

I think the answer may be defensiveness. We’ve taken our ability to shovel 50 bushels of software and our commitment (it used to be an estimate, but now it’s a commitment) to shovel 120 bushels, and it’s evident that we can’t realistically do that. Why not? Oh, it must be those pesky product owners, who keep bringing demands from the VP of marketing when all they’re going to do is say “build more”, “shovel more software”, and they aren’t actually doing any of it themselves. If the customer rep didn’t keep repping the customer, we’d have time to actually do the software properly, and that would fix everything.

Note that while the Scrum guide no longer mentions chickens and pigs, it “still makes a distinction between members of the Scrum team and those individuals who are part of the process, but not responsible for delivering products”. This is an important distinction in places, and irrelevant or harmful in others. It’s at its most harmful when it’s drawn too narrowly, when people who are part of the process are excluded from the delivering-products cabal. You still need to hear from them and use their expertise, even if you pretend that you don’t.

The related problem I have seen, and been part of, is the software-expertise people not even gaining a passing knowledge of the problem domain. I’ve even seen it on a developer tools team, where the engineers building the tools had no cause to use, or even particularly understand, the tool in their day-to-day work. All of those good technical practices, like automated acceptance tests, ubiquitous language, domain-driven design: they only mean shit if they’re helping to make the software compatible with the things people want the software for.

And that means a little bit of give and take on the old job boundaries. Let the rest of the company in on some info about how the software development is going, and learn a little about what the other folks you work with do. Be ready to adopt a more nuanced position on a new feature request than “Jen from sales with another crazy demand”.

Undriven process optimisation

As I said up top, the biggest problem Scrum often encounters in the wild is that it’s a process improvement framework run by people who don’t have any expertise in process improvement, but have read a book on Scrum (if that). This is where the agile consultants / retrospective facilitators usually come in: they do know something about process improvement, and even if they know nothing about your process, it’s probably failing in similar enough ways to similar processes on similar teams that they can quickly identify the patterns and dysfunctions that apply in your context.

Without that guidance, or without that expertise on the team, retrospectives are the regular talking shop in which everybody knows that something is wrong, nobody knows what, and anybody who has a pet thing to try can propose that thing because there are no cogent arguments against giving it a go (nor are there any for it, except that we’ve got to try something!).

Thus we get resume-driven development: I bet that last sprint was bad because we aren’t functionally reactive enough, so we should spend the next sprint rewriting to this library I read about on Medium. Or arbitrary process changes: this sprint we should add a column to the board, because we removed a column last time and look what happened. Or process gaming: I know we didn’t actually finish anything this month, but that’s because we didn’t deploy, so if we call something “done” when it’s been typed into the IDE, we’ll achieve more next month. Or more pigs and chickens: the problem we had was that sales kept coming up with things customers needed, so we should stop the sales people talking to the customers, to us, or to both.

Work out what it is you’re trying to do (not shovelling bushels of software, but the thing you’re trying to do for which software is potentially a solution) and measure that. Do you need to do more of that? Or less of it? Or the same amount for different people? Or for less money? What, about the way you’re shovelling software, could you change that would make that happen?

We’re back at connecting the technical and non-technical parts of the work, of course. To understand how the software work affects the non-software goals we need to understand both, and their interactions. Always have, always will.

August 23, 2021

August 19, 2021

August 17, 2021

August 16, 2021

Sleep on it by Graham Lee

In my experience, the best way to get a high-quality software product is to take your time, not crunch to some deadline. On one project I led, after a couple of months we realised that the feature goals (ALL of them) and the performance goals (also ALL of them; in this case, supporting ALL of the iPads back to the first-gen, single-core 2010 model) were incompatible.

We talked to the customer, worked out achievable goals with a new timeline, and set to work with a new idea based on what we had found was possible. We went from infrequently running Instruments (the Apple performance analysis tool) to daily, then every merge, then every proposed change. If a change led to a regression, find another change. Customers were disappointed that the thing came out later than they originally thought, but it was still well-received and well-reviewed.

At the last release candidate, we had two known problems. Very infrequently sound would stop playing back, which with Apple’s DTS we isolated to an OS bug. After around 12 hours of uptime (in an iPad app, remember!) there was a chance of a crash, which with DTS we found to be due to calling an animation API in a way consistent with the documentation but inconsistent with the API’s actual expectations. We managed to fix that one before going live on the App Store.

On the other hand, projects I’ve worked on that had crunch times, weekend/evening working, and increased pressure to deliver from management ended up in a worse state. They were not only still late, they tended to be even later and of lower quality, as developers under pressure to fix their bugs introduced other bugs by cutting corners. And everybody got upset and angry with everybody else, each desperate to find a reason why the late project wasn’t their fault alone.

In one death-march project, my first software project, which was planned as a three-month waterfall and took two years to deliver, we spent a lot of time shovelling bugs in at a faster rate than we were fixing them. On another, an angry product manager demanded that I fix bugs live in a handover meeting to another developer, without access to any test devices, then gave the resulting build to a customer…who of course discovered another, worse bug that had been introduced.

If you want good software, you have to sleep on it, and let everybody else sleep on it.

August 04, 2021

August 01, 2021

July 30, 2021

Reading List 280 by Bruce Lawson (@brucel)

  • Link ‘o the week: Safari isn’t protecting the web, it’s killing it – also featuring “Safari is killing the web through show-stopping bugs” eg, IndexedDB, localStorage, 100vh, Fetch requests skipping the service worker, and all your other favourites. Sort it out, NotSteve.
  • Do App Store Rules Matter? – “half of the revenue comes from just 0.5% of the users … last year, Apple made $7-8bn in commission revenue from perhaps 5m people spending over $450 a quarter on in-app purchases in games. Apple has started talking a lot about helping you use your iPhone in more healthy ways, but that doesn’t seem to extend to the app store.”
  • URL protocol handler registration for PWAs – Chromium 92 will let installed PWAs handle links that use a specific protocol for a more integrated experience.
  • Xiaomi knocks Apple off #2 spot – but will it become the next Huawei? – “Chinese manufacturer Xiaomi is now the second largest smartphone maker in the world. In last quarter’s results it had a 17% share of global smartphone shipments, ahead of Apple’s 14% and behind Samsung’s 19%.”
  • One-offs and low-expectations with Safari – “Jen Simmons solicited some open feedback about WebKit, asking what they might “need to add / change / fix / invent to help you?” … If I could ask for anything, it’d be that Apple loosen the purse strings and let Webkit be that warehouse for web innovation that it was a decade ago.” by Dave Rupert
  • Concern trolls and power grabs: Inside Big Tech’s angry, geeky, often petty war for your privacy – “Inside the World Wide Web Consortium, where the world’s top engineers battle over the future of your data.”
  • Intent to Ship: CSS Overflow scrollbar-gutter – Tab Atkins explains “the space that a scrollbar *would* take up is always reserved regardless, so it doesn’t matter whether the scrollbar pops in or not. You can also get the same space reserved on the other side for visual symmetry if you want.”
  • Homepage UX – 8 Common Pitfalls – starring: 75% of sites with carousels implement them badly (TL;DR: don’t bother); 22% hide the Search field; 35% don’t implement country and language selection correctly; 43% don’t style clickable interface elements effectively
  • WVG – Web Vector Graphics – a proof of concept spec from yer man @Hixie on a vector graphics format for Flutter (whatever that is). Interesting discussions on design decisions and trade offs.

July 21, 2021

July 19, 2021

July 13, 2021

$ docker container list -a

I noticed I had a large number of docker containers that weren’t being used. Fortunately docker makes it easy to single out a container from this list and remove all containers that were created before it.

docker ps -aq -f "before=widgets_web_run_32cb6f5d2d71" | xargs docker container rm

There are a bunch of handy filters that can be applied to docker ps. Nice for a bit of spring cleaning.
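
For example (the container name below just reuses the naming pattern from the command above, purely for illustration), the same xargs trick works with other filters, and Docker also has a built-in command for clearing out stopped containers:

docker ps -aq -f "status=exited" | xargs docker container rm
docker ps -a -f "name=widgets_web"
docker container prune

The first removes every container that has exited, the second lists only containers whose name matches the given string, and the last asks Docker to delete all stopped containers in one go.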

July 09, 2021

Reading List 279 by Bruce Lawson (@brucel)

I’ve had a number of conversations about what “we” in the “free software community” “need” to do to combat the growth in proprietary, user-hostile and customer-hostile business models like cloud user-generated content hosts, social media platforms, hosted payment platforms, videoconferencing services etc. Questions can often be summarised as “what can we do to get everyone off of Facebook groups”, “how do we get businesses to adopt Jitsi Meet instead of Teams” or “how do we convince everyone that Mattermost is better for community chat than Slack”.

My answer is “we don’t”, which is very different from “we do nothing about those things”. Scaled software platforms introduce all sorts of problems that are only caused by trying to operate the software at scale, and the reason the big Silicon Valley companies are that big is that they have to spend a load of resources just to tread water because they’ve made everything so complex for themselves.

This scale problem has two related effects: firstly the companies are hyper-concerned about “growth” because when you’ve got a billion users, your shareholders want to know where the next hundred million are coming from, not the next twenty. Secondly the companies are overly-focused on lowest common denominator solutions, because Jennifer Miggins from South Shields is a rounding error and anything that’s good enough for Scott Zablowski from Los Angeles will have to be good enough for her too, and the millions of people on the flight path between them.

Growth hacking and lowest common denominator experiences are their problems, so we should avoid making them our problems, too. We already have various tools for enabling growth: the freedom to use the software for any purpose being one of the most powerful. We can go the other way and provide deeply-specific experiences that solve a small collection of problems incredibly well for a small number of people. Then those people become super-committed fans because no other thing works as well for them as our thing, and they tell their small number of friends, who can not only use this great thing but have the freedom to study how the program works, and change it so it does their computing as they wish—or to get someone to change it for them. Thus the snowball turns into an avalanche.

Each of these massive corporations with their non-free platforms that we’re trying to displace started as a small corporation solving a small problem for a small number of people. Facebook was internal to one university. Apple sold 500 computers to a single reseller. Google was a research project for one supervisor. This is a view of the world that’s been heavily skewed by the seemingly ready access to millions of dollars in venture capital for disruptive platforms, but many endeavours don’t have access to that capital and many that do don’t succeed. It is ludicrous to try and compete on the same terms without the same resources, so throw Marc Andreessen’s rulebook away and write a different one.

We get freedom to a billion people a handful at a time. That reddit-killing distributed self-hosted tool you’re building probably won’t kill reddit, sorry. Design for that one farmer’s cooperative in Skåne, and other farmers and other cooperatives will notice. Design for that one town government in Nordrhein-Westfalen, and other towns and other governments will notice. Design for that one biochemistry research group in Brasilia, and other biochemists and other researchers will notice. Make something personal for a dozen people, because that’s the one thing those massive vendors will never do and never even understand that they could do.

July 02, 2021


All the omens were there: a comet in the sky; birds fell silent; dogs howled; unquiet spirits walked the forests and the dark, empty places. Yes, it was a meeting request from Marketing to discuss a new product page with animations that are triggered on scroll.

Much as a priest grasps his crucifix when facing a vampire, I immediately reached for Intersection Observer to avoid the browser grinding to a halt when watching to see if something is scrolled into view. And, like an exorcist sprinkling holy water on a demon, I also cleansed the code with a prefers-reduced-motion media query.

prefers-reduced-motion looks at the user’s operating system settings to see if they wish to suppress animations (for performance reasons, or because animations make them nauseous–perhaps because they have a vestibular disorder). A responsible developer will check to see if the user has indicated that they prefer reduced motion and use or avoid animations accordingly.
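
By way of illustration (the class names here are invented for the example, not taken from any real product page), a minimal sketch of that combination checks the media query with matchMedia and only wires up the IntersectionObserver-driven animation when the user has not asked for reduced motion:

// Only trigger scroll animations if the user hasn't asked for reduced motion
const prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)').matches;

if (!prefersReducedMotion) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.classList.add('animate-in'); // hypothetical class that starts the CSS animation
        observer.unobserve(entry.target);         // only animate each element once
      }
    }
  });
  document.querySelectorAll('.product-feature').forEach((el) => observer.observe(el));
}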

Content images can be made to switch between animated or static using the <picture> element (so brilliantly brought to you by a team of gorgeous hunks). Bradbury Frosticle gives this example:

<picture>
  <source srcset="no-motion.jpg" media="(prefers-reduced-motion: reduce)">
  <img srcset="animated.gif" alt="brick wall"/>
</picture>

Now, obviously you are a responsible developer. You use alt text (and don’t call it “alt tags”). But unfortunately, not everyone is like you. There are also 639 squillion websites out there that were made before prefers-reduced-motion was a thing, and only 3 of them are actively maintained.

It seems to me that browsers could do more to protect their users. Browsers are, after all, user agents that protect the visitor from pop-ups, malicious sites, autoplaying videos and other denizens of the underworld. They should also protect users against nausea and migraines, regardless of whether the developer thought to (or had the tools available to).

So, I propose that browsers should never respect scroll-behavior: smooth; if a user prefers reduced motion, regardless of whether a developer has set the media query.

Animated GIFs (and their animated counterparts in more modern image formats) are somewhat more complex. Since 2018, the Chromium-based Vivaldi browser has allowed users to deactivate animated GIFs right from the Status Bar, but this isn’t useful in all circumstances.

My initial thought was that the browser should only show the first five seconds because WCAG Success Criterion 2.2.2 says

For any moving, blinking or scrolling information that (1) starts automatically, (2) lasts more than five seconds, and (3) is presented in parallel with other content, there is a mechanism for the user to pause, stop, or hide it

But this wouldn’t allow a user to choose to see more than 5 seconds. The essential aspect of this success criterion is, I think, control. Melanie Richards, who does accessibility on the Microsoft Edge browser, noted

IE had a neat keyboard shortcut that would pause any GIFs on command. I’d be curious to get your take on the relative usefulness of a browser setting such as you described, and/or a more contextual command.

I’m doubtful about discoverability of keyboard commands, but Travis Leithead (also Microsoft) suggested

We can use ML + your mic (with permission of course) to detect your frustration at forever looping Gifs and stop just the right one in real time. Or maybe just a play/pause control would help… +1 customer request!

And yes, a play/pause mechanism gives maximum control to the user. This would also solve the problem of looping GIFs in marketing emails, which are commonly displayed in browsers. There is an added complexity of what to do if the animated image is a link, but I’m not paid a massive salary to make web browsers, so I leave that to mightier brains than I possess.

In the meantime, until all browser vendors do my bidding (it only took 5 years for the <picture> element), please consider adding this to your CSS (by Thomas Steiner)


@media (prefers-reduced-motion: reduce) {
  *, ::before, ::after {
    animation-delay: -1ms !important;
    animation-duration: 1ms !important;
    animation-iteration-count: 1 !important;
    background-attachment: initial !important;
    scroll-behavior: auto !important;
    transition-duration: 0s !important;
    transition-delay: 0s !important;
  }
}

I am not an MBA, but it seems to me that making your users puke isn’t the best beginning to a successful business relationship.

July 01, 2021

June 25, 2021

June 24, 2021

June 23, 2021

June 21, 2021

June 18, 2021

Reading List 278 by Bruce Lawson (@brucel)

June 12, 2021

The latest in a series of posts on getting Unity Test Framework tests running in jenkins in the cloud, because this seems entirely undocumented.

To recap, we had got as far as running our playmode tests on the cloud machine, but we had to use the -batchmode -nographics command line parameters. If we don’t, we get tons of errors about non-interactive window sessions. But if we do, we can no longer rely on animation, physics, or some coroutines during our tests! This limits us to basic lifecycle and validation tests, which isn’t great.

We need our cloud machine to pretend there’s a monitor attached, so unity can run its renderer and physics.

First, we’re going to need to make sure we have enough grunt in our cloud machine to run the game at a solid framerate. We use EC2, with the g4dn.xlarge machine (which has a decent GPU) and the https://aws.amazon.com/marketplace/pp/prodview-xrrke4dwueqv6?ref=cns_srchrow#pdp-overview AMI, which pre-installs the right GPU drivers.

To do this, we’re going to set up a non-admin windows account on our cloud machine (because that’s just good practice), get it to auto-login on boot and ask it to connect to jenkins under this account. Read on for more details.

First, set up your new windows account by remoting into the admin account of the cloud machine:

  • type “add user” in the windows start menu to get started adding your user. I call mine simply “jenkins”. Remember to save the password somewhere safe!
  • We need to be able to remote into the new user, so go to System Properties, and on the Remote tab click Select Users, and add your jenkins user
  • if jenkins has already run on this machine, you’ll want to give the new jenkins user rights to modify the c:\Workspace folder
  • You’ll also want to go into the Services app, find the jenkins service, and disable it.
  • Next, download autologon https://docs.microsoft.com/en-us/sysinternals/downloads/autologon, uncompress it somewhere sensible, then run it.
    • enter your new jenkins account details
    • click Enable
    • close the dialog

Now, log out of the admin account, and you should be able to remote desktop into the new account using the credentials you saved.

Now we need to make this new account register the computer with your jenkins server once it comes online. More details here https://wiki.jenkins.io/display/JENKINS/Distributed+builds#Distributedbuilds-Agenttomasterconnections, and it may be a bit different for you depending on setup, but here’s what we do:

  • From the remote desktop of the jenkins user account, open a browser and log into your jenkins server
  • Go to the node page for your new machine, and configure the Launch Type to be Launch Agent By Connecting It To The Master
  • Switch to the node’s status tab and you should have an orange button to download the agent jnlp file
  • Put this file in the %userprofile%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup folder
  • Change the Launch Type back to whatever you need (we use the Slave Setup plugin, despite the icky name https://plugins.jenkins.io/slave-setup/) — it doesn’t need to stay as Launch Agent By Connecting It To The Master.

We’re done. Log out of remote desktop and reboot the machine. You should see it come alive in the jenkins server after a few minutes. If you remove the -batchmode and -nographics options from your unity commands, you should see the tests start to run with full physics and animation!
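
For reference, a sketch of the kind of Unity invocation the Jenkins job ends up running is below (the editor version, project path and results file are placeholders for whatever your setup uses); the point is that -runTests now runs without -batchmode or -nographics:

"C:\Program Files\Unity\Hub\Editor\2020.3.14f1\Editor\Unity.exe" ^
  -projectPath C:\Workspace\MyGame ^
  -runTests -testPlatform PlayMode ^
  -testResults C:\Workspace\MyGame\results.xml ^
  -logFile C:\Workspace\MyGame\unity.log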

June 11, 2021

June 10, 2021

I’ve talked before about the non-team team dynamic that is “one person per task”. Where the management and engineers collude to push the organisation beyond a sustainable pace by making sure that at all times, each individual is kept busy and collaboration is minimised.

I talked about the deleterious effect on collaboration, particularly that code review becomes a burden resolved with a quick “LGTM”. People quickly develop specialisations and fiefdoms: oh there’s a CUDA story, better give it to Yevgeny as he worked on the last CUDA story.

The organisation quickly adapts to this balkanisation and optimises for it. Is there a CUDA story in the next sprint? We need something for Yevgeny to do. This is Conway’s Corollary: the most efficient way to develop software is when the structure matches the org chart.

Basically all forms of collaboration become a slog when there’s no “us” in team. Unfortunately, the contradiction at the heart of this 19th century approach to division of labour is that, when applied to knowledge work, the value to each participant of being in meetings is minimised, while the necessity for each participant to be in a meeting is maximised.

The value is minimised because each person has their personal task within their personal fiefdom to work on. Attending a meeting takes away from the individual productivity that the process is optimising for. Additionally, it increases the likelihood that the meeting content will be mostly irrelevant: why should I want to discuss backend work when Sophie takes all the backend tasks?

The meetings are necessary, though, because nobody owns the whole widget. No-one can see the impact of any workflow change, or dependency adoption, or clean up task, because nobody understands more than a 1/N part of the whole system. Every little thing needs to be run by Sophie and Yevgeny and all the others because no-one is in a position to make a decision without their input.

This might sound radically democratic, and not the sort of thing you’d expect from a business: nobody can make a decision without consulting all the workers! Power to the people! In fact it’s just entirely progress-destroying: nobody can make a decision at all until they’ve got every single person on board, and that’s so much work that a lot of decisions will be defaulted. Nothing changes.

And there’s no way within this paradigm to avoid that. Have fewer meetings, and each individual is happier because they get to maximise progress time spent on their individual tasks. But the work will eventually grind to a halt, as the architecture reflects N different opinions, and the N! different interfaces (which have each fallen into the unowned gaps between the individual contributors) become harder to work with.

Have more meetings, and people will grumble that there are too many meetings. And that Piotr is trying to land-grab from other fiefdoms by pushing for decisions that cross into Sophie’s domain.

The answer is to reconstitute the team – preferably along self-organising principles – into a cybernetic organism that makes use of its constituent individuals as they can best be applied, but in pursuit of the team’s goals, not N individual goals. This means radical democracy for some issues, (agreed) tyranny for others, and collective ignorance of yet others.

It means in some cases giving anyone the autonomy to make some choices, but giving someone with more expertise the autonomy to override those choices. In some cases all decisions get made locally; in others, they must be run past an agreed arbiter. In some cases it means having one task per team, or even no tasks per team, if the team needs to do something more important before it can take on another task.

June 04, 2021

…and this time it’s online! Back in 2019, we toured ‘A Moment of Madness’ to a number of UK festivals. AMoM is an immersive experience with live actors and Escape Game style puzzles where the players are on a stake-out in a multi-story car-park tracking politician Michael Makerson (just another politician with a closet full […]

Random Expo.io tips by Bruce Lawson (@brucel)

I’m doing some accessibility testing on a React Native codebase that uses Expo during development. If you’ve done something seriously wrong in a previous life and karma has condemned you to using React Native rather than making a Progressive Web App, Expo is jolly useful. It gives you a QR code that you can scan with Android or iOS to ‘install’ the app on your device and you get live reload of any changes. It’s like sinking in a quagmire of shit but someone nice is bringing you beer and playing Abba while it happens. (Beats me why that isn’t their corporate strapline.)

Anyway, I struggled a bit to set it up so here are some random tips that I learned the hard way:

  • If your terminal yells “Error: EMFILE: too many open files, watch at FSEvent.FSWatcher._handle.onchange (internal/fs/watchers.js:178:28) error Command failed with exit code 1” when you start Expo, stop it, do brew install watchman and re-start Expo. Why? No idea. Someone from StackOverflow told me. Most npm stuff is voodoo magic–just install all the things, hope none of them were made in Moscow by Vladimir Evilovich of KGB Enterprises, ignore all the deprecation warnings and cross your fingers.
  • If you want to open your app in an iOS simulator, you need Xcode and you need to install the Xcode command line tools, or it’ll just hang.
  • Scrolling in iOS simulator is weird. Press the trackpad with one hand and scroll with other hand. Or enable three finger drag and have one hand free for coffee, smoking or whatever other filthy habits you’ve developed while working from home.
  • If you don’t want it, you can turn off the debugging menu overlay.
  • If you like CSS, under no circumstances view source of the app running in the browser. It is, however, full of lots of ARIA nourishment thanks to React Native for Web.

Who knows? One day, Apple may decide not to hamstring PWAs on iOS and we can all use the web to run on any device and any browser, just as Sir Uncle Timbo intended.

June 03, 2021

AMA by Graham Lee

It was requested on twitter that I start answering community questions on the podcast. I’ve got a few to get the ball rolling, but what would you like to ask? Comment here, or reach me wherever you know I hang out.

June 02, 2021

June 01, 2021

Having looked at hopefully modern views on Object-Oriented analysis and design, it’s time to look at what happened to Object-Oriented Programming. This is an opinionated, ideologically-motivated history, that in no way reflects reality: a real history of OOP would require time and skills that I lack, and would undoubtedly be almost as inaccurate. But in telling this version we get to learn more about the subject, and almost certainly more about the author too.
They always say that history is written by the victors, and it’s hard to argue that OOP was anything other than victorious. When people explain about how they prefer to write functional programs because it helps them to “reason about” code, all the reasoning that was done about the code on which they depend was done in terms of objects. The ramda or lodash or Elm-using functional programmer writes functions in JavaScript on an engine written in C++. Swift Combine uses a functional pipeline to glue UIKit objects to Core Data objects, all in an Objective-C IDE and – again – a C++ compiler.

Maybe there’s something in that. Maybe the functional thing works well if you’re transforming data from one system to another, and our current phase of VC-backed disruption needs that. Perhaps we’re at the expansion phase, applying the existing systems to broader domains, and a later consolidation or contraction phase will demand yet another paradigm.
Anyway, Object-Oriented Programming famously (and incorrectly, remember) grew out of the first phase of functional programming: the one that arose when it wasn’t even clear whether computers existed, or if they did whether they could be made fast enough or complex enough to evaluate a function. Smalltalk may have borrowed a few ideas from Simula, but it spoke with a distinct Lisp.

We’ll fast forward through that interesting bit of history when all the research happened, to that boring bit where all the development happened. The Xerox team famously diluted the purity of their vision in the hot-air balloon issue of Byte magazine, and a whole complex of consultants, trainers and methodologists jumped in to turn a system learnable by children into one that couldn’t be mastered by professional programmers.
Actually, that’s not fair: the system already couldn’t be mastered by professional programmers, a breed famous for assuming that they are correct and that any evidence to the contrary is flawed. It was designed to be learnable by children, not by those who think they already know better.

The result was the ramda/lodash/Elm/Clojure/F# of OOP: tools that let you tell your local user group that you’ve adopted this whole Objects thing without, y’know, having to do that. Languages called Object-*, Object*, Objective-*, O* added keywords like “class” to existing programming languages so you could carry on writing software as you already had been, but maybe change the word you used to declare modules.

Eventually, the jig was up, and people cottoned on to the observation that Object-Intercal is just Intercal no matter how you spell come.from(). So the next step was to change the naming scheme to make it a little more opaque. C++ is just C with Classes. So is Java, so is C#. Visual BASIC.net is little better.

Meanwhile, some people who had been using Smalltalk and getting on well with fast development of prototypes that they could edit while running into a deployable system had an idea. Why not tell everyone else how great it is to develop prototypes fast and edit them while running into the deployable system? The full story of that will have to wait for the Imagined History of Agile, but the TL;DR is that whatever they said, everybody heard “carry on doing what we’re already doing but plus Jira”.

Well, that’s what they heard about the practices. What they heard about the principles was “principles? We don’t need no stinking principles, that sounds like Big Thinking Up Front urgh” so decided to stop thinking about anything as long as the next two weeks of work would be paid for. Yes, iterative, incremental programming introduced the idea of a project the same length as the gap between pay checks, thus paving the way for fire and rehire.

And thus we arrive at the ideological void of today’s computering. A phase in which what you don’t do is more important than what you do: #NoEstimates, #NoSQL, #NoProject, #NoManagers…#NoAdvances.

Something will fill that void. It won’t be soon. Functional programming is a loose collection of academic ideas with negation principles – #NoSideEffects and #NoMutableState – but doesn’t encourage anything new. As I said earlier, it may be that we don’t need anything new at the moment: there’s enough money in applying the old things to new businesses and funnelling more money to the cloud providers.

But presumably that will end soon. The promised and perpetually interrupted parallel computing paradigm we were guaranteed upon the death of Moore’s Law in the (1990s, 2000s, 2010s) will eventually meet the observation that every object larger than a grain of rice has a decent ARM CPU in, leading to a revolution in distributed consensus computing. Watch out for the blockchain folks saying that means they were right all along: in a very specific way, they were.

Or maybe the impressive capability but limited applicability of AI will meet the limited capability but impressive applicability of intentional programming in some hybrid paradigm. Or maybe if we wait long enough, quantum computing will bring both its new ideas and some reason to use them.

But that’s the imagined future of programming, and we’re not in that article yet.

May 31, 2021

We left off in the last post with an idea of how Object-Oriented Analysis works: if you’re thinking that it used around a thousand words to depict the idea “turn nouns from the problem domain into objects and verbs into methods” then you’re right, and the only reason to go into so much detail is that the idea still seems to confuse people to this day.

Similarly, Object-Oriented Design – refining the overall problem description found during analysis into a network of objects that can simulate the problem and provide solutions – can be glibly summed up in a single sentence. Treat any object uncovered in the analysis as a whole, standalone computer program (this is called encapsulation), and do the object-oriented analysis stuff again at this level of abstraction.

You could treat it as turtles all the way down, but just as Physics becomes quantised once you zoom in far enough, the things you need an object to do become small handfuls of machine instructions and there’s no point going any further. Once again, the simulation has become the deployment: this time because the small standalone computer program you’re pretending is the heart of any object is a small standalone computer program.

I mean, that’s literally it. Just as the behaviour of the overall system could be specified in a script and used to test the implementation, so the behaviour of these units can be specified in a script and used to test the implementation. Indeed some people only do this at the unit level, even though the unit level is identical to the levels above and below it.

Though up and down are not the only directions we can move in, and it sometimes makes more sense to think about in and out. Given our idea of a User who can put things in a Cart, we might ask questions like “how does a User remember what they’ve bought in the past” to move in towards a Ledger or PurchaseHistory, from where we move out (of our problem) into the realm of data storage and retrieval.

Or we can move out directly from the User, asking “how do we show our real User out there in the world the activities of this simulated User” and again leave our simulation behind to enter the realm of the user interface or network API. In each case, we find a need to adapt from our simulation of our problem to someone else’s (probably not ours, in 2021) simulation of what any problem in data storage or user interfaces is; this idea sits behind Cockburn’s Ports and Adapters.
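
To make that concrete with a toy sketch (every name here is invented for illustration, and it is written in JavaScript only because it has to be written in something): the simulated User talks to a PurchaseHistory in the language of the problem, and the move out into the realm of data storage happens entirely inside whatever adapter we plug in behind it.

// The simulation: objects named after things in the problem.
class User {
  constructor(purchaseHistory) {
    this.purchaseHistory = purchaseHistory; // a port: "something that remembers purchases"
  }
  buy(product) {
    this.purchaseHistory.record(product);
  }
  previousPurchases() {
    return this.purchaseHistory.all();
  }
}

// An adapter: where we leave the simulation and enter the realm of data storage.
// Swap this for a database-backed version without the User ever noticing.
class InMemoryPurchaseHistory {
  constructor() { this.items = []; }
  record(product) { this.items.push(product); }
  all() { return this.items.slice(); }
}

const user = new User(new InMemoryPurchaseHistory());
user.buy({ name: 'Paperback', price: 9.99 });
console.log(user.previousPurchases());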

Moving in either direction, we are likely to encounter problems that have been solved before. The trick is knowing that they have been solved before, which means being able to identify the commonalities between what we’re trying to achieve and previous solutions, which may be solutions to entirely different problems but which nonetheless have a common shape.

The trick object-oriented designers came up with to address this discovery is the Pattern Language (OK, it was architect Christopher Alexander’s idea: great artists steal and all that), in which shared solutions are given common names and descriptions so that you can explore whether your unique problem can be cast in terms of this shared description. In practice, the idea of a pattern language has fared incredibly well in software development: whenever someone says “we use container orchestration” or “my user interface is a functional reactive program” or “we deploy microservices brokered by a message queue” they are relying on the success of the pattern language idea introduced by object-oriented designers.

Meanwhile, in theory, the concept of a pattern language failed: ask a random programmer about design patterns and they will list Singleton and maybe a couple of other 1990s-era implementation patterns before telling you that they don’t use patterns.

And that, pretty much, is all they wrote. You can ask your questions about what each object needs to do spatially (“who else does this object need to talk to in order to answer this question?”), temporally (“what will this object do when it has received this message?”), or physically (“what executable running on what computer will this object be stored in?”). But really, we’re done, at least for OOD.

Because the remaining thing to do (which isn’t to say the last thing in a sequence, just the last thing we have yet to talk about) is to build the methods that respond to those messages and pass those tests, and that finally is Object-Oriented Programming. If we start with OOP then we lose out, because we try to build software without an idea of what our software is trying to be. If we finish with OOP then we lose out, because we designed our software without using knowledge of what that software would turn into.

May 30, 2021

Over the years, including in the book Object-Oriented Programming the Easy Way, I’ve made a lot of my assertion that one reason people are turned off from Object-Oriented Programming is that they weren’t doing Object-Oriented Design. Smalltalk was conceived as a system for letting children explore computers by writing simulations of other problems, and if you haven’t got the bit where you’re creating simulations of other problems, then you’re just writing with no goal in mind.

Taken on its own, Object-Oriented Programming is a bit of a weird approach to modularity where a package’s code is accessed through references to that package’s record structures. You’ve got polymorphism, but no guidance as to what any of the individual morphs are supposed to be, let alone a strategy for combining them effectively. You’ve got inheritance, but inheritance of what, by what? If you don’t know what the modules are supposed to look like, then deciding which of them is a submodule of other modules is definitely tricky. Similarly with encapsulation: knowing that you treat the “inside” and “outside” of modules differently doesn’t help when you don’t know where that boundary ought to go.

So let’s put OOP back where it belongs: embedded in an object-oriented process for simulating a problem. The process will produce as its output an executable model of the problem that produces solutions desired by…

Well, desired by whom? The answer that Extreme Programming and Agile Software Development, approaches to thinking about how to create software that were born out of a time when OOP was ascendant, would say “by the customer”: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, they say.

Yes, if we’re working for money then we have to satisfy the customer. They’re the ones with the money in the first place, and if they aren’t then we have bigger problems than whether our software is satisfactory. But they aren’t the only people we have to satisfy, and Object-Oriented Design helps us to understand that.

If you’ve ever seen someone who was fully bought into the Unified Modelling Language (this is true for other ways of capturing the results of Object-Oriented Analysis but let’s stick to thinking about the UML for now), then you’ll have seen a Use Case diagram. This shows you “the system” as an abstract square, with stick figures for actors connected to “the system” via arrows annotated with use cases – descriptions of what the people are trying to do with the system.

We’ve already moved past the idea that the software and the customer are the only two things in the world, indeed the customer may not be represented in this use case diagram at all! What would that mean? That the customer is paying for the software so that they can charge somebody else for use of the software: that satisfying the customer is an indirect outcome, achieved by the action of satisfying the identified actors.

We’ve also got a great idea of what the scope will be. We know who the actors are, and what they’re going to try to do; therefore we know what information they bring to the system and what sort of information they will want to get out. We also see that some of the actors are other computer systems, and we get an idea of what we have to build versus what we can outsource to others.

In principle, it doesn’t really matter how this information is gathered: the fashion used to be for prolix use case documents but these days, shorter user stories (designed to act as placeholders for a conversation where the details can be worked out) are preferred. In practice the latter is better, because it takes into account that the act of doing some work changes the understanding of the work to be done. The later decisions can be made, the more you know at the point of deciding.

On the other hand, the benefit of that UML approach where it’s all stuck on one diagram is that it makes commonalities and patterns clearer. It’s too easy to take six user stories, write them on six GitHub issues, then get six engineers to write six implementations, and hope that some kind of convergent design will assert itself during code review: or worse, that you’ll decide what the common design should have been in retrospective, and add an “engineering story” to the backlog to be deferred forever more.

The system, at this level of abstraction, is a black box. It’s the opaque “context” of the context diagram at the highest level of the C4 model. But that needn’t stop us thinking about how to code it! Indeed, Behaviour-Driven Development has us do exactly that. Once we’ve come to agreement over what some user story or use case means, we can encapsulate (there’s that word again) our understanding in executable form.

And now, the system-as-simulation nature of OOP finally becomes important. Because we can use those specifications-as-scripts to talk about what we need to do with the customer (“so you’re saying that given a new customer, when they put a product in their cart, they are shown a subtotal that is equal to the cost of that product?”), refining the understanding by demonstrating that understanding in executable form and inviting them to use a simulation of the final product. But because the output is an executable program that does what they want, that simulation can be the final product itself.

A common step when implementing Behaviour-Driven Development is to introduce a translation layer between “the specs” and “the production code”. So “a new user” turns into User.new.save!, and someone has to come along and write those methods (or at least inherit from ActiveRecord). Or worse, “a new user” turns into PUT /api/v0/internal/users, and someone has to both implement that and the system that backs it.

This translation step isn’t strictly necessary. “A new user” can be a statement in your software implementation; your specs can be both a simulation of what the system does and the system itself, and you save yourself a lot of typing and a potentially faulty translation.
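
A rough sketch of what that can look like (the Customer and Product names and the ./shop module are invented for the example, and the spec syntax assumes a Jest-style runner):

// The spec exercises the domain objects directly: no HTTP layer, no ORM in sight.
// The simulation being specified is also the system being shipped.
const { Customer, Product } = require('./shop'); // hypothetical module holding the domain objects

test('a new customer who puts a product in their cart sees that product price as the subtotal', () => {
  const customer = new Customer();
  const product = new Product({ name: 'Paperback', price: 9.99 });

  customer.cart.add(product);

  expect(customer.cart.subtotal()).toBe(9.99);
});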

There’s still a lot to Object-Oriented Analysis, but it roughly follows the well-worn path of Domain-Driven Design. Where we said “a new user”, everybody on the team should agree on what a “user” is, and there should be a concept (i.e. class) in the software domain that encapsulates (that word again!) this shared understanding and models it in executable form. Where we said the user “puts a product in their cart”, we all understand that products and carts are things both in the problem and in the software, and that a user can do a thing called “putting” which involves products, and carts, in a particular way.

If it’s not clear what all those things are or how they go together, we may wish to roleplay the objects in the program (which, because the program is a simulation of the things in the problem, means roleplaying those things). Someone is a user, someone is a product, someone is a cart. What does it mean for the user to add a product to their cart? What is “their” cart? How do these things communicate, and how are they changed by the experience?

We’re starting to peer inside the black box of the “system”, so in a future post we’ll take a proper look at Object-Oriented Design.
